Search results

1 – 10 of over 1000
Article
Publication date: 12 April 2024

Tongzheng Pu, Chongxing Huang, Haimo Zhang, Jingjing Yang and Ming Huang

Abstract

Purpose

Forecasting population movement trends is crucial for implementing effective policies to regulate labor force growth and understand demographic changes. Combining migration theory expertise and neural network technology can bring a fresh perspective to international migration forecasting research.

Design/methodology/approach

This study proposes a conditional generative adversarial network model incorporating migration knowledge (MK-CGAN). By using migration knowledge to design the parameters, MK-CGAN can effectively address the limited-data problem, thereby enhancing the accuracy of migration forecasts.

Findings

The model was tested by forecasting migration flows between different countries and showed good generalizability and validity. The results are robust, as the proposed solution achieves lower mean absolute error, mean squared error, root mean square error and mean absolute percentage error, and a higher R2 value (reaching 0.9855), than long short-term memory (LSTM), gated recurrent unit, generative adversarial network (GAN) and traditional gravity models.
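For reference, the error metrics named in these findings can be computed as below. This is a generic sketch of the standard definitions, not code from the study.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute the forecast-comparison metrics: MAE, MSE, RMSE, MAPE and R^2."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    # MAPE assumes no true value is zero
    mape = sum(abs(e / t) for e, t in zip(errors, y_true)) / n * 100
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - sum(e * e for e in errors) / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "R2": r2}
```

Lower MAE/MSE/RMSE/MAPE and an R2 closer to 1 indicate a better forecast, which is the sense in which MK-CGAN is reported to outperform the baselines.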

Originality/value

This study is significant because it demonstrates a highly effective technique for predicting international migration using conditional GANs. By incorporating migration knowledge into our models, we can achieve higher prediction accuracy, gaining valuable insights into the differences between various model characteristics. We used SHapley Additive exPlanations (SHAP) to enhance our understanding of these differences and provide clear and concise explanations for our model predictions. The results demonstrated the theoretical significance and practical value of the MK-CGAN model in predicting international migration.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 12 April 2024

Youwei Li and Jian Qu

Abstract

Purpose

The purpose of this research is to achieve multi-task autonomous driving by adjusting the network architecture of the model. However, after achieving multi-task autonomous driving, the authors found that the trained neural network model performs poorly in untrained scenarios. The authors therefore proposed improving the model's transfer efficiency for new scenarios through transfer learning.

Design/methodology/approach

First, the authors achieved multi-task autonomous driving by training a model combining a convolutional neural network and differently structured long short-term memory (LSTM) layers. Second, the authors achieved fast transfer of neural network models to new scenarios by cross-model transfer learning. Finally, the authors combined data collection and data labeling to improve the efficiency of deep learning. Furthermore, the authors verified that the model has good robustness through light and shadow tests.

Findings

This research achieved road tracking, real-time acceleration and deceleration, obstacle avoidance and left/right sign recognition. The authors' proposed model (UniBiCLSTM) outperforms existing models tested with model cars in terms of autonomous driving performance. Furthermore, the CMTL-UniBiCL-RL model trained through cross-model transfer learning improves the efficiency of model adaptation to new scenarios. This research also proposed an automatic data annotation method, which can save a quarter of the time required for deep learning.

Originality/value

This research provided novel solutions for achieving multi-task autonomous driving and for transferring neural network models to new scenarios. The experiments were conducted with a single camera, an embedded chip and a scale model car, which is expected to simplify the hardware required for autonomous driving.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 14 December 2023

Huaxiang Song, Chai Wei and Zhou Yong

Abstract

Purpose

The paper aims to tackle the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of clustered ground objects and noisy backgrounds. Recent research typically leverages larger-volume models to achieve advanced performance. However, remote sensing operating environments commonly cannot provide unconstrained computational and storage resources; they require lightweight algorithms with exceptional generalization capabilities.

Design/methodology/approach

This study introduces an efficient knowledge distillation (KD) method to build a lightweight yet precise convolutional neural network (CNN) classifier. This method also aims to substantially decrease the training time expenses commonly linked with traditional KD techniques. This approach entails extensive alterations to both the model training framework and the distillation process, each tailored to the unique characteristics of RSIs. In particular, this study establishes a robust ensemble teacher by independently training two CNN models using a customized, efficient training algorithm. Following this, this study modifies a KD loss function to mitigate the suppression of non-target category predictions, which are essential for capturing the inter- and intra-similarity of RSIs.
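The loss modification described above is specific to the paper, but the underlying logit-based distillation idea can be sketched generically. The snippet below is an illustrative Hinton-style KD loss, not the authors' exact formulation: temperature-scaled softmax flattens the teacher's distribution so that non-target category predictions, which carry the inter-class similarity information mentioned above, are not suppressed.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature flattens the
    distribution, preserving information in non-target categories."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between teacher and student soft distributions
    (generic logit-based distillation), scaled by T^2 as is conventional."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    ce = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
    return temperature ** 2 * ce
```

The loss is minimized when the student's soft distribution matches the teacher's, including the relative ordering of the non-target classes.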

Findings

This study validated the student model, termed KD-enhanced network (KDE-Net), obtained through the KD process on three benchmark RSI data sets. The KDE-Net surpasses 42 other state-of-the-art methods in the literature published from 2020 to 2023. Compared to the top-ranked method’s performance on the challenging NWPU45 data set, KDE-Net demonstrated a noticeable 0.4% increase in overall accuracy with a significant 88% reduction in parameters. Meanwhile, this study’s reformed KD framework significantly enhances the knowledge transfer speed by at least three times.

Originality/value

This study illustrates that the logit-based KD technique can effectively develop lightweight CNN classifiers for RSI classification without substantial sacrifices in computation and storage costs. Compared to neural architecture search or other methods aiming to provide lightweight solutions, this study’s KDE-Net, based on the inherent characteristics of RSIs, is currently more efficient in constructing accurate yet lightweight classifiers for RSI classification.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 April 2022

Pierre Jouan and Pierre Hallot

Abstract

Purpose

The purpose of this paper is to address the challenging issue of developing a quantitative approach for the representation of cultural significance data in heritage information systems (HIS). The authors propose to provide experts in the field with a dedicated framework to structure and integrate targeted data about historical objects' significance in such environments.

Design/methodology/approach

This research seeks to identify key indicators that better inform decision-makers about cultural significance. The identified concepts are formalized in a data structure through conceptual data modeling, taking advantage of the unified modeling language (UML). The design science research (DSR) method is implemented to facilitate the development of the data model.

Findings

This paper proposes a practical solution for the formalization of data related to the significance of objects in HIS. The authors end up with a data model which enables multiple knowledge representations through data analysis and information retrieval.

Originality/value

The framework proposed in this article supports a more sustainable vision of heritage preservation as the framework enhances the involvement of all stakeholders in the conservation and management of historical sites. The data model supports explicit communications of the significance of historical objects and strengthens the synergy between the stakeholders involved in different phases of the conservation process.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. 14 no. 3
Type: Research Article
ISSN: 2044-1266

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using the approach of machine and deep learning.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
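As a toy illustration of a classical bag-of-words text classifier, a minimal Naive Bayes model might look like the sketch below. Note the hedges: this is a stand-in simpler technique, not one of the models listed above, and the training examples in the usage test are invented, not drawn from the study's 1.3 million-comment data set.

```python
from collections import Counter
import math

def train_nb(docs, labels):
    """Train a bag-of-words multinomial Naive Bayes classifier
    with Laplace smoothing (a simple stand-in for the ML baselines)."""
    counts, priors, vocab = {}, Counter(labels), set()
    for doc, y in zip(docs, labels):
        words = doc.lower().split()
        counts.setdefault(y, Counter()).update(words)
        vocab.update(words)
    total = sum(priors.values())
    return {"counts": counts,
            "priors": {y: n / total for y, n in priors.items()},
            "vocab": vocab}

def predict_nb(model, doc):
    """Return the label with the highest smoothed log-posterior."""
    V = len(model["vocab"])
    best, best_lp = None, -math.inf
    for y, c in model["counts"].items():
        n = sum(c.values())
        lp = math.log(model["priors"][y])
        for w in doc.lower().split():
            lp += math.log((c[w] + 1) / (n + V))  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

Real sarcasm detection needs far richer features (context, inflection cues), which is why the deep models in the findings below fare better than word-count baselines.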

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 7 December 2022

Peyman Jafary, Davood Shojaei, Abbas Rajabifard and Tuan Ngo

Abstract

Purpose

Building information modeling (BIM) is a striking development in the architecture, engineering and construction (AEC) industry, which provides in-depth information on different stages of the building lifecycle. Real estate valuation, as a fully interconnected field with the AEC industry, can benefit from 3D technical achievements in BIM technologies. Some studies have attempted to use BIM for real estate valuation procedures. However, there is still a limited understanding of appropriate mechanisms to utilize BIM for valuation purposes and the consequent impact that BIM can have on decreasing the existing uncertainties in the valuation methods. Therefore, the paper aims to analyze the literature on BIM for real estate valuation practices.

Design/methodology/approach

This paper presents a systematic review to analyze existing utilizations of BIM for real estate valuation practices, discovers the challenges, limitations and gaps of the current applications and presents potential domains for future investigations. Research was conducted on the Web of Science, Scopus and Google Scholar databases to find relevant references that could contribute to the study. A total of 52 publications including journal papers, conference papers and proceedings, book chapters and PhD and master's theses were identified and thoroughly reviewed. There was no limitation on the starting date of research, but the end date was May 2022.

Findings

Four domains of application have been identified: (1) developing machine learning-based valuation models using the variables that could directly be captured through BIM and industry foundation classes (IFC) data instances of building objects and their attributes; (2) evaluating the capacity of 3D factors extractable from BIM and 3D GIS in increasing the accuracy of existing valuation models; (3) employing BIM for accurate estimation of components of cost approach-based valuation practices; and (4) extraction of useful visual features for real estate valuation from BIM representations instead of 2D images through deep learning and computer vision.

Originality/value

This paper contributes to research efforts on utilization of 3D modeling in real estate valuation practices. In this regard, this paper presents a broad overview of the current applications of BIM for valuation procedures and provides potential ways forward for future investigations.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 4
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Abstract

Purpose

The stock market has always been lucrative for investors but, because of its speculative nature, price movement is difficult to predict. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study the structured data of a company, while technical analysis helps to study price trends; the increasing and easy availability of unstructured data has also made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment timely and effectively.

Design/methodology/approach

The research includes text mining and then creating various models for classification. The accuracy of these models is checked using a confusion matrix.
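Checking accuracy from a confusion matrix amounts to dividing the correctly classified counts (the diagonal) by the total. The sketch below is generic; the example matrix in the test is hypothetical, chosen only to make the arithmetic visible, and is not published by the study.

```python
def accuracy_from_confusion(matrix):
    """Accuracy = trace / total for a square confusion matrix whose
    rows are actual classes and columns are predicted classes."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total
```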

Findings

Out of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy of 68%. This model can now be used to analyse tweets, news and various other unstructured data to predict price movement.

Originality/value

This study will help investors classify a news or a tweet into “positive”, “negative” or “neutral” quickly and determine the stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

Article
Publication date: 19 April 2024

Jitendra Gaur, Kumkum Bharti and Rahul Bajaj

Abstract

Purpose

Allocation of the marketing budget has become increasingly challenging due to the diverse channel exposure to customers. This study aims to enhance global marketing knowledge by introducing an ensemble attribution model to optimize marketing budget allocation for online marketing channels. As empirical research, this study demonstrates the supremacy of the ensemble model over standalone models.

Design/methodology/approach

The transactional data set for car insurance from an Indian insurance aggregator is used in this empirical study. The data set contains information from more than three million platform visitors. A robust ensemble model is created by combining results from two probabilistic models, namely, the Markov chain model and the Shapley value. These results are compared and validated with heuristic models. Also, the performances of online marketing channels and attribution models are evaluated based on the devices used (i.e. desktop vs mobile).
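The Markov-chain side of such an ensemble is often summarized through "removal effects": a channel's credit reflects how many conversions would be lost if journeys through it were voided. The sketch below is a simplified path-level approximation of that idea, not the study's implementation nor a full transition-matrix model, and the channel names in the test are invented.

```python
def removal_effects(paths):
    """Path-level approximation of Markov-chain attribution.

    `paths` is a list of (channels, converted) tuples, one per customer
    journey. A channel's removal effect is the share of conversions lost
    when every converting journey touching that channel is voided; the
    effects are then normalized into credit shares.
    """
    total_conv = sum(1 for _, conv in paths if conv)
    channels = {c for chans, _ in paths for c in chans}
    effects = {}
    for ch in channels:
        kept = sum(1 for chans, conv in paths if conv and ch not in chans)
        effects[ch] = (total_conv - kept) / total_conv
    s = sum(effects.values())
    return {ch: e / s for ch, e in effects.items()}
```

An ensemble would combine these credit shares with Shapley-value credits (for example by averaging), which is the kind of pooling the study compares against heuristic last-click or linear rules.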

Findings

Channel importance charts for desktop and mobile devices are analyzed to identify the top contributing online marketing channels. Customer relationship management emailers and Google cost-per-click paid advertising are identified as the top two marketing channels for both desktop and mobile devices. The research reveals that the ensemble model is more accurate than the standalone models, that is, the Markov chain model and the Shapley value.

Originality/value

To the best of the authors' knowledge, the current research is the first of its kind to introduce ensemble modeling for solving attribution problems in online marketing. A comparison with heuristic models across devices (desktop and mobile) offers further insights into the results.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 16 April 2024

Ikhsan A. Fattah

Abstract

Purpose

This research investigates the critical role of data governance (DG) in shaping a data-driven culture (DDC) within organizations, recognizing the transformative potential of data utilization for efficiency, opportunities, and productivity. The study delves into the influence of DG on DDC, emphasizing the mediating effect of data literacy (DL).

Design/methodology/approach

The study empirically assesses 125 experienced managers in Indonesian public service sector organizations using a quantitative approach. Structural Equation Modeling (SEM) analysis was chosen to examine the impact of DG on DDC and the mediating effects of DL on this relationship.

Findings

The findings highlight that both DG and DL serve as antecedents to DDC, with DL identified as a crucial mediator, explaining a significant portion of the effects between DG and DDC.

Research limitations/implications

Beyond unveiling these relationships, the study discusses practical implications for organizational leaders and managers, emphasizing the need for effective policies and strategies in data-driven decision-making.

Originality/value

This research fills an important research gap by introducing an original model and providing empirical evidence on the dynamic interplay between DG, DL, and DDC, contributing to the evolving landscape of data-driven organizational cultures.

Details

Industrial Management & Data Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0263-5577

Open Access
Article
Publication date: 15 December 2023

Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto

Abstract

Purpose

The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.

Design/methodology/approach

The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors, and to assess the retailers' within-group productivity.

Findings

The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity while reducing the subjective bias and high dimensionality that come with the use of Big Data.

Practical implications

The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.

Originality/value

This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.

Details

International Journal of Productivity and Performance Management, vol. 73 no. 11
Type: Research Article
ISSN: 1741-0401
