Search results

1 – 10 of 422
Article
Publication date: 22 August 2023

Umar Saba Dangana and Namnso Bassey Udoekanem

Abstract

Purpose

Rising concern about the accuracy of residential valuations in Nigeria has created the need for key stakeholders in the residential property markets in the study areas to know the level of accuracy of valuations in order to make rational residential property transactions, among other purposes.

Design/methodology/approach

A blend of descriptive and causal designs was adopted for the study. Data were collected via a structured questionnaire administered to 179 estate surveying and valuation (ESV) firms in the study areas using a census sampling technique. Analytical techniques such as the median percentage error (PE), mean and relative importance index (RII) were employed in analysing the data collected for the study.
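
The two summary statistics named above are simple to compute. The sketch below (plain Python, not the authors' code) shows one common formulation of the percentage error and the RII; all figures are hypothetical, and the 5-point Likert scale for the RII is an assumption.

```python
# Illustrative sketch of median percentage error (PE) and relative
# importance index (RII); sample figures are hypothetical.
import statistics

def percentage_errors(valuations, sale_prices):
    """PE per property: (valuation - sale price) / sale price * 100."""
    return [(v - p) / p * 100 for v, p in zip(valuations, sale_prices)]

def relative_importance_index(ratings, max_scale=5):
    """RII = sum of ratings / (max scale * number of respondents), in [0, 1]."""
    return sum(ratings) / (max_scale * len(ratings))

valuations = [10.5e6, 8.2e6, 15.0e6]   # hypothetical valuation figures
sale_prices = [10.0e6, 9.0e6, 14.0e6]  # hypothetical transaction prices
median_pe = statistics.median(percentage_errors(valuations, sale_prices))

ratings = [5, 4, 4, 3, 5]  # hypothetical Likert ratings for one inaccuracy cause
rii = relative_importance_index(ratings)
```

A median PE near zero would indicate valuations tracking transaction prices closely; a high RII flags a cause respondents rank as important.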

Findings

The study found that valuation accuracy is greater in the residential property market in Abuja than in Minna, with inappropriate valuation methodology as the most significant cause of valuation inaccuracy.

Practical implications

The practical implication of this study is that a reliable databank should be established for the property market to provide credible transaction data for valuers to conduct accurate valuations in these cities. Strict enforcement of national and international valuation standards by the regulatory authorities as well as retraining of valuers on appropriate application of valuation approaches and methods are the recommended corrective measures.

Originality/value

No study has comparatively examined the accuracy of valuations in two extremely different residential property markets in the country using actual valuation and transaction prices.

Details

Property Management, vol. 42 no. 2
Type: Research Article
ISSN: 0263-7472

Article
Publication date: 12 April 2024

Tongzheng Pu, Chongxing Huang, Haimo Zhang, Jingjing Yang and Ming Huang

Abstract

Purpose

Forecasting population movement trends is crucial for implementing effective policies to regulate labor force growth and understand demographic changes. Combining migration theory expertise and neural network technology can bring a fresh perspective to international migration forecasting research.

Design/methodology/approach

This study proposes a conditional generative adversarial neural network model incorporating migration knowledge, termed the migration knowledge-conditional generative adversarial network (MK-CGAN). By using migration knowledge to design the parameters, MK-CGAN can effectively address the limited-data problem, thereby enhancing the accuracy of migration forecasts.

Findings

The model was tested by forecasting migration flows between different countries and showed good generalizability and validity. The results are robust, as the proposed solution achieves lower mean absolute error, mean squared error, root mean square error and mean absolute percentage error, and a higher R2 value (reaching 0.9855), than long short-term memory (LSTM), gated recurrent unit (GRU), generative adversarial network (GAN) and traditional gravity models.
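
The error metrics listed above have standard definitions; the sketch below computes them in plain Python (this is generic, not the paper's code, and the values are hypothetical flows, not the paper's data).

```python
# MAE, MSE, RMSE, MAPE and R^2 for a forecast, as named in the abstract.
import math

def metrics(y_true, y_pred):
    n = len(y_true)
    errs = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    mse = sum(e * e for e in errs) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs(e) / abs(yt) for e, yt in zip(errs, y_true)) / n
    mean_y = sum(y_true) / n
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    r2 = 1 - sum(e * e for e in errs) / ss_tot  # coefficient of determination
    return mae, mse, rmse, mape, r2

# hypothetical true vs forecast migration flows
mae, mse, rmse, mape, r2 = metrics([2.0, 4.0, 6.0], [2.0, 5.0, 6.0])
```

Lower MAE/MSE/RMSE/MAPE and an R2 closer to 1 indicate a better fit, which is the comparison the abstract reports.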

Originality/value

This study is significant because it demonstrates a highly effective technique for predicting international migration using conditional GANs. By incorporating migration knowledge into our models, we can improve prediction accuracy and gain valuable insights into the differences between various model characteristics. We used SHapley Additive exPlanations (SHAP) to enhance our understanding of these differences and to provide clear and concise explanations of our model predictions. The results demonstrate the theoretical significance and practical value of the MK-CGAN model in predicting international migration.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 12 April 2024

Ahmad Honarjoo and Ehsan Darvishan

Abstract

Purpose

This study aims to obtain methods to identify and locate damage, a topic that has always been discussed in structural engineering. The cost of repairing and rehabilitating massive bridges and buildings is very high, highlighting the need to monitor structures continuously. One way to track a structure's health is to check for cracks in the concrete. Meanwhile, current methods of concrete crack detection involve complex and computationally heavy calculations.

Design/methodology/approach

This paper presents a new lightweight deep learning architecture for crack classification in concrete structures. The proposed architecture identifies and classifies cracks in less time and with higher accuracy than other established architectures for crack detection. A standard dataset was used for both two-class and multi-class crack detection.

Findings

Results show that two-class images were recognized with 99.53% accuracy using the proposed method, and multi-class images were classified with 91% accuracy. The proposed architecture also had a lower execution time than other established deep learning architectures on the same hardware platform. The Adam optimizer performed better than other optimizers in this research.

Originality/value

This paper presents a framework based on a lightweight convolutional neural network for nondestructive monitoring of structural health to optimize the calculation costs and reduce execution time in processing.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 17 April 2024

Jahanzaib Alvi and Imtiaz Arif

Abstract

Purpose

The crux of this paper is to unveil efficient features and practical tools that can predict credit default.

Design/methodology/approach

Annual data of non-financial listed companies were taken from 2000 to 2020, along with 71 financial ratios. The dataset was bifurcated into three panels with three default assumptions. Logistic regression (LR) and k-nearest neighbor (KNN) binary classification algorithms were used to estimate credit default in this research.
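
As a rough illustration of the KNN side of this setup (plain Python, not the authors' code), a minimal nearest-neighbour vote over hypothetical financial-ratio vectors looks like:

```python
# Minimal k-nearest-neighbour binary classifier: label a firm as
# default (1) / no default (0) by majority vote of the k closest
# training points in financial-ratio space. All data are hypothetical.
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # sort training points by Euclidean distance to the query point
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    # majority vote among the k nearest labels
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

train_X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]  # hypothetical ratio pairs
train_y = [0, 0, 1, 1]                                       # 0 = no default, 1 = default
pred = knn_predict(train_X, train_y, (0.85, 0.85))
```

In practice the study's 71 ratios would form much higher-dimensional vectors, and a library implementation with scaled features would replace this toy version.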

Findings

The study’s findings revealed that the features used in Model 3 (Case 3) were comparatively the most efficient. Results also showed that KNN achieved higher accuracy than LR, indicating the superiority of KNN over LR in this setting.

Research limitations/implications

Using only two classifiers limits this research as a comprehensive comparison of results; the research was also based only on financial data, which leaves sizeable room for including non-financial parameters in default estimation. Both limitations suggest directions for future research in this domain.

Originality/value

This study introduces efficient features and tools for credit default prediction using financial data, demonstrating KNN’s superior accuracy over LR and suggesting future research directions.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Abstract

Purpose

The stock market has always been lucrative for investors but, because of its speculative nature, it is difficult to predict price movements. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study the structured data of a company; technical analysis helps to study price trends. The increasing and easy availability of unstructured data has made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.

Design/methodology/approach

The research involves text mining and then building various classification models. The accuracy of these models is checked using a confusion matrix.
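
Accuracy from a confusion matrix is the trace over the total count. A minimal sketch (hypothetical counts, not the study's data) for the three sentiment classes mentioned in the abstract:

```python
# Overall accuracy from a confusion matrix, where confusion[i][j] is the
# number of items of true class i predicted as class j.
def accuracy(confusion):
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# rows/cols: positive, negative, neutral (hypothetical tweet counts)
cm = [[50, 5, 5],
      [4, 40, 6],
      [6, 4, 30]]
acc = accuracy(cm)
```

Per-class precision and recall come from the same matrix (column and row sums), which is why a confusion matrix is the usual check for classifiers like these.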

Findings

Of the six machine learning techniques used to create the classification models, the kernel support vector machine gave the highest accuracy, at 68%. This model can now be used to analyse tweets, news and various other unstructured data to predict price movements.

Originality/value

This study will help investors classify a news item or a tweet as “positive”, “negative” or “neutral” quickly and determine stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

Open Access
Article
Publication date: 12 January 2024

Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel

Abstract

Purpose

This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.

Design/methodology/approach

A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.

Findings

The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.

Practical implications

The findings provide guidelines for exploring and finding patterns in specific variables to reduce material delivery schedule inaccuracies and to provide input into predictive forecasting models.

Originality/value

The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects and conceptualizing features that cause and moderate inaccuracies in relation to the decoupling management and complexity theory literature.

Details

International Journal of Operations & Production Management, vol. 44 no. 13
Type: Research Article
ISSN: 0144-3577

Article
Publication date: 14 December 2023

Huaxiang Song, Chai Wei and Zhou Yong

Abstract

Purpose

The paper aims to tackle the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of clustered ground objects and noisy backgrounds. Recent research typically leverages larger-volume models to achieve advanced performance. However, remote sensing operating environments commonly cannot provide unconstrained computational and storage resources, which calls for lightweight algorithms with exceptional generalization capabilities.

Design/methodology/approach

This study introduces an efficient knowledge distillation (KD) method to build a lightweight yet precise convolutional neural network (CNN) classifier. This method also aims to substantially decrease the training time expenses commonly linked with traditional KD techniques. This approach entails extensive alterations to both the model training framework and the distillation process, each tailored to the unique characteristics of RSIs. In particular, this study establishes a robust ensemble teacher by independently training two CNN models using a customized, efficient training algorithm. Following this, this study modifies a KD loss function to mitigate the suppression of non-target category predictions, which are essential for capturing the inter- and intra-similarity of RSIs.
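
The distillation step above builds on the standard logit-based KD loss: a KL divergence between temperature-softened teacher and student distributions. The sketch below shows that generic loss in plain Python (hypothetical logits; this is the textbook form, not the paper's modified variant, which adjusts how non-target predictions are weighted).

```python
# Generic logit-based knowledge-distillation loss: KL(teacher || student)
# on temperature-softened softmax outputs, scaled by T^2.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# hypothetical 3-class logits from an ensemble teacher and a student
loss = kd_loss([8.0, 2.0, 1.0], [6.0, 3.0, 2.0])
```

A higher temperature flattens the distributions so the non-target probabilities (the "dark knowledge" carrying inter-class similarity) contribute more to the gradient, which is the mechanism the paper's modification targets.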

Findings

This study validated the student model, termed KD-enhanced network (KDE-Net), obtained through the KD process on three benchmark RSI data sets. The KDE-Net surpasses 42 other state-of-the-art methods in the literature published from 2020 to 2023. Compared to the top-ranked method’s performance on the challenging NWPU45 data set, KDE-Net demonstrated a noticeable 0.4% increase in overall accuracy with a significant 88% reduction in parameters. Meanwhile, this study’s reformed KD framework significantly enhances the knowledge transfer speed by at least three times.

Originality/value

This study illustrates that the logit-based KD technique can effectively develop lightweight CNN classifiers for RSI classification without substantial sacrifices in computation and storage costs. Compared to neural architecture search or other methods aiming to provide lightweight solutions, this study’s KDE-Net, based on the inherent characteristics of RSIs, is currently more efficient in constructing accurate yet lightweight classifiers for RSI classification.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 19 April 2024

Jitendra Gaur, Kumkum Bharti and Rahul Bajaj

Abstract

Purpose

Allocation of the marketing budget has become increasingly challenging due to customers' diverse channel exposure. This study aims to enhance global marketing knowledge by introducing an ensemble attribution model to optimize marketing budget allocation for online marketing channels. This empirical study demonstrates the superiority of the ensemble model over standalone models.

Design/methodology/approach

The transactional data set for car insurance from an Indian insurance aggregator is used in this empirical study. The data set contains information from more than three million platform visitors. A robust ensemble model is created by combining results from two probabilistic models, namely, the Markov chain model and the Shapley value. These results are compared and validated with heuristic models. Also, the performances of online marketing channels and attribution models are evaluated based on the devices used (i.e. desktop vs mobile).
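
Of the two probabilistic models named above, the Shapley value is the easier to sketch exactly for a small channel set: each channel's credit is its average marginal contribution over all orderings. The code below (plain Python, not the authors' model) uses a hypothetical coalition-to-conversions mapping for two channels.

```python
# Exact Shapley-value attribution over a tiny hypothetical channel set.
# v maps each coalition of channels to the conversions it wins together.
import math
from itertools import combinations

def shapley(channels, v):
    n = len(channels)
    phi = {}
    for c in channels:
        others = [x for x in channels if x != c]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # weight = |S|! * (n - |S| - 1)! / n!
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                total += w * (v[s | {c}] - v[s])  # marginal contribution of c
        phi[c] = total
    return phi

# hypothetical conversion counts per coalition (emailer vs CPC ads)
v = {frozenset(): 0, frozenset({"email"}): 30,
     frozenset({"cpc"}): 50, frozenset({"email", "cpc"}): 100}
phi = shapley(["email", "cpc"], v)
```

The attributions sum to the grand-coalition conversions (efficiency), which is what makes Shapley credit usable for budget allocation; exact enumeration is exponential in the channel count, so real deployments approximate it.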

Findings

Channel importance charts for desktop and mobile devices are analyzed to understand the top contributing online marketing channels. Customer relationship management emailers and Google cost-per-click paid advertising are identified as the top two marketing channels for desktop and mobile devices. The research reveals that the ensemble model's accuracy is better than that of the standalone models, that is, the Markov chain model and the Shapley value.

Originality/value

To the best of the authors’ knowledge, the current research is the first of its kind to introduce ensemble modeling for solving attribution problems in online marketing. A comparison with heuristic models using different devices (desktop and mobile) offers insights into the results with heuristic models.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 16 April 2024

Jinwei Zhao, Shuolei Feng, Xiaodong Cao and Haopei Zheng

Abstract

Purpose

This paper aims to concentrate on recent innovations in flexible wearable sensor technology tailored for monitoring vital signals within the contexts of wearable sensors and systems developed specifically for monitoring health and fitness metrics.

Design/methodology/approach

In recent decades, wearable sensors for monitoring vital signals in sports and health have advanced greatly. Vital signals include the electrocardiogram, electroencephalogram, electromyogram, inertial data, body motions, cardiac rate and bodily fluids such as blood and sweat, making them a good target for sensing devices.

Findings

This report reviewed reputable journal articles on wearable sensors for vital signal monitoring, focusing on multimode and integrated multi-dimensional capabilities, such as the structure, accuracy and nature of the devices, which may offer a more versatile and comprehensive solution.

Originality/value

The paper provides essential information on the present obstacles and challenges in this domain and offers a glimpse into the future directions of wearable sensors for the detection of these crucial signals. Importantly, it is evident that the integration of modern fabrication techniques, stretchable electronic devices, the Internet of Things and the application of artificial intelligence algorithms has significantly improved the capacity to efficiently monitor and leverage these signals for human health monitoring, including disease prediction.

Details

Sensor Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 April 2024

Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang

Abstract

Purpose

The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, traditional monitoring with Electronic Total Station tools cannot realize real-time monitoring, and monitoring using motion sensors or GPS is cumbersome.

Design/methodology/approach

This study proposes a monitoring method based on a series of computer vision (CV) technologies, which can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, three proposed CV algorithms were developed in a laboratory environment. Experimental tests were carried out on a bridge scale model to select the best-performing algorithms for rotation, velocity and inclination monitoring, respectively, as the final monitoring method. Then, the selected method was implemented to monitor an actual bridge during its swivel construction to verify its applicability.
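
One basic ingredient of such rotation monitoring can be sketched simply: given a tracked marker position before and after a time step (e.g. from CV keypoints), the swept angle around the rotation centre follows from atan2. The sketch below is a generic illustration under that assumption, not the paper's algorithm, and the coordinates are hypothetical.

```python
# Angle (in degrees) swept by a tracked point around a known rotation centre,
# computed from two observed positions; dividing by the frame interval
# would give the rotation velocity.
import math

def rotation_angle_deg(p_before, p_after, center):
    a0 = math.atan2(p_before[1] - center[1], p_before[0] - center[0])
    a1 = math.atan2(p_after[1] - center[1], p_after[0] - center[0])
    return math.degrees(a1 - a0)

# hypothetical marker positions in a plane view, centre at the origin
angle = rotation_angle_deg((1.0, 0.0), (0.0, 1.0), (0.0, 0.0))
```

A real implementation would also handle angle wrap-around at ±180° and estimate the marker positions and centre from calibrated imagery.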

Findings

In the laboratory study, the monitoring data measured with the selected algorithms were compared with those measured by an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, validating the accuracy of the proposed method. In the pilot application, the method was shown to be feasible on an actual bridge construction.

Originality/value

In a well-controlled laboratory, the optimal algorithms for bridge swivel construction are identified, and in an actual project the proposed method is verified. The proposed CV method is complementary to the use of Electronic Total Station tools, motion sensors and GPS for safety monitoring of the swivel construction of bridges. It also offers a possible approach that requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988
