Search results

1 – 10 of 275
Article
Publication date: 22 April 2022

Sreedhar Jyothi and Geetanjali Nelloru

Patients with ventricular arrhythmias and atrial fibrillation, which are early markers of stroke and sudden cardiac death, as well as benign subjects, are studied using the…

Abstract

Purpose

Patients with ventricular arrhythmias and atrial fibrillation, which are early markers of stroke and sudden cardiac death, as well as benign subjects, are studied using the electrocardiogram (ECG). The ECG records the heart's electrical activity and presents it as waveforms, which are analysed to identify cardiac anomalies. Patients with these disorders must be identified as early as possible. Manual inspection of ECG signals can be difficult, time-consuming and subject to inter-observer variability.

Design/methodology/approach

Various forms of arrhythmia are difficult to distinguish in complex, non-linear ECG data, so computer-aided decision support systems (CAD) may be beneficial. CAD systems, which use machine learning algorithms to identify subtle changes in cardiac rhythms, can classify arrhythmias rapidly, accurately, repeatably and objectively, and can also classify and detect cardiac infarctions. The primary objective is to categorize arrhythmias more accurately in less computational time. Using signal and axis characteristics and their association n-grams as features, this paper makes a significant addition to the field. An experimental investigation was conducted using a benchmark dataset as input to multi-label multi-fold cross-validation.
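The abstract does not specify how the association n-grams are built; a minimal sketch of sliding-window n-gram counting over a sequence of per-beat symbols (the symbol alphabet here is hypothetical, not from the paper) might look like:

```python
from collections import Counter

def ngram_counts(symbols, n):
    """Count sliding-window n-grams over a sequence of symbols."""
    return Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))

# Hypothetical per-beat labels: N = normal beat, V = ventricular beat
beats = ["N", "N", "V", "N", "V", "V"]
bigrams = ngram_counts(beats, 2)
```

Counts such as these can then be flattened into feature vectors for a classifier.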

Findings

This dataset was used as input for cross-validation of contemporary models, and the resulting cross-validation metrics were weighed against the performance metrics of other contemporary models. The proposed model's high sensitivity and specificity yield few false alarms.

Originality/value

The cross-validation results are significant. In terms of specificity, sensitivity and decision accuracy, the proposed model outperforms other contemporary models.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 3 September 2024

Biplab Bhattacharjee, Kavya Unni and Maheshwar Pratap

Product returns are a major challenge for e-businesses as they involve huge logistical and operational costs. Therefore, it becomes crucial to predict returns in advance. This…

Abstract

Purpose

Product returns are a major challenge for e-businesses as they involve huge logistical and operational costs. Therefore, it becomes crucial to predict returns in advance. This study aims to evaluate different types of classifiers for product-return prediction and further optimize the best-performing model.

Design/methodology/approach

An e-commerce data set with categorical attributes is used for this study. Chi-square-based feature selection provides a reduced feature set that serves as input for model building. Predictive models are built using individual classifiers, ensemble models and deep neural networks. For performance evaluation, a 75:25 train/test split and 10-fold cross-validation are used. To improve the predictability of the best-performing classifier, hyperparameters are tuned using different optimization methods: random search, grid search, the Bayesian approach and evolutionary models (genetic algorithm, differential evolution and particle swarm optimization).
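Chi-square feature selection scores each categorical feature against the class label and keeps the highest-scoring ones. A self-contained sketch of the statistic itself (the data below are illustrative, not the paper's):

```python
def chi_square(feature, label):
    """Chi-square statistic between a categorical feature and a class label,
    computed from the observed vs expected counts of a contingency table."""
    cats, classes, n = sorted(set(feature)), sorted(set(label)), len(feature)
    obs = {(c, k): 0 for c in cats for k in classes}
    for f, y in zip(feature, label):
        obs[(f, y)] += 1
    row = {c: sum(obs[(c, k)] for k in classes) for c in cats}
    col = {k: sum(obs[(c, k)] for c in cats) for k in classes}
    stat = 0.0
    for c in cats:
        for k in classes:
            expected = row[c] * col[k] / n
            stat += (obs[(c, k)] - expected) ** 2 / expected
    return stat
```

A feature perfectly aligned with the label gets a high score; an independent one scores near zero, so ranking by this statistic yields the selective feature set.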

Findings

A comparison of F1-scores revealed that the Bayesian approach outperformed all other optimization approaches. The predictability of the Bayesian-optimized model was further compared with that of other classifiers through experimental analysis. The Bayesian-optimized XGBoost model performed best, with accuracies of 77.80% and 70.35% for the holdout and 10-fold cross-validation methods, respectively.

Research limitations/implications

Given the anonymized data, the effects of individual attributes on outcomes could not be investigated in detail. The Bayesian-optimized predictive model may be used in decision support systems, enabling real-time prediction of returns and the implementation of preventive measures.

Originality/value

There are very few reported studies on predicting the chance of order return in e-businesses. To the best of the authors’ knowledge, this study is the first to compare different optimization methods and classifiers, demonstrating the superiority of the Bayesian-optimized XGBoost classification model for returns prediction.

Details

Journal of Systems and Information Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1328-7265

Article
Publication date: 20 May 2024

Qifeng Wang, Bofan Lin and Consilz Tan

The purpose of this paper is to develop an index for measuring urban house price affordability that integrates spatial considerations and to explore the drivers of housing…

Abstract

Purpose

The purpose of this paper is to develop an index for measuring urban house price affordability that integrates spatial considerations and to explore the drivers of housing affordability using the post-least absolute shrinkage and selection operator (LASSO) approach and the ordinary least squares method of regression analysis.

Design/methodology/approach

The study is based on time-series data collected from 2005 to 2021 for 256 prefectural-level city districts in China. The new urban spatial house price-to-income ratio introduced in this study accounts for commuting costs arising from spatial endowment, unlike the traditional ratio. Compared with ordinary economic modelling methods, this study adopts the post-LASSO variable selection approach combined with k-fold cross-validation to identify the most important drivers of housing affordability, better addressing multicollinearity and overfitting.
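The abstract does not give the exact formula for the spatial index. One plausible form, shown purely as an illustration with a hypothetical capitalization horizon, adds commuting costs to the house price before dividing by income:

```python
def spatial_price_to_income(price, annual_income, annual_commute_cost, years=30):
    """Hypothetical spatial house price-to-income ratio: capitalize the
    annual commuting cost over an assumed horizon (30 years here) and add
    it to the house price before dividing by annual household income."""
    return (price + annual_commute_cost * years) / annual_income

# A district with cheap housing but costly commutes can thus look less
# affordable than the traditional price-to-income ratio would suggest.
ratio = spatial_price_to_income(900_000, 100_000, 10_000)
```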

Findings

The urban macroeconomic environment and government regulations influence housing affordability to varying degrees across cities. Among these factors, gross domestic product is the most important.

Research limitations/implications

The paper provides important implications for policymakers, real estate professionals and researchers. For example, policymakers will be able to design policies that target the most influential factors of housing affordability in their region.

Originality/value

This study introduces a new urban spatial house price-to-income ratio, and it examines how macroeconomic indicators, government regulation, real estate market supply and urban infrastructure level have a significant impact on housing affordability. The problem of having too many variables in the decision-making process is minimized through the post-LASSO methodology, which varies the parameters of the model to allow for the ranking of the importance of the variables. As a result, this approach allows policymakers and stakeholders in the real estate market more flexibility in determining policy interventions. In addition, through the k-fold cross-validation methodology, the study ensures a high degree of accuracy and credibility when using drivers to predict housing affordability.

Details

International Journal of Housing Markets and Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1753-8270

Article
Publication date: 3 September 2024

Fatemeh Ehsani and Monireh Hosseini

As internet banking service marketing platforms continue to advance, customers exhibit distinct behaviors. Given the extensive array of options and minimal barriers to switching…

Abstract

Purpose

As internet banking service marketing platforms continue to advance, customers exhibit distinct behaviors. Given the extensive array of options and minimal barriers to switching to competitors, customer churn behavior has emerged as a subject of considerable debate. This study aims to delineate the scope of feature optimization methods for elucidating customer churn behavior within the context of internet banking service marketing. To achieve this goal, the authors aim to predict the attrition and migration of customers who use internet banking services using tree-based classifiers.

Design/methodology/approach

The authors used various feature optimization methods in tree-based classifiers to predict customer churn behavior using transaction data from customers who use internet banking services. First, they conducted feature reduction to eliminate ineffective features and project the data set onto a lower-dimensional space. Next, they used recursive feature elimination with cross-validation (RFECV) to extract the most practical features. They then applied feature importance to assign a score to each input feature. Finally, they selected C5.0 Decision Tree, Random Forest, XGBoost, AdaBoost, CatBoost and LightGBM as the six tree-based classifier structures.
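RFECV's core idea is backward elimination: repeatedly drop the least important feature. A stripped-down sketch using absolute correlation as a stand-in importance score (the real method scores features with a fitted classifier and picks the final feature count by cross-validation):

```python
def importance(col, label):
    """Absolute Pearson correlation as a stand-in importance score."""
    n = len(col)
    mx, my = sum(col) / n, sum(label) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(col, label))
    vx = sum((x - mx) ** 2 for x in col)
    vy = sum((y - my) ** 2 for y in label)
    return abs(cov / (vx * vy) ** 0.5) if vx and vy else 0.0

def recursive_elimination(columns, label, keep):
    """Drop the least important feature one at a time until `keep` remain."""
    cols = dict(columns)
    while len(cols) > keep:
        worst = min(cols, key=lambda name: importance(cols[name], label))
        del cols[worst]
    return sorted(cols)
```

On toy data with one informative and one uninformative column, only the informative one survives.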

Findings

This study established that transaction data is a reliable resource for elucidating customer churn behavior within the context of internet banking service marketing. Experimental findings highlight the operational benefits and enhanced customer retention afforded by implementing feature optimization and leveraging a variety of tree-based classifiers. The results indicate the significance of feature reduction, feature selection and feature importance as the three feature optimization methods in comprehending customer churn prediction. This study demonstrated that feature optimization can improve this prediction by increasing the accuracy and precision of tree-based classifiers and decreasing their error rates.

Originality/value

This research aims to enhance the understanding of customer behavior on internet banking service platforms by predicting churn intentions. This study demonstrates how feature optimization methods influence customer churn prediction performance. The approach included feature reduction, feature selection and assessing feature importance to optimize transaction data analysis. Additionally, the authors performed feature optimization within tree-based classifiers to improve performance. The novelty of this approach lies in combining feature optimization methods with tree-based classifiers to effectively capture and articulate customer churn behavior in internet banking service marketing.

Details

Journal of Services Marketing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0887-6045

Article
Publication date: 25 June 2024

Junseo Bae

The main objectives of this study are to (1) develop and test a cost contingency learning model that can generalize initially estimated contingency amounts by analyzing back the…

Abstract

Purpose

The main objectives of this study are to (1) develop and test a cost contingency learning model that can generalize initially estimated contingency amounts by back-analyzing the multiple project changes experienced and (2) uncover the hidden link of the learning networks using a curve-fitting technique for the post-construction evaluation of cost contingency amounts to cover cost risk for future projects.

Design/methodology/approach

Based on a total of 1,434 datapoints collected from design-bid-build (DBB) and design-build (DB) transportation projects, a post-construction cost contingency learning model was developed using feedforward neural networks (FNNs). The developed model generalizes cost contingencies under the two project delivery methods. The learning outputs of generalized contingency amounts were curve-fitted with the post-construction schedule and cost information, specifically aiming at uncovering the hidden link of the FNNs. Two bridge projects completed under DBB and DB were employed as illustrative examples to demonstrate how the proposed modeling framework could be implemented.

Findings

With zero or negative values of experienced change growth, it was concluded that cost contingencies had been overallocated at the contract stage. With positive values of experienced change growth, the set cost contingencies were judged insufficient from a post-construction standpoint. Taken together, this study proposes a tangible post-construction evaluation technique that can produce not only plausible ranges of cost contingencies but also exact contingency amounts under DBB and DB contracts.
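The evaluation rule described above can be read as a simple sign test on experienced change growth; a minimal sketch (this thresholding is an interpretation of the finding, not the paper's full neural-network model):

```python
def evaluate_contingency(change_growth):
    """Post-construction reading of experienced change growth:
    zero or negative growth suggests the contract-stage contingency
    was overallocated; positive growth suggests it was insufficient."""
    return "overallocated" if change_growth <= 0 else "insufficient"
```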

Originality/value

As the first of its kind, the proposed modeling framework provides agency engineers and decision-makers with tangible assessments of cost contingency coupled with experienced risks at the post-construction stage. Use of the proposed model will help them evaluate the allocation of appropriate contingency amounts. If an agency allocates a cost contingency benchmarked from similar projects on aspects of the base estimate and experienced risks, a set contingency can be defended more reliably. The main findings of this study contribute to post-construction cost contingency verification, enabling agency engineers and decision-makers to systematically evaluate set cost contingencies during the post-construction assessment stage and achieving greater confidence in future cost contingency plans.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 6 June 2024

Bingzi Jin and Xiaojie Xu

The purpose of this study is to make property price forecasts for the Chinese housing market that has grown rapidly in the last 10 years, which is an important concern for both…

Abstract

Purpose

The purpose of this study is to forecast property prices for the Chinese housing market, which has grown rapidly over the last 10 years and is an important concern for both government and investors.

Design/methodology/approach

This study examines Gaussian process regressions with different kernels and basis functions for estimating monthly pre-owned housing price indices for ten major Chinese cities from March 2012 to May 2020. The models are tuned using Bayesian optimization and cross-validation.
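A Gaussian process regression predicts by weighting training targets through a kernel. A minimal 1D posterior-mean sketch with an RBF kernel (the paper additionally tunes kernels and basis functions via Bayesian optimization, which is omitted here):

```python
import math

def rbf(a, b, length=1.0, var=1.0):
    """Radial basis function (squared-exponential) kernel."""
    return var * math.exp(-(a - b) ** 2 / (2 * length ** 2))

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting: solve A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_mean(xs, ys, x_star, noise=1e-6):
    """GP posterior mean at x_star: k_*ᵀ (K + σ²I)⁻¹ y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi) * a for xi, a in zip(xs, alpha))
```

With near-zero noise the posterior mean interpolates the training points, which is the behavior the accuracy figures in the findings rely on.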

Findings

The ten price indices from June 2019 to May 2020 are accurately predicted out-of-sample by the established models, which have relative root mean square errors ranging from 0.0458% to 0.3035% and correlation coefficients ranging from 93.9160% to 99.9653%.

Originality/value

The results might be applied separately or in conjunction with other forecasts to develop hypotheses regarding the patterns in the pre-owned residential real estate price index and conduct further policy research.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664

Open Access
Article
Publication date: 9 May 2022

Khalid Iqbal and Muhammad Shehrayar Khan

In this digital era, email is the most pervasive form of communication between people. Many users fall victim to spam emails and have their data exposed.

Abstract

Purpose

In this digital era, email is the most pervasive form of communication between people. Many users fall victim to spam emails and have their data exposed.

Design/methodology/approach

Researchers have addressed this problem with advanced machine learning algorithms and improved detection models, but a gap remains in feature engineering; well-chosen features are essential for good results. To evaluate the performance of the applied classifiers, 10-fold cross-validation is used.
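10-fold cross-validation partitions the data into ten folds, trains on nine and tests on the held-out fold, rotating through all ten so every example is tested exactly once. A stdlib index-splitter sketch (strided rather than contiguous folds):

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Averaging a classifier's score over the k test folds gives the cross-validated estimate reported in studies like this one.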

Findings

The results confirm that spam emails are correctly classified, with accuracies of 98.00% for the support vector machine and 98.06% for the artificial neural network, outperforming the other applied machine learning classifiers.

Originality/value

In this paper, point-biserial correlation is applied to each feature with respect to the class label of the University of California Irvine (UCI) Spambase email dataset to select the best features. Extensive experiments are conducted on the selected features by training the different classifiers.
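Point-biserial correlation is the Pearson correlation between a dichotomous (0/1) variable and a continuous one, so each continuous feature can be scored against the binary spam label. A minimal sketch (the data below are illustrative, not Spambase):

```python
def point_biserial(binary, values):
    """Point-biserial correlation: Pearson correlation between a 0/1
    class label and a continuous feature."""
    n = len(values)
    mb, mv = sum(binary) / n, sum(values) / n
    cov = sum((b - mb) * (v - mv) for b, v in zip(binary, values))
    sb = sum((b - mb) ** 2 for b in binary) ** 0.5
    sv = sum((v - mv) ** 2 for v in values) ** 0.5
    return cov / (sb * sv)
```

Features with correlation near zero carry little class information and can be dropped before training.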

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 April 2021

Arunit Maity, P. Prakasam and Sarthak Bhargava

Due to the continuous and rapid evolution of telecommunication equipment, the demand for more efficient and noise-robust detection of dual-tone multi-frequency (DTMF) signals is…

Abstract

Purpose

Due to the continuous and rapid evolution of telecommunication equipment, the demand for more efficient and noise-robust detection of dual-tone multi-frequency (DTMF) signals has become highly significant.

Design/methodology/approach

A novel machine learning-based approach is proposed to detect DTMF tones affected by noise, frequency and time variations, employing the k-nearest neighbour (KNN) algorithm. The features required for training the proposed KNN classifier are extracted using Goertzel's algorithm, which estimates the absolute discrete Fourier transform (DFT) coefficient values for the fundamental DTMF frequencies, with or without their second harmonic frequencies. The proposed KNN classifier is configured in four different ways, which differ in being trained with or without augmented data, as well as with or without the inclusion of second harmonic frequency DFT coefficient values as features.
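Goertzel's algorithm evaluates a single DFT bin with a two-term recurrence, which is why it suits probing only the eight DTMF frequencies (and optionally their second harmonics) instead of computing a full FFT. A sketch (the frame length and sample rate are typical DTMF values, not necessarily the paper's):

```python
import math

def goertzel_magnitude(samples, sample_rate, target_freq):
    """Absolute DFT coefficient at the bin nearest target_freq,
    computed with the Goertzel recurrence."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(abs(power))

# Probe a synthetic 770 Hz tone: the 770 Hz bin dominates the 941 Hz bin.
fs, n = 8000, 205
tone = [math.sin(2 * math.pi * 770 * t / fs) for t in range(n)]
m770 = goertzel_magnitude(tone, fs, 770)
m941 = goertzel_magnitude(tone, fs, 941)
```

The vector of such magnitudes across the DTMF frequencies forms the feature vector fed to the KNN classifier.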

Findings

The model trained on the augmented data set, which additionally includes the absolute DFT values of the second harmonic frequencies of the eight fundamental DTMF frequencies as features, achieved the best performance: a macro classification F1 score of 0.980835, a five-fold stratified cross-validation accuracy of 98.47% and a test data set detection accuracy of 98.1053%.

Originality/value

The generated DTMF signal has been classified and detected using the proposed KNN classifier which utilizes the DFT coefficient along with second harmonic frequencies for better classification. Additionally, the proposed KNN classifier has been compared with existing models to ascertain its superiority and proclaim its state-of-the-art performance.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 23 August 2024

Libiao Bai, Shiyi Liu, Yuqin An and Qi Xie

Project portfolio benefit (PPB) evaluation is crucial for project portfolio management decisions. However, PPB is complex in composition and affected by synergy and ambidexterity…

Abstract

Purpose

Project portfolio benefit (PPB) evaluation is crucial for project portfolio management decisions. However, PPB is complex in composition and affected by synergy and ambidexterity. Ignoring these characteristics can result in inaccurate assessments, impeding the management and optimization of benefits. Given this complexity, this study proposes a refined PPB evaluation model to provide decision support for organizations.

Design/methodology/approach

A back-propagation neural network optimized via a genetic algorithm and a pruning algorithm (P-GA-BPNN) is constructed for PPB evaluation. First, the benefit evaluation criteria are established. Second, the inputs and expected outputs for model training and testing are determined. Then, based on the optimization of the BPNN via the genetic and pruning algorithms, a PPB evaluation model is constructed considering the impacts of ambidexterity and synergy on PPB. Finally, a numerical example is applied to validate the model.
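The abstract does not detail the pruning algorithm; a common variant is magnitude pruning, which removes connections with near-zero weights to cut structural redundancy. A sketch over a plain weight matrix (the threshold and values are illustrative):

```python
def prune_weights(weights, threshold):
    """Magnitude pruning: zero out connections whose absolute weight
    falls below the threshold, shrinking the effective network structure."""
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in weights]

pruned = prune_weights([[0.5, -0.01], [0.002, 0.9]], threshold=0.05)
```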

Findings

The results indicate that the proposed model can be used for effective PPB evaluation. Moreover, it shows superiority in terms of MSE and fitting effect in extensive comparative experiments with BPNN, GA-BPNN and SVM models. The robustness of the model is also demonstrated via a random data-disturbance experiment and 10-fold cross-validation. The proposed model could therefore serve as a valuable decision-making tool for PPB management.

Originality/value

This study extends prior research by integrating the impacts of synergy and ambidexterity on PPB into PPB evaluation, which facilitates managing and enhancing PPB. Besides, the structural redundancy of existing assessment methods is addressed through dynamic optimization of the network structure via the pruning algorithm, enhancing the effectiveness of PPB decision-making tools.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 10 July 2024

Wiput Tuvayanond, Viroon Kamchoom and Lapyote Prasittisopin

This paper aims to clarify the efficient process of the machine learning algorithms implemented in the ready-mix concrete (RMC) onsite. It proposes innovative machine learning…

Abstract

Purpose

This paper aims to clarify an efficient process for machine learning algorithms implemented onsite for ready-mix concrete (RMC). It assesses innovative machine learning algorithms in terms of precision and computation time for RMC strength prediction.

Design/methodology/approach

This paper investigates five machine learning algorithms, namely multilinear regression, support vector regression, k-nearest neighbors, extreme gradient boosting (XGBOOST) and a deep neural network (DNN), for predicting the 28- and 56-day compressive strengths of nine mix designs and four mixing conditions. Two algorithms were designated for fitting the actual and predicted 28- and 56-day compressive strength data. Moreover, the 28-day compressive strength data were used to predict 56-day compressive strength.
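Of the five algorithms, k-nearest neighbors is the simplest to sketch: predict a mix's strength as the average strength of the k most similar training mixes. A minimal version (feature vectors and strength values below are illustrative, not the paper's data):

```python
def knn_predict(train_x, train_y, query, k=3):
    """k-nearest-neighbours regression: average the targets of the k
    training points closest (squared Euclidean distance) to the query."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical mixes described by one feature (e.g. water/cement ratio index)
# with 28-day strengths as targets.
pred = knn_predict([(0,), (1,), (2,), (10,)], [10, 12, 14, 40], (1,), k=3)
```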

Findings

The DNN and XGBOOST algorithms predicted compressive strength most effectively. The XGBOOST algorithm computed noticeably faster than the DNN, making it the most suitable strength-prediction tool for RMC.

Research limitations/implications

Since machine learning has not yet been practically adopted for RMC strength prediction, the scope of this work focuses on commercially available algorithms. The adaptation of modified methods to fit RMC data should be determined thereafter.

Practical implications

The selected algorithms offer efficient prediction, promoting sustainability in the RMC industry. A standard adopting such algorithms can be established, replacing traditional labor-intensive testing. Manufacturers can conduct research to introduce machine learning into the quality control processes of their plants.

Originality/value

In the literature, machine learning has been assessed for laboratory concrete mix design and concrete performance. A study based on on-site production and prolonged mixing parameters is lacking.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175
