Search results
1 – 10 of 622
Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu
Abstract
Purpose
This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.
Design/methodology/approach
The method involves an alternating two-step framework, comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences including both global and local features, are coupled with deterministic annealing to address the non-convexity problem of registration. For transformation updating, the expectation-maximization iteration scheme is introduced to iteratively refine correspondence and transformation estimation until convergence.
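The alternating estimate-correspondence/update-transformation loop can be sketched in a toy form. This is an illustrative simplification, not the authors' algorithm: it uses plain Gaussian posteriors (rather than the integrated feature divergence) and restricts the transformation to a translation, keeping only the EM alternation and the deterministic-annealing schedule.

```python
import numpy as np

def gmm_register_translation(X, Y, n_iter=60, T0=1.0, anneal=0.95):
    """Toy EM registration: recover the translation aligning Y onto X.
    Correspondences are Gaussian posteriors whose variance (temperature)
    is annealed each iteration, a simplified stand-in for the paper's
    integrated-feature divergence."""
    t = np.zeros(X.shape[1])
    T = T0
    for _ in range(n_iter):
        Yt = Y + t
        # E-step: posterior P[i, j] that X[i] corresponds to Yt[j]
        d2 = ((X[:, None, :] - Yt[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2.0 * T))
        P /= P.sum(axis=1, keepdims=True) + 1e-12
        # M-step: closed-form translation minimizing the expected distance
        t = (P[:, :, None] * (X[:, None, :] - Y[None, :, :])).sum((0, 1)) / P.sum()
        T *= anneal  # deterministic annealing: sharpen correspondences
    return t
```

Lowering the temperature gradually is what counters the non-convexity mentioned above: early, smeared posteriors avoid hard (and possibly wrong) correspondences; late, sharp posteriors refine the final alignment.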
Findings
The experiments confirm that the proposed registration approach exhibits remarkable robustness on deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. Application of stabilizing and securing intermodal containers loaded on ships is performed. The results demonstrate that the proposed registration framework exhibits excellent adaptability for real-scan point clouds, and achieves comparatively superior alignments in a shorter time.
Originality/value
The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.
Sudhaman Parthasarathy and S.T. Padmapriya
Abstract
Purpose
Algorithm bias refers to repetitive computer program errors that give some users more weight than others. The aim of this article is to provide a deeper insight into algorithm bias in AI-enabled ERP software customization. Although algorithmic bias in machine learning models has uneven, unfair and unjust impacts, research on it is mostly anecdotal and scattered.
Design/methodology/approach
As guided by previous research (Akter et al., 2022), this study presents the possible design biases (model, data and method) one may experience with an enterprise resource planning (ERP) software customization algorithm. This study then presents an artificial intelligence (AI) version of the ERP customization algorithm using the k-nearest neighbours algorithm.
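The k-nearest-neighbours step can be illustrated in miniature. The feature encoding and labels below are hypothetical placeholders, not the paper's PRCE inputs: each training row stands for a past customization requirement described by two made-up numeric features.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

A hypothetical usage: rows encode (estimated effort, business criticality) of a requirement, labelled by the past customization decision, and a new requirement inherits the majority decision of its neighbours.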
Findings
This study illustrates the possible bias when the prioritized requirements customization estimation (PRCE) algorithm available in the ERP literature is executed without any AI. Then, the authors present their newly developed AI version of the PRCE algorithm that uses ML techniques. The authors then discuss its adjoining algorithmic bias with an illustration. Further, the authors also draw a roadmap for managing algorithmic bias during ERP customization in practice.
Originality/value
To the best of the authors’ knowledge, no prior research has attempted to understand the algorithmic bias that occurs during the execution of the ERP customization algorithm (with or without AI).
Magdalena Saldana-Perez, Giovanni Guzmán, Carolina Palma-Preciado, Amadeo Argüelles-Cruz and Marco Moreno-Ibarra
Abstract
Purpose
Climate change is a problem that concerns all of us. Despite the information produced by organizations such as the Expert Team on Climate Change Detection and Indices and the United Nations, only a few cities have been planned taking into account climate change indices. This paper aims to study climatic variations, how climate conditions might change in the future and how these changes will affect activities and living conditions in cities, specifically focusing on Mexico City.
Design/methodology/approach
In this approach, two distinct machine learning regression models, k-Nearest Neighbors and Support Vector Regression, were used to predict variations in climate change indices within select urban areas of Mexico City. The calculated indices are based on maximum, minimum and average temperature data collected from the National Water Commission in Mexico and the Scientific Research Center of Ensenada. The methodology involves pre-processing temperature data to create a training data set for the regression algorithms. It then computes predictions for each temperature parameter and ultimately assesses the performance of these algorithms based on precision metric scores.
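The k-Nearest Neighbors regression step, together with the R2 score reported below, can be sketched as follows. This is a minimal stand-in on synthetic data, not the paper's temperature records or its exact model configuration.

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, k=3):
    """Predict each query as the mean target of its k nearest neighbours."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def r2_score(y_true, y_pred):
    """Coefficient of determination, the performance metric cited here."""
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

On smooth targets with dense training coverage, neighbour averaging tracks the signal closely, which is consistent with the high R2 the authors report for this model.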
Findings
This paper combines a geospatial perspective with computational tools and machine learning algorithms. Among the two regression algorithms used, it was observed that k-Nearest Neighbors produced superior results, achieving an R2 score of 0.99, in contrast to Support Vector Regression, which yielded an R2 score of 0.74.
Originality/value
The full potential of machine learning algorithms has not been fully harnessed for predicting climate indices. This paper also identifies the strengths and weaknesses of each algorithm and how the generated estimations can then be considered in the decision-making process.
Shahin Alipour Bonab, Alireza Sadeghi and Mohammad Yazdani-Asrami
Abstract
Purpose
The ionization of the air surrounding the phase conductor in high-voltage transmission lines results in a phenomenon known as the Corona effect. To avoid this, Corona rings are used to dampen the electric field imposed on the insulator. The purpose of this study is to present a fast and intelligent surrogate model for determining the electric field imposed on the surface of a 120 kV composite insulator in the presence of the Corona ring.
Design/methodology/approach
Usually, the structural design parameters of the Corona ring are selected through an optimization procedure combined with numerical simulations such as the finite element method (FEM). These methods are slow and computationally expensive, severely reducing the speed of optimization. In this paper, a novel surrogate model is proposed that can calculate the maximum electric field imposed on a ceramic insulator in a 120 kV line. The surrogate model was created based on different scenarios of the height, radius and inner radius of the Corona ring as the inputs of the model, while the maximum electric field on the body of the insulator was considered as the output.
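The defining trait of the cascade-forward network used below is that inputs also connect directly to the output, alongside the hidden-layer path. A minimal numpy sketch, trained on a synthetic target rather than FEM samples; the (height, radius, inner radius) framing is the paper's, but the data and training details here are assumptions.

```python
import numpy as np

def train_cfnn(X, y, hidden=8, lr=0.05, epochs=3000, seed=0):
    """Minimal cascade-forward net: the input feeds the hidden layer AND,
    directly, the linear output; trained by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden)   # hidden -> output weights
    Wd = rng.normal(scale=0.5, size=d)        # direct input -> output weights
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + X @ Wd + b2 - y        # prediction error
        gH = np.outer(err, W2) * (1.0 - H ** 2)   # backprop through tanh
        W1 -= lr * (X.T @ gH) / n
        b1 -= lr * gH.mean(axis=0)
        W2 -= lr * (H.T @ err) / n
        Wd -= lr * (X.T @ err) / n
        b2 -= lr * err.mean()
    return lambda Q: np.tanh(Q @ W1 + b1) @ W2 + Q @ Wd + b2
```

Once trained, evaluating the surrogate is a few matrix products, which is why inference times in the millisecond range (as reported below) are plausible compared with a full FEM solve.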
Findings
The proposed model was based on artificial intelligence techniques that have high accuracy and low computational time. Three methods were used to develop the AI-based surrogate model, namely, cascade-forward neural network (CFNN), support vector regression and k-nearest neighbors regression. The results indicated that the CFNN has the highest accuracy among these methods, with 99.81% R-squared and only 0.045468 root mean squared error, while the testing time is less than 10 ms.
Originality/value
To the best of the authors’ knowledge, for the first time, a surrogate method is proposed for predicting the maximum electric field imposed on high-voltage insulators in the presence of a Corona ring, which is faster than any conventional finite element method.
Yumeng Feng, Weisong Mu, Yue Li, Tianqi Liu and Jianying Feng
Abstract
Purpose
For a better understanding of the preferences and differences of young consumers in emerging wine markets, this study aims to propose a clustering method to segment the super-new generation wine consumers based on their sensitivity to wine brand, origin and price and then conduct user profiles for segmented consumer groups from the perspectives of demographic attributes, eating habits and wine sensory attribute preferences.
Design/methodology/approach
We first proposed a consumer clustering perspective based on their sensitivity to wine brand, origin and price and then conducted an adaptive density peak and label propagation layer-by-layer (ADPLP) clustering algorithm to segment consumers, which addresses the issues of incorrect center selection and inaccurate classification of remaining sample points in the traditional density peak clustering (DPC) algorithm. Then, we built a consumer profile system from the perspectives of demographic attributes, eating habits and wine sensory attribute preferences for the segmented consumer groups.
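The baseline that ADPLP improves on, density-peak clustering, can be sketched compactly: a cluster center is a point with high local density (rho) that lies far from any denser point (delta). This is the standard DPC idea on synthetic data, not the authors' adaptive variant or their consumer features.

```python
import numpy as np

def dpc(X, dc=1.0, n_clusters=2):
    """Plain density-peak clustering. Centers maximize rho * delta;
    remaining points inherit the label of their nearest denser point."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1)   # Gaussian kernel density
    order = np.argsort(-rho)                   # densest first
    delta = np.empty(len(X))
    parent = np.full(len(X), -1)               # nearest denser point
    delta[order[0]] = D[order[0]].max()        # densest point: max distance
    for i in range(1, len(X)):
        p, denser = order[i], order[:i]
        j = denser[np.argmin(D[p, denser])]
        delta[p], parent[p] = D[p, j], j
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(len(X), -1)
    labels[centers] = np.arange(n_clusters)
    for p in order:                            # assign in density order, so
        if labels[p] == -1:                    # each parent is already labeled
            labels[p] = labels[parent[p]]
    return labels
```

The failure modes the abstract mentions are visible here: a poorly chosen `dc` or a misleading rho-delta ranking picks wrong centers, and one wrong assignment propagates to every point downstream of it, which is what the ADPLP modifications target.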
Findings
In this study, 10 typical public datasets and 6 basic test algorithms were used to evaluate the proposed method, and the results showed that the ADPLP algorithm was optimal or suboptimal on the 10 datasets, with accuracy above 0.78. The average improvement in accuracy over the base DPC algorithm is 0.184. As an outcome of the wine consumer profiles, sensitive consumers prefer wines with medium prices of 100–400 CNY and more personalized brands and origins, while casual consumers are fond of popular brands, popular origins and low prices within 50 CNY. The wine sensory attributes preferred by super-new generation consumers are red, semi-dry, semi-sweet, still, fresh-tasting, fruity, floral and low in acid.
Practical implications
Young Chinese consumers are the main driver of wine consumption in the future. This paper provides a tool for decision-makers and marketers to identify the preferences of young consumers quickly, which is meaningful and helpful for wine marketing.
Originality/value
In this study, the ADPLP algorithm was introduced for the first time. Subsequently, the user profile label system was constructed for segmented consumers to highlight their characteristics and demand partiality from three aspects: demographic characteristics, consumers' eating habits and consumers' preferences for wine attributes. Moreover, the ADPLP algorithm can be considered for user profiles on other alcoholic products.
Aleena Swetapadma, Tishya Manna and Maryam Samami
Abstract
Purpose
A novel method has been proposed to reduce the false alarm rate for arrhythmia patients regarding life-threatening conditions in the intensive care unit. For this purpose, the arterial blood pressure, photoplethysmogram (PLETH), electrocardiogram (ECG) and respiratory (RESP) signals are considered as input signals.
Design/methodology/approach
Three machine learning approaches, namely a feed-forward artificial neural network (ANN), an ensemble learning method and a k-nearest neighbors search, are used to detect false alarms. The proposed method has been implemented using Arduino and MATLAB/SIMULINK for real-time monitoring data of ICU arrhythmia patients.
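The cross-checking logic behind multi-signal false-alarm suppression can be sketched as an ensemble vote. The rule-based detectors and feature names below are illustrative assumptions, not the paper's trained models: the idea shown is only that an alarm contradicted by independent channels (e.g. an asystole alarm while PLETH still shows a pulse) is likely false.

```python
def majority_vote(detectors, features):
    """Return the label most base detectors agree on."""
    votes = [d(features) for d in detectors]
    return max(set(votes), key=votes.count)

# Illustrative per-signal rules (hypothetical thresholds); a real system
# would substitute the trained ANN, ensemble and kNN classifiers.
ecg_rule = lambda f: "false_alarm" if f["ecg_hr"] > 40 else "true_alarm"
pleth_rule = lambda f: "false_alarm" if f["pleth_rate"] > 40 else "true_alarm"
resp_rule = lambda f: "false_alarm" if f["resp_rate"] > 6 else "true_alarm"
```

An alarm is suppressed only when the majority of channels report normal activity, so a single noisy lead cannot trigger or silence an alarm on its own.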
Findings
The proposed method detects false alarms with an accuracy of 99.4 per cent during asystole, 100 per cent during ventricular flutter, 98.5 per cent during ventricular tachycardia, 99.6 per cent during bradycardia and 100 per cent during tachycardia. The proposed framework is adaptive in many scenarios, easy to implement, computationally friendly, and highly accurate and robust, with no overfitting issue.
Originality/value
As ECG signals consist of the PQRST wave pattern, any deviation from the normal pattern may signify an alarming condition. These deviations can be utilized as input to classifiers for the detection of false alarms; hence, there is no need for other feature extraction techniques. The feed-forward ANN with the Levenberg–Marquardt algorithm has shown a higher rate of convergence than other neural network algorithms, which helps provide better accuracy with no overfitting.
Shrutika Sharma, Vishal Gupta, Deepa Mudgal and Vishal Srivastava
Abstract
Purpose
Three-dimensional (3D) printing is highly dependent on printing process parameters for achieving high mechanical strength. It is a time-consuming and expensive operation to experiment with different printing settings. The current study aims to propose a regression-based machine learning model to predict the mechanical behavior of ulna bone plates.
Design/methodology/approach
The bone plates were formed using the fused deposition modeling (FDM) technique, with printing attributes being varied. Machine learning models such as linear regression, AdaBoost regression, gradient boosting regression (GBR), random forest, decision trees and k-nearest neighbors were trained to predict tensile strength and flexural strength. Model performance was assessed using root mean square error (RMSE), coefficient of determination (R2) and mean absolute error (MAE).
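For squared loss, gradient boosting reduces to repeatedly fitting a weak learner to the current residuals and adding a shrunken copy to the ensemble. A minimal stump-based sketch on synthetic data, standing in for the GBR model used here (not the paper's FDM data or its tuned implementation):

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on r."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:     # keep both sides non-empty
            left = X[:, f] <= t
            pl, pr = r[left].mean(), r[~left].mean()
            err = ((r[left] - pl) ** 2).sum() + ((r[~left] - pr) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, f, t, pl, pr)
    _, f, t, pl, pr = best
    return lambda Q: np.where(Q[:, f] <= t, pl, pr)

def gbr_fit(X, y, n_rounds=50, lr=0.1):
    """Each round fits a stump to the residual (the negative gradient of
    squared loss) and adds it with shrinkage lr."""
    base = y.mean()
    pred, stumps = np.full(len(y), base), []
    for _ in range(n_rounds):
        s = fit_stump(X, y - pred)
        stumps.append(s)
        pred = pred + lr * s(X)
    return lambda Q: base + lr * sum(s(Q) for s in stumps)
```

The shrinkage factor trades rounds for robustness: each stump corrects only a tenth of the remaining error, which is the usual reason boosted ensembles generalize better than a single deep tree.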
Findings
Traditional experimentation with various settings is both time-consuming and expensive, emphasizing the need for alternative approaches. Among the models tested, the GBR model demonstrated the best performance in predicting both tensile and flexural strength, achieving the lowest RMSE, highest R2 and lowest MAE: 1.4778 ± 0.4336 MPa, 0.9213 ± 0.0589 and 1.2555 ± 0.3799 MPa, respectively, for tensile strength, and 3.0337 ± 0.3725 MPa, 0.9269 ± 0.0293 and 2.3815 ± 0.2915 MPa, respectively, for flexural strength. The findings open up opportunities for doctors and surgeons to use GBR as a reliable tool for fabricating patient-specific bone plates, without the need for extensive trial experiments.
Research limitations/implications
The current study is limited to the usage of a few models. Other machine learning-based models can be used for prediction-based study.
Originality/value
This study uses machine learning to predict the mechanical properties of FDM-based distal ulna bone plate, replacing traditional design of experiments methods with machine learning to streamline the production of orthopedic implants. It helps medical professionals, such as physicians and surgeons, make informed decisions when fabricating customized bone plates for their patients while reducing the need for time-consuming experimentation, thereby addressing a common limitation of 3D printing medical implants.
Arthi R., Nayana J.S. and Rajarshee Mondal
Abstract
Purpose
Given the benefits offered by quantum key distribution (QKD), including unbreakable security, there is growing interest in the practical realization of quantum communication. Realizing an optimal protocol predictor for QKD is a critical step toward the commercialization of QKD.
Design/methodology/approach
The proposed work designs machine learning models, including the k-nearest neighbor algorithm, convolutional neural networks, decision tree (DT), support vector machine and random forest (RF), as optimal protocol selectors for a quantum key distribution network (QKDN).
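Of the models listed, a decision tree is the simplest to sketch; a one-level tree (stump) already shows the selector framing. The channel features and protocol labels below are hypothetical placeholders, not the paper's dataset or its actual protocol names.

```python
from collections import Counter

def train_stump(rows, labels):
    """One-level decision tree: choose the (feature, threshold) split that
    minimizes training misclassifications; each leaf predicts its majority
    label."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            lo = [l for r, l in zip(rows, labels) if r[f] <= t]
            hi = [l for r, l in zip(rows, labels) if r[f] > t]
            lo_lab = Counter(lo).most_common(1)[0][0]
            hi_lab = Counter(hi).most_common(1)[0][0] if hi else lo_lab
            err = sum(l != lo_lab for l in lo) + sum(l != hi_lab for l in hi)
            if best is None or err < best[0]:
                best = (err, f, t, lo_lab, hi_lab)
    _, f, t, lo_lab, hi_lab = best
    return lambda x: lo_lab if x[f] <= t else hi_lab
```

A random forest, the best performer reported below, is essentially many such trees grown on bootstrap samples with randomized feature choices, voting on the protocol.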
Findings
Because of the effectiveness of machine learning methods in predicting effective solutions from data, these models are well suited as optimal protocol selectors for achieving high efficiency in a QKDN. The results show that the best machine learning method for predicting the optimal protocol in QKD is the RF algorithm, which validates the effectiveness of machine learning in optimal protocol selection.
Originality/value
This task was previously addressed using algorithms such as local search or exhaustive traversal; however, the major downside of these algorithms is that they take a very long time to return results, which is unacceptable for commercial systems. Hence, machine learning methods are proposed and their prediction effectiveness examined for achieving high efficiency.
Tulsi Pawan Fowdur and Lavesh Babooram
Abstract
Purpose
The purpose of this paper is the capture and analysis of network traffic using an array of machine learning (ML) and deep learning (DL) techniques to classify network traffic into different classes and predict network traffic parameters.
Design/methodology/approach
The classifier models include k-nearest neighbour (KNN), multilayer perceptron (MLP) and support vector machine (SVM), while the regression models studied are multiple linear regression (MLR) as well as MLP. The analytics were performed on both a local server and a servlet hosted on the International Business Machines (IBM) cloud. Moreover, the local server could aggregate data from multiple devices on the network and perform collaborative ML to predict network parameters. With optimised hyperparameters, analytical models were incorporated in the cloud-hosted Java servlets, which operate on a client–server basis where the back-end communicates with Cloudant databases.
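Of the regression models listed, multiple linear regression has a closed-form least-squares fit. A sketch on synthetic traffic-like features; the feature names and data are assumptions, not the captured traffic used in the paper.

```python
import numpy as np

def fit_mlr(X, y):
    """Multiple linear regression: least-squares fit with an intercept term."""
    A = np.hstack([X, np.ones((len(X), 1))])   # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Q: np.hstack([Q, np.ones((len(Q), 1))]) @ coef
```

A hypothetical usage: X holds per-flow features such as offered load and mean packet size, and y is the network parameter to predict (e.g. throughput); the returned callable then predicts that parameter for new flows.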
Findings
Regarding classification, it was found that KNN performs significantly better than MLP and SVM with a comparative precision gain of approximately 7%, when classifying both Wi-Fi and long term evolution (LTE) traffic.
Originality/value
Collaborative regression models using traffic collected from two devices were also tested and resulted in an increased average accuracy of 0.50% across all variables with a multivariate MLP model.
Jianhua Zhang, Liangchen Li, Fredrick Ahenkora Boamah, Dandan Wen, Jiake Li and Dandan Guo
Abstract
Purpose
Traditional case-adaptation methods have poor accuracy, low efficiency and limited applicability, which cannot meet the needs of knowledge users. To address the shortcomings of the existing research in the industry, this paper proposes a case-adaptation optimization algorithm to support the effective application of tacit knowledge resources.
Design/methodology/approach
The attribute simplification algorithm based on the forward search strategy in the neighborhood decision information system is implemented to realize the vertical dimensionality reduction of the case base, and the fuzzy C-means (FCM) clustering algorithm based on the simulated annealing genetic algorithm (SAGA) is implemented to compress the case base horizontally across multiple decision classes. Then, the subspace k-nearest neighbors (KNN) algorithm is used to induce the decision rules for the set of adapted cases to complete the optimization of the adaptation model.
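The fuzzy C-means core of the horizontal compression step alternates between soft-membership and center updates. A minimal sketch on synthetic data, without the simulated-annealing genetic initialization the paper couples it with; the spread-out deterministic initialization here is a simple stand-in assumption.

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50):
    """Fuzzy C-means: alternate membership (U) and center (V) updates.
    Centers start from points spread across the data set, a plain
    stand-in for the paper's SAGA-based initialization."""
    V = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=-1) + 1e-12
        U = d ** (-2.0 / (m - 1.0))           # standard FCM membership rule
        U /= U.sum(axis=1, keepdims=True)     # memberships sum to 1 per point
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzily weighted means
    return U, V
```

Because every case keeps a graded membership in every cluster, borderline cases are not forced into a single class, which is what makes the compressed case base usable across multiple decision classes.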
Findings
The findings suggest that the rapid enrichment of data, information and tacit knowledge in the field of practice has led to low efficiency and low utilization in knowledge dissemination, and that this algorithm can effectively alleviate the problem of users falling into “knowledge disorientation” in the era of the knowledge economy.
Practical implications
This study provides a model with case knowledge that meets users’ needs, thereby effectively improving the application of the tacit knowledge in the explicit case base and the problem-solving efficiency of knowledge users.
Social implications
The adaptation model can serve as a stable and efficient prediction model to make predictions for the effects of the many logistics and e-commerce enterprises' plans.
Originality/value
This study designs a multi-decision class case-adaptation optimization study based on forward attribute selection strategy-neighborhood rough sets (FASS-NRS) and simulated annealing genetic algorithm-fuzzy C-means (SAGA-FCM) for tacit knowledgeable exogenous cases. By effectively organizing and adjusting tacit knowledge resources, knowledge service organizations can maintain their competitive advantages. The algorithm models established in this study develop theoretical directions for a multi-decision class case-adaptation optimization study of tacit knowledge.