Search results
1 – 10 of over 1,000
Biplab Bhattacharjee, Kavya Unni and Maheshwar Pratap
Abstract
Purpose
Product returns are a major challenge for e-businesses as they involve huge logistical and operational costs. Therefore, it becomes crucial to predict returns in advance. This study aims to evaluate different genres of classifiers for product return chance prediction, and further optimizes the best performing model.
Design/methodology/approach
An e-commerce data set having categorical type attributes has been used for this study. Feature selection based on chi-square provides a selective feature set which is used as input for model building. Predictive models are attempted using individual classifiers, ensemble models and deep neural networks. For performance evaluation, 75:25 train/test split and 10-fold cross-validation strategies are used. To improve the predictability of the best performing classifier, hyperparameter tuning is performed using different optimization methods such as random search, grid search, the Bayesian approach and evolutionary models (genetic algorithm, differential evolution and particle swarm optimization).
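As a rough illustration of the tuning strategies the abstract compares, here is a minimal random-search loop over two hypothetical XGBoost-style hyperparameters; the objective below is a stand-in for a real cross-validated F1-score, not the authors' model:

```python
import random

def cross_val_score(params):
    # Hypothetical stand-in for a classifier's cross-validated F1-score;
    # it peaks near max_depth=6, learning_rate=0.1 (illustrative only).
    return 1.0 - abs(params["max_depth"] - 6) * 0.02 - abs(params["learning_rate"] - 0.1)

def random_search(n_trials, seed=0):
    """Random search: sample hyperparameters uniformly, keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "max_depth": rng.randint(2, 12),
            "learning_rate": rng.uniform(0.01, 0.5),
        }
        score = cross_val_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(200)
```

Grid search would enumerate a fixed lattice of the same parameters, while a Bayesian optimizer would fit a surrogate model to past trials to pick the next candidate; this loop only shows the shared evaluate-and-keep-best skeleton.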
Findings
A comparison of F1-scores revealed that the Bayesian approach outperformed all other optimization approaches. The predictability of the Bayesian-optimized model is further compared with that of other classifiers using experimental analysis. The Bayesian-optimized XGBoost model possessed superior performance, with accuracies of 77.80% and 70.35% for holdout and 10-fold cross-validation methods, respectively.
Research limitations/implications
Given the anonymized data, the effects of individual attributes on outcomes could not be investigated in detail. The Bayesian-optimized predictive model may be used in decision support systems, enabling real-time prediction of returns and the implementation of preventive measures.
Originality/value
There are very few reported studies on predicting the chance of order return in e-businesses. To the best of the authors’ knowledge, this study is the first to compare different optimization methods and classifiers, demonstrating the superiority of the Bayesian-optimized XGBoost classification model for returns prediction.
Claire K. Wan and Mingchang Chih
Abstract
Purpose
We argue that a fundamental issue regarding how to search and how to switch between different cognitive modes lies in the decision rules that influence the dynamics of learning and exploration. We examine the search logics underlying these decision rules and propose conceptual prompts that can be applied mentally or computationally to aid managers’ decision-making.
Design/methodology/approach
By applying Multi-Armed Bandit (MAB) modeling to simulate agents’ interaction with dynamic environments, we compared the patterns and performance of selected MAB algorithms under different configurations of environmental conditions.
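The "simple heuristic-based exploration strategy" referred to below can be sketched as an epsilon-greedy MAB policy; this is a standard illustration, not the paper's exact algorithms or parameters:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=5000, seed=42):
    """Epsilon-greedy bandit: with probability epsilon pull a random arm
    (explore), otherwise pull the best empirical arm so far (exploit)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = rng.gauss(true_means[arm], 0.1)  # noisy environment
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return total_reward / horizon

avg_reward = epsilon_greedy([0.2, 0.5, 0.8])
```

An uncertainty-based strategy (e.g. UCB) would instead add a confidence bonus to each arm's estimate, which is what lets it react faster when the environment shifts.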
Findings
We develop three conceptual prompts. First, the simple heuristic-based exploration strategy works well in conditions of low environmental variability and few alternatives. Second, an exploration strategy that combines simple and de-biasing heuristics is suitable for most dynamic and complex decision environments. Third, the uncertainty-based exploration strategy is more applicable in the condition of high environmental unpredictability as it can more effectively recognize deviated patterns.
Research limitations/implications
This study contributes to emerging research on using algorithms to develop novel concepts and combining heuristics and algorithmic intelligence in strategic decision-making.
Practical implications
This study shows that managers can conceptually apply a range of exploration strategies and that the adaptability of cognitive-distant search may be underestimated in turbulent environments.
Originality/value
Drawing on insights from machine learning and cognitive psychology research, we demonstrate the fitness of different exploration strategies in different dynamic environmental configurations by comparing the different search logics that underlie the three MAB algorithms.
Shikha Pandey, Yogesh Iyer Murthy and Sumit Gandhi
Abstract
Purpose
This study aims to assess support vector machine (SVM) models' predictive ability to estimate half-cell potential (HCP) values from input parameters by using Bayesian optimization, grid search and random search.
Design/methodology/approach
A data set with 1,134 rows and 6 columns is used for principal component analysis (PCA) to minimize dimensionality while preserving 95% of the explained variance. HCP is predicted from temperature, age, relative humidity and X and Y lengths. Root mean square error (RMSE), R-squared, mean squared error (MSE), mean absolute error, prediction speed and training time are used to measure model effectiveness. Shapley analysis is also performed.
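The 95%-variance criterion used with PCA can be illustrated with a short helper that counts how many principal components are needed; the eigenvalues below are made up for illustration, not taken from the paper:

```python
def components_for_variance(eigenvalues, target=0.95):
    """Return the smallest number of principal components whose
    cumulative explained-variance ratio reaches `target`."""
    total = sum(eigenvalues)
    cumulative = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cumulative += ev / total
        if cumulative >= target:
            return k
    return len(eigenvalues)

# Illustrative covariance eigenvalues for a 6-feature data set:
k = components_for_variance([4.1, 0.9, 0.5, 0.3, 0.15, 0.05])
```

With these hypothetical eigenvalues, four of the six components retain at least 95% of the variance, so the SVM would be trained on a 4-dimensional projection.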
Findings
The study reveals variations in predictive performance across different optimization methods, with RMSE values ranging from 18.365 to 30.205 and R-squared values spanning from 0.88 to 0.96. Additionally, differences in training times, prediction speeds and model complexities are observed, highlighting the trade-offs between model accuracy and computational efficiency.
Originality/value
This study contributes to the understanding of SVM model efficacy in HCP prediction, emphasizing the importance of optimization techniques, model complexity and dimensionality reduction methods such as PCA.
Pratheek Suresh and Balaji Chakravarthy
Abstract
Purpose
As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) Networks based on recursive encoder-decoder architecture.
Design/methodology/approach
This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of the recursive-long short-term memory-encoder-decoder (R-LSTM-ED), recursive-convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches is compared using mean absolute error, root mean square error, mean absolute percentage error and coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within training data on the performance of the model is investigated.
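The recursive prediction idea behind these architectures, feeding each forecast back as input for the next step, can be sketched independently of any neural network; the mean-of-window "model" below is a placeholder for a trained LSTM encoder-decoder:

```python
def recursive_forecast(model, history, n_steps):
    """Recursive multi-step prediction: each forecast is appended to the
    input window and fed back to the model for the next step."""
    window = list(history)
    forecasts = []
    for _ in range(n_steps):
        next_val = model(window)
        forecasts.append(next_val)
        window = window[1:] + [next_val]  # slide the window forward
    return forecasts

# Stand-in "model": predicts the mean of the window. A real system
# would call a trained network on the window here.
mean_model = lambda w: sum(w) / len(w)
preds = recursive_forecast(mean_model, [40.0, 42.0, 44.0], 3)
```

This recursive loop is also why noise matters so much in such studies: any error in one forecast is fed back into every subsequent input window.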
Findings
The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and R-CNN-LSTM model by 4%, 11% and 12.3% in all forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period based on the study is found to be 2 s and the window size to be 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.
Research limitations/implications
The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.
Originality/value
The proposed multivariate-recursive-prediction models are trained and tested by using real Data Centre workload traces applied to the immersion-cooled system developed in the laboratory.
Armindo Lobo, Paulo Sampaio and Paulo Novais
Abstract
Purpose
This study proposes a machine learning framework to predict customer complaints from production line tests in an automotive company's lot-release process, enhancing Quality 4.0. It aims to design and implement the framework, compare different machine learning (ML) models and evaluate a non-sampling threshold-moving approach for adjusting prediction capabilities based on product requirements.
Design/methodology/approach
This study applies the Cross-Industry Standard Process for Data Mining (CRISP-DM) and four ML models to predict customer complaints from automotive production tests. It employs cost-sensitive and threshold-moving techniques to address data imbalance, with the F1-Score and Matthews correlation coefficient assessing model performance.
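The non-sampling threshold-moving approach, adjusting the decision threshold of a fixed model instead of retraining it, can be sketched as follows; the scores and labels are illustrative, not the paper's data:

```python
def f1_at_threshold(probs, labels, threshold):
    """Binary F1 when positives are predicted at probs >= threshold."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(probs, labels, candidates):
    """Threshold-moving: keep the trained model fixed and scan decision
    thresholds, tuning prediction behaviour without retraining."""
    return max(candidates, key=lambda t: f1_at_threshold(probs, labels, t))

# Hypothetical predicted probabilities from an imbalanced problem:
probs  = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1,   0,   1]
t = best_threshold(probs, labels, [0.3, 0.5, 0.7])
```

This is what lets a company adjust the model per product requirement "by changing only the threshold": stricter products get a lower threshold (more recall), cheaper ones a higher threshold (more precision).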
Findings
The framework effectively predicts customer complaint-related tests. XGBoost outperformed the other models with an F1-Score of 72.4% and a Matthews correlation coefficient of 75%. It improves the lot-release process and cost efficiency over heuristic methods.
Practical implications
The framework has been tested on real-world data and shows promising results in improving lot-release decisions and reducing complaints and costs. It enables companies to adjust predictive models by changing only the threshold, eliminating the need for retraining.
Originality/value
To the best of our knowledge, there is limited literature on using ML to predict customer complaints for the lot-release process in an automotive company. Our proposed framework integrates ML with a non-sampling approach, demonstrating its effectiveness in predicting complaints and reducing costs, fostering Quality 4.0.
Cheong Kim, Jungwoo Lee and Kun Chang Lee
Abstract
Purpose
The main objective of this study is to determine the factors that have the greatest impact on travelers' opinions of airports.
Design/methodology/approach
A total of 11,656 customer reviews for 649 airports around the world were gathered from an airport quality rating website following the COVID-19 outbreak. The data set was examined using hierarchical regression, PLS-SEM and the unsupervised Bayesian algorithm-based PSEM in order to verify the hypotheses.
Findings
The results showed that travelers’ intentions to recommend airports are significantly influenced by their perceptions of the quality of the servicescape, staff and services.
Practical implications
This research offers airport managers decision-support implications for improving airport service quality, thereby encouraging travelers’ positive intentions toward recommending the airports and increasing the likelihood of retaining more passengers.
Originality/value
This study also suggests a quick-to-implement visual decision-making mechanism based on PSEM that is simple to understand.
Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan
Abstract
Purpose
Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.
Design/methodology/approach
A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
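The DWT-based min/max features that the SHAP analysis later highlights can be illustrated with a plain Haar wavelet decomposition; the paper's actual wavelet family and number of levels may differ:

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns
    (approximation, detail) coefficient lists for an even-length signal."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        approx.append((signal[i] + signal[i + 1]) / 2 ** 0.5)
        detail.append((signal[i] - signal[i + 1]) / 2 ** 0.5)
    return approx, detail

def min_max_features(signal, levels):
    """Min/max of the detail coefficients at each decomposition level --
    the kind of flow-distinguishing statistics described above."""
    features = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append((min(detail), max(detail)))
    return features

# Toy pressure signal (illustrative values, not experimental data):
feats = min_max_features([1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0], 2)
```

Each flow regime produces a different pressure-fluctuation signature, so these per-level statistics become the inputs that KFDA projects and classifies.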
Findings
The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy at 95.2% and 88.8%, respectively. Results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that minimum and maximum values extracted at the fourth and second signal decomposition levels of DWT are the best flow-distinguishing features.
Practical implications
This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.
Originality/value
This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.
Minghao Wang, Ming Cong, Yu Du, Huageng Zhong and Dong Liu
Abstract
Purpose
Making robots truly autonomous has always been the goal of mobile robot research. For mobile robots, simultaneous localization and mapping (SLAM) research is no longer satisfied with enabling robots to build maps by remote control; the focus is shifting to autonomous exploration of unknown areas with low light, complex spatial features and unstructured conditions, such as underground special spaces (dark and multi-intersection). This study aims to propose a novel robot structure with mapping and autonomous exploration algorithms. Experiments verify the exploration ability of the robot.
Design/methodology/approach
A small bio-inspired mobile robot suitable for underground special spaces (dark and multi-intersection) is designed, and its control system is built on an STM32 and a Jetson Nano. The robot is equipped with dual laser sensors and an Ackermann chassis, which suit the practical requirements of exploration in underground special spaces. Based on the graph-optimization SLAM method, an optimization method for map construction is proposed: the iterative closest point (ICP) algorithm matches two laser frames to recalculate the relative pose of the robot, which improves the robot’s sensor utilization in underground space and increases synchronous positioning accuracy. In addition, a new Bio-RRT method for autonomous exploration is proposed based on boundary cells and the rapidly-exploring random tree (RRT) algorithm.
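The target-bias idea attributed to Bio-RRT, sampling the goal itself with some probability so the tree grows toward it, can be sketched with a plain goal-biased RRT in obstacle-free 2-D space; the paper's boundary-cell firing mechanism is not modeled here:

```python
import math
import random

def goal_biased_rrt(start, goal, step=0.5, bias=0.2, max_iter=2000, seed=3):
    """Goal-biased RRT: with probability `bias` the random sample IS the
    goal, steering tree growth directly toward it (the target-bias idea)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        sample = goal if rng.random() < bias else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:
            path, k = [], len(nodes) - 1  # backtrack via parent pointers
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = goal_biased_rrt((0.0, 0.0), (9.0, 9.0))
```

Because biased samples pull growth straight at the target, the resulting path nodes cluster along the start-to-goal line, which is exactly the shorter-path effect the Findings describe.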
Findings
The experimental results show that the upgraded SLAM method proposed in this paper achieves better map construction. The algorithm also exhibits good real-time performance, high accuracy and strong maintainability; in particular, it can update the map continuously over time while maintaining positioning accuracy during map updates. The Bio-RRT method, fused with the firing excitation mechanism of boundary cells, grows the random tree more purposefully: fewer expansion nodes are needed and less information must be processed, so path planning is faster and more efficient. In addition, the target bias makes the random tree grow directly toward the target point with a certain probability, and the resulting path nodes are distributed on or close to the line between the initial point and the target point, which shortens the path and reduces the moving cost of the mobile robot. The final experiments demonstrate that the proposed upgraded SLAM and Bio-RRT methods can better complete the underground special space exploration task.
Originality/value
Against the background of robot autonomous exploration in underground special spaces, a new bio-inspired mobile robot structure with mapping and autonomous exploration algorithms is proposed in this paper. The robot structure is constructed, and the perception, control, driving and communication units are described in detail; the robot satisfies the practical requirements of exploring dark, multi-intersection underground spaces. An upgraded graph-optimization laser SLAM algorithm and an interframe matching optimization method are then proposed. Finally, the Bio-RRT autonomous exploration method is presented, which takes less time in equally open space and searches multi-intersection spaces more efficiently. The experimental results demonstrate that the proposed upgraded SLAM and Bio-RRT methods can better complete the underground space exploration task.
James Christopher Westland and Jian Mou
Abstract
Purpose
Internet search is a $120bn business that answers lists of search terms or keywords with relevant links to Internet webpages. Only a few companies have sufficient scale to compete and thus economics of the process are paramount. This study aims to develop a detailed industry-specific modeling of the economics of internet search.
Design/methodology/approach
The current research develops a stochastic model of the process of Internet indexing, search and retrieval in order to predict expected costs and revenues of particular configurations and usages.
Findings
The models characterize the behavior and economics of parameters that are not directly observable and whose distributions are difficult to determine empirically.
Originality/value
The model may be used to guide the economics of large search engine operations, including the advertising platforms that depend on them and largely fund them.
Zijing Ye, Huan Li and Wenhong Wei
Abstract
Purpose
Path planning is an important part of UAV mission planning. The main purpose of this paper is to overcome shortcomings of the standard particle swarm optimization (PSO), such as its tendency to fall into local optima, so that the improved PSO applied to UAV path planning enables the UAV to plan a better-quality path.
Design/methodology/approach
First, the fitness function is formulated by comprehensively considering the performance constraints of both the flight target and the UAV itself. Second, the standard PSO is improved, and the improved particle swarm optimization with multi-strategy fusion (MFIPSO) is proposed: the method introduces a sigmoid-like inertia weight, adaptively adjusts the learning factors, incorporates K-means clustering ideas and introduces a Cauchy perturbation factor. Finally, MFIPSO is applied to UAV path planning.
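For reference, the standard PSO skeleton that such variants build on looks like the sketch below; the sigmoid-like inertia schedule, K-means clustering and Cauchy perturbation described above would be layered onto this loop, and the sphere function is a stand-in objective, not a UAV fitness function:

```python
import random

def pso(objective, dim, n_particles=30, iters=200, seed=1):
    """Minimal PSO with a fixed inertia weight w; each particle is pulled
    toward its personal best (c1 term) and the global best (c2 term)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, val = pso(sphere, dim=2)
```

A constant inertia weight is exactly what makes standard PSO prone to premature convergence: a time-varying (e.g. sigmoid-like) schedule trades early exploration for late exploitation instead.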
Findings
Simulation experiments are conducted in simple and complex scenarios, with path quality measured by fitness value and straight-line rate. The experimental results show that MFIPSO enables the UAV to plan a better-quality path.
Originality/value
To address the standard PSO’s tendency toward problems such as premature convergence, MFIPSO is proposed. It introduces a sigmoid-like inertia weight and adaptively adjusts the learning factors, balancing the algorithm’s global search ability and local convergence ability. The idea of the K-means clustering algorithm is also incorporated to reduce the complexity of the algorithm while maintaining the diversity of the particle swarm. In addition, Cauchy perturbation is used to keep the algorithm from falling into local optima. Finally, the fitness function is formulated by comprehensively considering the performance constraints of both the flight target and the UAV itself, which improves the accuracy of the evaluation model.