Search results
1 – 10 of over 2000

Chon Van Le and Uyen Hoang Pham
Abstract
Purpose
This paper aims mainly at introducing applied statisticians and econometricians to the current research methodology with non-Euclidean data sets. Specifically, it provides the basis and rationale for statistics in Wasserstein space, where the metric on probability measures is taken as a Wasserstein metric arising from optimal transport theory.
Design/methodology/approach
The authors spell out the basis and rationale for using Wasserstein metrics on the data space of (random) probability measures.
Findings
In elaborating the new statistical analysis of non-Euclidean data sets, the paper illustrates the generalization of traditional aspects of statistical inference following Fréchet's program.
Originality/value
Besides the elaboration of research methodology for a new data analysis, the paper discusses the applications of Wasserstein metrics to the robustness of financial risk measures.
Wenzhen Yang, Shuo Shan, Mengting Jin, Yu Liu, Yang Zhang and Dongya Li
Abstract
Purpose
This paper aims to rapidly realize an in-situ quality inspection system for new injection molding (IM) tasks via a transfer learning (TL) approach and automation technology.
Design/methodology/approach
The proposed in-situ quality inspection system consists of an injection machine, USB camera, programmable logic controller and personal computer, interconnected via OPC or USB communication interfaces. This configuration enables seamless automation of the IM process, real-time quality inspection and automated decision-making. In addition, a MobileNet-based deep learning (DL) model is proposed for quality inspection of injection parts, fine-tuned using the TL approach.
Findings
Using the TL approach, the MobileNet-based DL model demonstrates exceptional performance, achieving a validation accuracy of 99.1% using merely 50 images per category. Its detection speed and accuracy surpass those of DenseNet121-based, VGG16-based, ResNet50-based and Xception-based convolutional neural networks. Further evaluation on a random data set of 120 images, assessed through the confusion matrix, attests to an accuracy rate of 96.67%.
Originality/value
The proposed MobileNet-based DL model achieves higher accuracy with less resource consumption using the TL approach. It is integrated with automation technologies to build the in-situ quality inspection system of injection parts, which improves the cost-efficiency by facilitating the acquisition and labeling of task-specific images, enabling automatic defect detection and decision-making online, thus holding profound significance for the IM industry and its pursuit of enhanced quality inspection measures.
Ismail Abiodun Sulaimon, Hafiz Alaka, Razak Olu-Ajayi, Mubashir Ahmad, Saheed Ajayi and Abdul Hye
Abstract
Purpose
Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully investigated. This paper aims to investigate the effect a traffic data set has on the performance of machine learning (ML) predictive models in AQ prediction.
Design/methodology/approach
To achieve this, the authors have set up an experiment with the control data set having only the AQ data set and meteorological (Met) data set, while the experimental data set is made up of the AQ data set, Met data set and traffic data set. Several ML models (such as extra trees regressor, eXtreme gradient boosting regressor, random forest regressor, K-neighbors regressor and two others) were trained, tested and compared on these individual combinations of data sets to predict the volume of PM2.5, PM10, NO2 and O3 in the atmosphere at various times of the day.
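The control-versus-experimental design described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the synthetic data, feature names and the choice of a random forest regressor are all assumptions made for demonstration.

```python
# Illustrative sketch: a model trained on meteorological features alone
# (control) versus one that also sees a traffic feature (experimental).
# Data, feature names and model choice are invented for this example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
met = rng.normal(size=(n, 2))        # e.g. temperature, wind speed
traffic = rng.normal(size=(n, 1))    # e.g. hourly vehicle count
pm25 = 3 * traffic[:, 0] + met[:, 0] + 0.5 * rng.normal(size=n)

scores = {}
for name, X in [("control", met), ("experimental", np.hstack([met, traffic]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, random_state=0)
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    scores[name] = mean_absolute_error(y_te, model.fit(X_tr, y_tr).predict(X_te))

print(scores)  # adding the traffic feature should lower the test MAE
```

In this synthetic setup the target depends strongly on traffic, so the experimental feature set yields a lower test error, mirroring the performance gap the study measures.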
Findings
The results showed that the ML algorithms react differently to the traffic data set, although it generally improved the performance of all the ML algorithms considered in this study by at least 20%, with an error reduction of at least 18.97%.
Research limitations/implications
This research is limited in terms of the study area, and the results cannot be generalized outside of the UK, as some of the inherent conditions may not be similar elsewhere. Additionally, only the ML algorithms commonly used in the literature are considered in this research, therefore leaving out a few other ML algorithms.
Practical implications
This study reinforces the belief that the traffic data set has a significant effect on improving the performance of air pollution ML prediction models. Hence, there is an indication that ML algorithms behave differently when trained with a traffic data set in the development of an AQ prediction model. This implies that developers and researchers in AQ prediction need to identify the ML algorithms that best serve their purposes before implementation.
Originality/value
The results of this study will enable researchers to focus on the algorithms that benefit most when using traffic data sets in AQ prediction.
H.G. Di, Pingbao Xu, Quanmei Gong, Huiji Guo and Guangbei Su
Abstract
Purpose
This study establishes a method for predicting ground vibrations caused by railway tunnels in unsaturated soils with spatial variability.
Design/methodology/approach
First, an improved 2.5D finite-element-method-perfectly-matched-layer (FEM-PML) model is proposed. The Galerkin method is used to derive the finite element expression in the ub-pl-pg format for unsaturated soil. Unlike the ub-v-w format, which has nine degrees of freedom per node, the ub-pl-pg format has only five degrees of freedom per node; this significantly enhances the calculation efficiency. The stretching function of the PML is adopted to handle the unbounded domain. Additionally, the 2.5D FEM-PML model couples the tunnel, vehicle and track structures. Next, the spatial variability of the soil parameters is simulated by random fields using the Monte Carlo method. By incorporating random fields of soil parameters into the 2.5D FEM-PML model, the effect of soil spatial variability on ground vibrations is demonstrated using a case study.
Findings
The spatial variability of the soil parameters primarily affected the vibration acceleration amplitude but had a minor effect on its spatial distribution and attenuation over time. In addition, ground vibration acceleration was more affected by the spatial variability of the soil bulk modulus of compressibility than by that of saturation.
Originality/value
Using the 2.5D FEM-PML model in the ub-pl-pg format of unsaturated soil enhances the computational efficiency. On this basis, with the random fields established by Monte Carlo simulation, the model can calculate the reliability of soil dynamics, which was rarely considered by previous models.
Xiaohui Jia, Chunrui Tang, Xiangbo Zhang and Jinyue Liu
Abstract
Purpose
This study aims to propose an efficient dual-robot task collaboration strategy to address the issue of low work efficiency and inability to meet the production needs of a single robot during construction operations.
Design/methodology/approach
A hybrid task allocation method based on integer programming and auction algorithms is proposed, with the aim of achieving a balanced workload between the two robots. In addition, while ensuring reasonable workload allocation between the two robots, an improved dual ant colony algorithm was used to solve the dual traveling salesman problem, and the global path planning of the two robots was determined, resulting in an efficient and collision-free path for the dual robots to operate. Meanwhile, an improved rapidly-exploring random tree (RRT) algorithm is introduced as a local obstacle avoidance strategy.
Findings
The proposed method combines randomization and iteration techniques to achieve an efficient task allocation strategy for two robots, ensuring the relative optimal global path of the two robots in cooperation and solving complex local obstacle avoidance problems.
Originality/value
This method is applied to the scene of steel bar tying in construction work, with the workload allocation and collaborative work between two robots as evaluation indicators. The experimental results show that this method can efficiently complete the steel bar tying operation, effectively reduce interference between the two robots and minimize the interference of obstacles in the environment.
Shrutika Sharma, Vishal Gupta, Deepa Mudgal and Vishal Srivastava
Abstract
Purpose
Three-dimensional (3D) printing is highly dependent on printing process parameters for achieving high mechanical strength. It is a time-consuming and expensive operation to experiment with different printing settings. The current study aims to propose a regression-based machine learning model to predict the mechanical behavior of ulna bone plates.
Design/methodology/approach
The bone plates were formed using fused deposition modeling (FDM) technique, with printing attributes being varied. The machine learning models such as linear regression, AdaBoost regression, gradient boosting regression (GBR), random forest, decision trees and k-nearest neighbors were trained for predicting tensile strength and flexural strength. Model performance was assessed using root mean square error (RMSE), coefficient of determination (R2) and mean absolute error (MAE).
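The three evaluation metrics named above (RMSE, R2 and MAE) can be written out in a minimal sketch; the sample strength values below are invented for illustration and are not the study's data.

```python
# Minimal sketch of the three evaluation metrics named in the abstract.
# The sample values are hypothetical, not the paper's measurements.
import math

def rmse(y_true, y_pred):
    # root mean square error
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # mean absolute error
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [30.0, 32.5, 28.0, 35.0]   # e.g. measured tensile strength, MPa
y_pred = [29.0, 33.0, 27.5, 34.0]   # hypothetical model predictions
print(rmse(y_true, y_pred), r2(y_true, y_pred), mae(y_true, y_pred))
```

A model such as GBR is then ranked by the lowest RMSE and MAE and the highest R2 across these predictions.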
Findings
Traditional experimentation with various settings is both time-consuming and expensive, emphasizing the need for alternative approaches. Among the models tested, the GBR model demonstrated the best performance in predicting both tensile and flexural strength, achieving the lowest RMSE, highest R2 and lowest MAE: 1.4778 ± 0.4336 MPa, 0.9213 ± 0.0589 and 1.2555 ± 0.3799 MPa, respectively, for tensile strength, and 3.0337 ± 0.3725 MPa, 0.9269 ± 0.0293 and 2.3815 ± 0.2915 MPa, respectively, for flexural strength. The findings open up opportunities for doctors and surgeons to use GBR as a reliable tool for fabricating patient-specific bone plates without the need for extensive trial experiments.
Research limitations/implications
The current study is limited to the usage of a few models. Other machine learning-based models can be used for prediction-based study.
Originality/value
This study uses machine learning to predict the mechanical properties of FDM-based distal ulna bone plate, replacing traditional design of experiments methods with machine learning to streamline the production of orthopedic implants. It helps medical professionals, such as physicians and surgeons, make informed decisions when fabricating customized bone plates for their patients while reducing the need for time-consuming experimentation, thereby addressing a common limitation of 3D printing medical implants.
Mohammed Ayoub Ledhem and Warda Moussaoui
Abstract
Purpose
This paper aims to apply several data mining techniques for predicting the daily precision improvement of Jakarta Islamic Index (JKII) prices based on big data of symmetric volatility in Indonesia’s Islamic stock market.
Design/methodology/approach
This research uses big data mining techniques to predict the daily precision improvement of JKII prices by applying AdaBoost, K-nearest neighbors, random forest and artificial neural networks. This research uses big data with symmetric volatility as inputs to the predicting model, whereas the closing prices of JKII were used as the target outputs of daily precision improvement. To choose the optimal prediction performance according to the criterion of the lowest prediction errors, this research uses four metrics: mean absolute error, mean squared error, root mean squared error and R-squared.
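The model-selection step described above, fitting several techniques to the same volatility inputs and keeping the one with the lowest prediction errors, can be sketched as follows. The data here are synthetic stand-ins for the JKII series, and the hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of selecting the best technique by lowest error.
# The volatility inputs and closing prices are synthetic, not market data.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
vol = rng.normal(size=(400, 3))                 # symmetric-volatility inputs
close = vol @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(vol, close, random_state=1)
models = {
    "adaboost": AdaBoostRegressor(random_state=1),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=1),
    "knn": KNeighborsRegressor(),
}
errors = {name: mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
best = min(errors, key=errors.get)   # technique with the lowest test MSE
print(best, errors)
```

In the study's terms, the technique emerging from this comparison with the lowest errors (AdaBoost in their experiments) is taken as the optimal predictor.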
Findings
The experimental results show that the optimal technique for predicting the daily precision improvement of JKII prices in Indonesia's Islamic stock market is AdaBoost, which generates the best predicting performance with the lowest prediction errors and provides the optimum knowledge from the big data of symmetric volatility in Indonesia's Islamic stock market. In addition, the random forest technique is also a robust technique for predicting the daily precision improvement of JKII prices, as it delivers values close to the optimal performance of the AdaBoost technique.
Practical implications
This research fills a gap in the literature, namely the absence of big data mining techniques in the prediction process of Islamic stock markets, by delivering new operational techniques for predicting daily stock precision improvement. It also helps investors manage optimal portfolios and decrease the risk of trading in global Islamic stock markets based on big data mining of symmetric volatility.
Originality/value
This research is a pioneer in using big data mining of symmetric volatility in the prediction of an Islamic stock market index.
Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel
Abstract
Purpose
This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.
Design/methodology/approach
A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
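The quantitative step above, testing the predictive difference of variables with logistic regression and random forest classifiers, can be sketched on invented data. The feature names, the synthetic inaccuracy rule and the model settings are all hypothetical, not the study's actual variables or transactions.

```python
# Illustrative sketch: two classifiers predicting whether a delivery
# schedule line will be inaccurate. Features and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600
X = np.column_stack([
    rng.integers(1, 30, n),      # hypothetical planning horizon (days)
    rng.integers(1, 50, n),      # hypothetical product complexity (variants)
    rng.integers(1, 200, n),     # hypothetical item order life cycle (weeks)
])
# synthetic rule: longer horizons and higher complexity raise inaccuracy odds
p = 1 / (1 + np.exp(-(0.1 * X[:, 0] + 0.05 * X[:, 1] - 3)))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=2)):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, round(acc, 3))
```

Comparing how each classifier's accuracy changes as variables are added or horizons vary is one way to test the predictive differences the study reports.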
Findings
The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.
Practical implications
The findings provide guidelines for exploring and finding patterns in specific variables to reduce material delivery schedule inaccuracies and serve as input into predictive forecasting models.
Originality/value
The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects, and conceptualizing features that cause and moderate inaccuracies in relation to the decoupling management and complexity theory literature.
Hossein Sohrabi and Esmatullah Noorzai
Abstract
Purpose
The present study aims to develop a risk-supported case-based reasoning (RS-CBR) approach for water-related projects by incorporating various uncertainties and risks in the revision step.
Design/methodology/approach
The cases were extracted by studying 68 water-related projects. This research employs earned value management (EVM) factors to consider time and cost features, economic, natural, technical and project risks to account for uncertainties, and supervised learning models to estimate cost overrun. Time-series algorithms were also used to predict construction cost indexes (CCI) and model improvements in future forecasts. Outliers were removed during pre-processing. Next, the data sets were split into training and testing sets, and the algorithms were implemented. The accuracy of the different models was measured with the mean absolute percentage error (MAPE) and the normalized root mean square error (NRMSE) criteria.
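The two accuracy criteria named above can be written out in a short sketch. Note that NRMSE has several common normalizations; the range of the observations is used here as one convention, since the paper's exact choice is not stated, and the cost figures are invented for illustration.

```python
# Minimal sketch of MAPE and NRMSE. NRMSE is normalized by the range of
# the observations (one common convention; an assumption here). The cost
# values are hypothetical, not the study's project data.
def mape(y_true, y_pred):
    # mean absolute percentage error, in percent
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def nrmse(y_true, y_pred):
    # root mean square error divided by the observed range
    rmse = (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5
    return rmse / (max(y_true) - min(y_true))

actual_cost = [100.0, 120.0, 150.0, 90.0]   # hypothetical project costs
predicted = [110.0, 115.0, 140.0, 95.0]     # hypothetical model estimates
print(mape(actual_cost, predicted), nrmse(actual_cost, predicted))
```

Lower values on both criteria indicate a more accurate cost-overrun model, which is how the ensemble and single algorithms in the study are ranked.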
Findings
The findings show an improvement in prediction accuracy when using data sets that consider uncertainties, and ensemble algorithms such as random forest and AdaBoost had higher accuracy. Also, among the single algorithms, the support vector regressor (SVR) with the sigmoid kernel outperformed the others.
Originality/value
This research is the first attempt to develop a case-based reasoning model based on various risks and uncertainties. The developed model shows good agreement with machine learning models in predicting cost overruns. The model has been implemented on the collected water-related projects, and the results have been reported.
Yu Wang, Daqing Zheng and Yulin Fang
Abstract
Purpose
The advancement of enterprise social networks (ESNs) facilitates information sharing but also presents the challenge of managing information boundaries. This study aims to explore the factors that influence the information-control behavior of ESN users when continuously sharing information.
Design/methodology/approach
This study specifies the information-control behaviors in the “wall posts” channel and applies communication privacy management (CPM) theory to analyze the effects of the individual-specific factor (disposition to value information), context-specific factors (work-relatedness and information richness) and risk-benefit ratio (public benefit and public risk). Data on actual information-control behaviors extracted from ESN logs are examined using multilevel mixed-effects logistic regression analysis.
Findings
The study's findings show the direct effects of the individual-specific factor, context-specific factors and risk-benefit ratio, highlighting interactions between the individual motivation factor and ESN context factors.
Originality/value
This study reshapes the relationship of CPM theory boundary rules in the ESN context, extending information-control research and providing insights into ESNs' information-control practices.