Joseph F. Hair, Pratyush N. Sharma, Marko Sarstedt, Christian M. Ringle and Benjamin D. Liengaard
Abstract
Purpose
The purpose of this paper is to assess the appropriateness of equal weights estimation (sumscores) and the application of the composite equivalence index (CEI) vis-à-vis differentiated indicator weights produced by partial least squares structural equation modeling (PLS-SEM).
Design/methodology/approach
The authors rely on prior literature as well as empirical illustrations and a simulation study to assess the efficacy of equal weights estimation and the CEI.
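The contrast between the two weighting schemes can be illustrated with a small simulation (entirely hypothetical data, with an ordinary-least-squares composite standing in for the PLS-SEM weighting algorithm): when indicators load unevenly on the underlying construct, a composite with estimated weights predicts the outcome at least as well in-sample as the equal-weights sumscore.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)                    # the construct being measured
loadings = np.array([0.9, 0.6, 0.3])           # deliberately unequal loadings
X = latent[:, None] * loadings + rng.normal(scale=0.5, size=(n, 3))
y = 0.8 * latent + rng.normal(scale=0.5, size=n)

sumscore = X.mean(axis=1)                      # equal weights (sumscores)
A = np.column_stack([X, np.ones(n)])           # indicators plus intercept
w, *_ = np.linalg.lstsq(A, y, rcond=None)      # differentiated weights (OLS)
weighted = A @ w

r_equal = np.corrcoef(sumscore, y)[0, 1]
r_diff = np.corrcoef(weighted, y)[0, 1]
```

Because the fitted values of a least-squares regression maximize in-sample correlation with the outcome, `r_diff` can never fall below `r_equal` here; the paper's point concerns the analogous gap out-of-sample.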
Findings
The results show that the CEI lacks discriminatory power and that its use can lead to major differences in structural model estimates, conceal measurement model issues and almost always produce inferior out-of-sample predictive accuracy compared with the differentiated weights generated by PLS-SEM.
Research limitations/implications
In light of its manifold conceptual and empirical limitations, the authors advise against the use of the CEI. Its adoption and the routine use of equal weights estimation could adversely affect the validity of measurement and structural model results and understate structural model predictive accuracy. Although this study shows that the CEI is an unsuitable metric to decide between equal weights and differentiated weights, it does not propose another means for such a comparison.
Practical implications
The results suggest that researchers and practitioners should prefer differentiated indicator weights such as those produced by PLS-SEM over equal weights.
Originality/value
To the best of the authors’ knowledge, this study is the first to provide a comprehensive assessment of the CEI’s usefulness. The results provide guidance for researchers considering using equal indicator weights instead of PLS-SEM-based weighted indicators.
Boyi Li, Miao Tian, Xiaohan Liu, Jun Li, Yun Su and Jiaming Ni
Abstract
Purpose
The purpose of this study is to predict the thermal protective performance (TPP) of flame-retardant fabric more economically using machine learning and analyze the factors affecting the TPP using model visualization.
Design/methodology/approach
A total of 13 machine learning models were trained by collecting 414 datasets of typical flame-retardant fabric from current literature. The optimal performance model was used for feature importance ranking and correlation variable analysis through model visualization.
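The feature-ranking step the authors describe can be sketched with permutation importance on a linear surrogate model. The fabric features, their effect sizes and the surrogate itself are hypothetical stand-ins for the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# hypothetical fabric features: weight, thickness, air-gap width, humidity
X = rng.uniform(size=(n, 4))
y = (3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * X[:, 3]
     + rng.normal(scale=0.1, size=n))

# fit a linear surrogate and record its baseline R^2
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
base = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# permutation importance: R^2 drop when each feature is shuffled
importance = []
for j in range(4):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    pp = np.column_stack([Xp, np.ones(n)]) @ coef
    r2 = 1 - np.sum((y - pp) ** 2) / np.sum((y - y.mean()) ** 2)
    importance.append(base - r2)

ranking = np.argsort(importance)[::-1]   # most important feature first
```

With these made-up coefficients the "fabric weight" feature dominates the ranking, mirroring the kind of conclusion the paper draws from model visualization.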
Findings
Five models with better performance were screened, all of which showed R2 greater than 0.96 and root mean squared error less than 3.0. Heat map results revealed that the TPP of fabrics differed significantly under different types of thermal exposure. The effect of fabric weight was more apparent in the flame or low thermal radiation environment. The increase in fabric weight, fabric thickness, air gap width and relative humidity of the air gap improved the TPP of the fabric.
Practical implications
The findings suggest that the visual analysis of machine learning models offers an intuitive view of the trend and range of second-degree burn time under the influence of multiple variables. The established models can be used to predict the TPP of fabrics, providing a reference for researchers carrying out related research.
Originality/value
The findings of this study contribute directional insights for optimizing the structure of thermal protective clothing, and introduce innovative perspectives and methodologies for advancing heat transfer modeling in thermal protective clothing.
Jianping Zhang, Leilei Wang and Guodong Wang
Abstract
Purpose
With the rapid advancement of the automotive industry, the friction coefficient (FC), wear rate (WR) and weight loss (WL) have emerged as crucial parameters for measuring the performance of automotive braking systems. The FC, WR and WL of friction material are therefore predicted and analyzed in this work, with the aim of achieving accurate prediction of friction material properties.
Design/methodology/approach
A genetic algorithm support vector machine (GA-SVM) model is obtained by applying a GA to optimize the SVM, thereby establishing a prediction model for friction material properties and enabling their predictive and comparative analysis. The process parameters are analyzed using response surface methodology (RSM) and GA-RSM to determine the settings that yield optimal friction performance.
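The GA half of GA-SVM can be sketched as a selection/crossover/mutation loop. For brevity, the SVM cross-validation score is replaced here by a hypothetical smooth fitness surface over log10(C) and log10(gamma), so the sketch shows only the search loop, not actual SVM training:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical fitness standing in for SVM cross-validation accuracy,
# peaked at C = 1.0, gamma = 0.1 (i.e. at (0, -1) in log10 space)
def fitness(c, g):
    return np.exp(-(np.log10(c) ** 2 + (np.log10(g) + 1) ** 2))

pop = rng.uniform(-3, 3, size=(30, 2))           # individuals: (log10 C, log10 gamma)
for _ in range(60):
    scores = np.array([fitness(10 ** c, 10 ** g) for c, g in pop])
    parents = pop[np.argsort(scores)[-10:]]      # elitist selection: keep top 10
    # crossover: each child takes each gene from a randomly chosen parent
    children = parents[rng.integers(0, 10, size=(20, 2)), [0, 1]]
    children += rng.normal(scale=0.1, size=children.shape)   # mutation
    pop = np.vstack([parents, children])

scores = np.array([fitness(10 ** c, 10 ** g) for c, g in pop])
best = pop[np.argmax(scores)]
best_c, best_gamma = 10 ** best[0], 10 ** best[1]
```

Keeping the top parents in the population (elitism) guarantees the best fitness never degrades across generations; in the real GA-SVM, each fitness evaluation is a full SVM training-and-validation run.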
Findings
The results indicate that the GA-SVM prediction model has the smallest error for FC, WR and WL, demonstrating excellent prediction accuracy. The values predicted by response surface analysis are close to those of the GA-SVM model, providing further evidence of the validity and rationality of the established prediction model.
Originality/value
The relevant results can serve as a valuable theoretical foundation for the preparation of friction material in engineering practice.
Shaghayegh Abolmakarem, Farshid Abdi, Kaveh Khalili-Damghani and Hosein Didehkhani
Abstract
Purpose
This paper aims to propose an improved version of portfolio optimization model through the prediction of the future behavior of stock returns using a combined wavelet-based long short-term memory (LSTM).
Design/methodology/approach
First, data are gathered and divided into two parts, namely, “past data” and “real data.” In the second stage, the wavelet transform is used to decompose the stock closing price time series into a set of coefficients. The derived coefficients are fed to the LSTM model to predict the stock closing price time series, creating the “future data.” In the third stage, the mean-variance portfolio optimization problem (MVPOP) is run iteratively using the “past,” “future” and “real” data sets. The epsilon-constraint method is adapted to generate the Pareto front for all three runs of the MVPOP.
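The decomposition step can be sketched with a one-level Haar wavelet transform, a simple stand-in for whichever wavelet basis the authors used (the LSTM stage is omitted). Haar splits the series into approximation and detail coefficients and reconstructs it exactly:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar decomposition into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # smoothed trend
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # local fluctuations
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfect reconstruction of the original series."""
    x = np.empty(approx.size * 2)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

# hypothetical closing-price series (random walk around 100)
prices = np.cumsum(np.random.default_rng(3).normal(size=256)) + 100
a, d = haar_dwt(prices)
recon = haar_idwt(a, d)
```

In the paper's pipeline, the coefficient bands (rather than the raw prices) are what the LSTM consumes, which tends to make the trend component easier to learn.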
Findings
The real daily stock closing price time series of six stocks from the FTSE 100 between January 1, 2000, and December 30, 2020, is used to check the applicability and efficacy of the proposed approach. Comparison of the “future,” “past” and “real” Pareto fronts showed that the “future” Pareto front is closer to the “real” Pareto front, demonstrating the efficacy and applicability of the proposed approach.
Originality/value
Most of the classic Markowitz-based portfolio optimization models used past information to estimate the associated parameters of the stocks. This study revealed that the prediction of the future behavior of stock returns using a combined wavelet-based LSTM improved the performance of the portfolio.
Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt
Abstract
The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first type is a panel data model where composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second type is when the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
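For the benchmark normal / half-normal model, the JLMS predictor has a closed form; a minimal stdlib implementation (using the standard normal pdf and cdf) is:

```python
import math

def jlms_predictor(eps, sigma_u, sigma_v):
    """E[u | eps] for the normal / half-normal stochastic frontier model,
    where eps = v - u is the composed error (Jondrow et al., 1982)."""
    s2 = sigma_u ** 2 + sigma_v ** 2
    mu_star = -eps * sigma_u ** 2 / s2
    sigma_star = sigma_u * sigma_v / math.sqrt(s2)
    z = mu_star / sigma_star
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return mu_star + sigma_star * pdf / cdf
```

A large negative residual (output far below the frontier) yields a larger predicted inefficiency; the chapter's point is that conditioning on more than the single contemporaneous residual can sharpen this prediction.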
Gerry Yemen and Manel Baucells
Abstract
The case revolves around the Powerball lottery and the rule changes implemented in 2015, which, among other things, changed the chances of winning the jackpot from 1 in 175 million to 1 in 292 million. What is the impact of such rules on lottery revenues? The expected value rule is unable to explain why people play in the first place and fails to give the appropriate weight to the factors that explain the attractiveness of a lottery. This case is ideal to introduce the notion of decision weights as put forward by Kahneman and Tversky's prospect theory. By calculating decision weights, we obtain a reasonable prediction for the willingness to pay for the lottery as a function of different jackpot amounts. Using past data, we can correlate lottery revenues with predicted willingness to pay for a ticket. Quantitatively inclined audiences can then develop a simulation model of how likely it is that the jackpot grows, which, coupled with the prediction of revenues as a function of the jackpot, would give the evolution of the revenues under the new rule. The accompanying spreadsheet provides data for students to work out various scenarios to narrow objectives and maximize revenue from Powerball tickets.
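The overweighting of tiny probabilities that drives the case can be computed directly with the Tversky-Kahneman probability weighting function (gamma = 0.61 is their published estimate; the jackpot figure below is hypothetical):

```python
# Tversky-Kahneman probability weighting function
def w(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

p_old = 1 / 175_000_000          # pre-2015 jackpot odds
p_new = 1 / 292_000_000          # post-2015 jackpot odds
jackpot = 300_000_000            # hypothetical jackpot, in dollars

ev = p_new * jackpot             # expected-value contribution: about a dollar
dw = w(p_new) * jackpot          # decision-weighted value: orders of magnitude larger
```

The decision weight of a one-in-292-million chance is vastly larger than the raw probability, which is why willingness to pay can exceed the ticket's expected value, and why lowering the odds (to grow bigger jackpots) can raise rather than depress revenues.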
Muralidhar Vaman Kamath, Shrilaxmi Prashanth, Mithesh Kumar and Adithya Tantri
Abstract
Purpose
The compressive strength of concrete depends on many interdependent parameters; its exact prediction is not that simple because of complex processes involved in strength development. This study aims to predict the compressive strength of normal concrete and high-performance concrete using four datasets.
Design/methodology/approach
In this paper, five established individual machine learning (ML) regression models are compared: Decision Tree Regression, Random Forest Regression, Lasso Regression, Ridge Regression and Multiple Linear Regression. Four datasets were studied with these five models: two come from previous research and two were generated in the authors' laboratory.
Findings
Five statistical indicators, namely the coefficient of determination (R2), mean absolute error, root mean squared error, Nash–Sutcliffe efficiency and mean absolute percentage error, were used to compare the performance of the models. The models were further compared with previous studies using these indicators. Lastly, sensitivity and parametric analyses were carried out to understand the effect of each predictor variable on model performance.
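The five indicators are standard and straightforward to compute; a minimal sketch follows (note that for a single observed series the NSE and R2 formulas coincide; the strength values are purely illustrative):

```python
import numpy as np

def metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1 - ss_res / ss_tot,                    # coefficient of determination
        "MAE": np.mean(np.abs(err)),                  # mean absolute error
        "RMSE": np.sqrt(np.mean(err ** 2)),           # root mean squared error
        "NSE": 1 - ss_res / ss_tot,                   # Nash-Sutcliffe efficiency
        "MAPE": np.mean(np.abs(err / y_true)) * 100,  # mean absolute percentage error
    }

# toy compressive-strength values (MPa), true vs predicted
m = metrics([30, 35, 40], [32, 34, 41])
```

Reporting several of these together guards against each one's blind spot: MAPE is scale-free but unstable near zero, RMSE penalizes large errors, and R2/NSE measure improvement over the mean predictor.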
Originality/value
The findings of this paper will allow readers to understand the factors involved in identifying the machine learning models and concrete datasets. In so doing, we hope that this research advances the toolset needed to predict compressive strength.
Flavian Emmanuel Sapnken, Mohammed Hamaidi, Mohammad M. Hamed, Abdelhamid Issa Hassane and Jean Gaston Tamba
Abstract
Purpose
For some years now, Cameroon has seen a significant increase in its electricity demand, and this need is bound to grow within the next few years owing to current economic growth and the ambitious projects underway. Mastering electricity demand is therefore one of the state's priorities, and reliable forecasting tools are needed to achieve it. This study proposes a novel version of the discrete grey multivariate convolution model (ODGMC(1,N)).
Design/methodology/approach
Specifically, a linear corrective term is added to its structure, parameterisation is done in a way that is consistent with the modelling procedure, and the cumulative forecasting function of ODGMC(1,N) is obtained through an iterative technique.
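The flavor of grey forecasting can be conveyed with the univariate GM(1,1) ancestor of the authors' multivariate model (the demand figures below are hypothetical; the actual ODGMC(1,N) adds input variables, a convolution term and the linear corrective term described above):

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Fit a GM(1,1) grey model and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development coeff., grey input
    k = np.arange(1, x0.size + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # accumulated forecast
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return x0_hat[-horizon:]

# hypothetical annual electricity demand (TWh), roughly exponential growth
demand = [5.2, 5.6, 6.1, 6.7, 7.3, 8.0]
forecast = gm11_forecast(demand)
```

The accumulation step smooths short, noisy series into a near-exponential trend that a first-order grey differential equation can fit, which is why grey models are popular for energy demand with few historical observations.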
Findings
Results show that ODGMC(1,N) is more stable and can extract the relationships between the system's input variables. To demonstrate and validate the superiority of ODGMC(1,N), a practical example drawn from the projection of electricity demand in Cameroon up to 2030 is used. The findings reveal that the proposed model has higher prediction precision, with a mean absolute percentage error of 1.74% and a root mean square error of 132.16.
Originality/value
These interesting results are due to (1) the stability of ODGMC(1,N) resulting from a good adequacy between parameters estimation and their implementation, (2) the addition of a term that takes into account the linear impact of time t on the model's performance and (3) the removal of irrelevant information from input data by wavelet transform filtration. Thus, the suggested ODGMC is a robust predictive and monitoring tool for tracking the evolution of electricity needs.
Obrain Tinashe Murire, Liezel Cilliers and Willie Chinyamurindi
Abstract
Purpose
This study examined the influence of social media use on graduateness and the employability of exit students in South Africa.
Design/methodology/approach
The study used quantitative and descriptive research designs to test the proposed hypotheses. An online survey was used to collect the data, yielding a sample of 411 respondents, and structural equation modelling (SEM) was used to assess the model fit.
Findings
The study found that the direct effect of social media use on graduateness skills is significant, as is the direct effect of graduateness skills on perceived employability. The results also support the mediating role of graduateness skills in the relationship between social media use and perceived employability.
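The mediation structure being tested can be sketched with a product-of-coefficients calculation on simulated data (the path coefficients below are hypothetical; the actual analysis fitted a full SEM to the survey responses):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 411                                   # matches the reported sample size
social_media = rng.normal(size=n)
# hypothetical paths: a = 0.5 (use -> graduateness),
# b = 0.6 (graduateness -> employability), c' = 0.1 (direct effect)
graduateness = 0.5 * social_media + rng.normal(scale=0.8, size=n)
employability = (0.6 * graduateness + 0.1 * social_media
                 + rng.normal(scale=0.8, size=n))

# a-path: simple regression slope
a = np.cov(social_media, graduateness)[0, 1] / np.var(social_media, ddof=1)
# b-path and direct effect: regress employability on both predictors
X = np.column_stack([graduateness, social_media, np.ones(n)])
b, c_prime, _ = np.linalg.lstsq(X, employability, rcond=None)[0]
indirect = a * b                          # mediated effect; population value 0.5 * 0.6 = 0.30
```

The indirect effect a*b is what a mediation test evaluates: if it is significantly nonzero while the direct path c' shrinks, the mediator (here, graduateness skills) carries the relationship.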
Research limitations/implications
The study provides empirical evidence for the proposed model and points to the potential role of social media in addressing issues related to graduateness and the employability of exit students.
Practical implications
In addressing the challenge of unemployment, the use of social media can potentially aid in matters of skills acquisition.
Originality/value
The results demonstrate how technology, through the use of social media, can potentially enhance graduateness and employability skills.
Wenzhen Yang, Shuo Shan, Mengting Jin, Yu Liu, Yang Zhang and Dongya Li
Abstract
Purpose
This paper aims to realize an in-situ quality inspection system rapidly for new injection molding (IM) tasks via transfer learning (TL) approach and automation technology.
Design/methodology/approach
The proposed in-situ quality inspection system consists of an injection machine, USB camera, programmable logic controller and personal computer, interconnected via OPC or USB communication interfaces. This configuration enables seamless automation of the IM process, real-time quality inspection and automated decision-making. In addition, a MobileNet-based deep learning (DL) model is proposed for quality inspection of injection parts, fine-tuned using the TL approach.
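The transfer-learning idea (freeze a pretrained backbone, fine-tune only a small task-specific head) can be sketched without any deep learning framework by using a fixed random projection as a crude stand-in for the MobileNet feature extractor; the data, backbone and linear-probe head below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# frozen "pretrained backbone": a fixed random projection + ReLU
# (its weights are never updated, mimicking a frozen MobileNet)
W_frozen = rng.normal(size=(64, 16))
def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)

# tiny task-specific dataset: 50 samples per class, as in the paper's setup
X_ok = rng.normal(loc=-0.5, size=(50, 64))     # "good part" feature vectors
X_ng = rng.normal(loc=+0.5, size=(50, 64))     # "defective part" feature vectors
X = np.vstack([X_ok, X_ng])
y = np.array([0] * 50 + [1] * 50)

# fine-tune only the classification head: a least-squares linear probe
F = np.column_stack([backbone(X), np.ones(100)])
w, *_ = np.linalg.lstsq(F, y * 2 - 1, rcond=None)
pred = (F @ w > 0).astype(int)
accuracy = (pred == y).mean()
```

Only the 17 head parameters are estimated, which is why, as the Findings note, a TL model can reach high accuracy from merely 50 images per category: the frozen backbone supplies features learned on much larger datasets.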
Findings
Using the TL approach, the MobileNet-based DL model demonstrates exceptional performance, achieving a validation accuracy of 99.1% using merely 50 images per category. Its detection speed and accuracy surpass those of DenseNet121-based, VGG16-based, ResNet50-based and Xception-based convolutional neural networks. Further evaluation on a random data set of 120 images, assessed through the confusion matrix, shows an accuracy of 96.67%.
Originality/value
The proposed MobileNet-based DL model achieves higher accuracy with less resource consumption using the TL approach. Integrated with automation technologies, it forms an in-situ quality inspection system for injection parts that improves cost-efficiency by facilitating the acquisition and labeling of task-specific images and enabling automatic defect detection and decision-making online, which holds significant value for the IM industry and its pursuit of enhanced quality inspection.