Search results
1 – 10 of 320
Xiaomei Liu, Bin Ma, Meina Gao and Lin Chen
Abstract
Purpose
A time-varying grey Fourier model (TVGFM(1,1,N)) is proposed for the simulation of variable amplitude seasonal fluctuation time series, because traditional grey models cannot capture the time-varying trend well.
Design/methodology/approach
The proposed model couples Fourier series and linear time-varying terms as the grey action, to describe the characteristics of variable amplitude and seasonality. The truncated Fourier order N is preselected from the alternative order set by Nyquist-Shannon sampling theorem and the principle of simplicity, then the optimal Fourier order is determined by hold-out method to improve the robustness of the proposed model. Initial value correction and the multiple transformation are also studied to improve the precision.
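The order-selection step described above can be illustrated with a simplified, non-grey sketch: fit a linear trend plus a truncated Fourier series to monthly data, cap the candidate orders at the Nyquist harmonic (period/2), and pick the order by hold-out MAPE. This only illustrates the selection logic under stated simplifications; the two-stage least-squares fit stands in for the grey differential equation, and all function names are hypothetical.

```python
import math

def fit_trend_fourier(y, N, period=12):
    """Two-stage least squares: OLS linear trend, then project the residual
    onto the first N Fourier harmonics. Assumes len(y) spans whole periods."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = (sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
         / sum((t - t_mean) ** 2 for t in range(n)))
    a = y_mean - b * t_mean
    resid = [v - (a + b * t) for t, v in enumerate(y)]
    coef = [(2 / n * sum(r * math.cos(2 * math.pi * k * t / period)
                         for t, r in enumerate(resid)),
             2 / n * sum(r * math.sin(2 * math.pi * k * t / period)
                         for t, r in enumerate(resid)))
            for k in range(1, N + 1)]

    def predict(t):
        s = a + b * t
        for k, (c, d) in enumerate(coef, start=1):
            s += (c * math.cos(2 * math.pi * k * t / period)
                  + d * math.sin(2 * math.pi * k * t / period))
        return s
    return predict

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 / len(actual) * sum(abs((a - p) / a)
                                   for a, p in zip(actual, predicted))

def select_order(y, period=12, holdout=12):
    """Choose the truncated Fourier order by hold-out MAPE; candidates are
    capped at the Nyquist harmonic, the highest resolvable frequency."""
    nyquist = period // 2
    train = y[:-holdout]
    return min(range(1, nyquist + 1),
               key=lambda N: mape(y[-holdout:],
                                  [fit_trend_fourier(train, N, period)(t)
                                   for t in range(len(train), len(y))]))
```

On a synthetic monthly series with one harmonic, the hold-out criterion settles on a low order; the actual TVGFM(1,1,N) estimates all parameters jointly inside the grey model rather than in two stages.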
Findings
The new model has a broader applicability range as a result of the new grey action, attaining higher fitting and forecasting accuracy. A numerical experiment on a generated monthly time series indicates that the proposed model can accurately fit a variable amplitude seasonal sequence, with a mean absolute percentage error (MAPE) of only 0.01%, and complex simulations based on the Monte-Carlo method confirm the validity of the proposed model. The results for monthly electricity consumption in China's primary industry demonstrate that the proposed model captures the time-varying trend and performs well, with MAPEF and MAPET both below 5%. Moreover, the proposed TVGFM(1,1,N) model is superior to the benchmark models: the grey polynomial model (GMP(1,1,N)), the grey Fourier model (GFM(1,1,N)), the seasonal grey model (SGM(1,1)), the seasonal autoregressive integrated moving average (SARIMA) model and support vector regression (SVR).
Originality/value
The parameter estimation and forecasting of the newly proposed TVGFM(1,1,N) are studied, and its good fitting and forecasting accuracy on variable amplitude seasonal fluctuation series is confirmed by numerical simulations and a case study.
Le Wang, Liping Zou and Ji Wu
Abstract
Purpose
This paper aims to use artificial neural network (ANN) methods to predict stock price crashes in the Chinese equity market.
Design/methodology/approach
Three ANN models are developed and compared with the logistic regression model.
Findings
Results show that the ANN approaches outperform the traditional logistic regression model, with ANNs having fewer hidden layers performing better than those with multiple hidden layers. Results from the ANN approach also reveal that foreign institutional ownership, financial leverage, weekly average return and market-to-book ratio are important variables when predicting stock price crashes, consistent with results from the traditional logistic model.
Originality/value
First, the ANN framework is used in this study to forecast stock price crashes and is compared to the traditional logistic model in China, the world's largest emerging market. Second, receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC) are used to evaluate the forecasting performance of the ANNs against the traditional approaches, in addition to some traditional performance evaluation methods.
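The AUC criterion can be computed directly from classifier scores via the Mann-Whitney identity: the area under the ROC curve equals the probability that a randomly chosen positive (crash) case is scored above a randomly chosen negative one. A minimal sketch; the pairwise loop is for clarity, not efficiency:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity;
    ties count as half a win. labels are 1 (crash) or 0 (no crash)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random scoring and 1.0 to a perfect ranking, which is why it complements threshold-dependent measures such as accuracy.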
Yuhong Wang and Qi Si
Abstract
Purpose
This study aims to predict China's carbon emission intensity and put forward a set of policy recommendations for further development of a low-carbon economy in China.
Design/methodology/approach
In this paper, the Interaction Effect Grey Power Model of N Variables (IEGPM(1,N)) is developed, and the Dragonfly algorithm (DA) is used to select the best power index for the model. Specific model construction methods and rigorous mathematical proofs are given. In order to verify the applicability and validity, this paper compares the model with the traditional grey model and simulates the carbon emission intensity of China from 2014 to 2021. In addition, the new model is used to predict the carbon emission intensity of China from 2022 to 2025, which can provide a reference for the 14th Five-Year Plan to develop a scientific emission reduction path.
Findings
The results show that if the Chinese government does not take effective policy measures in the future, carbon emission intensity will not achieve the set goals. The IEGPM(1,N) model also provides reliable results and works well in simulation and prediction.
Originality/value
The paper considers the nonlinear and interactive effect of input variables in the system's behavior and proposes an improved grey multivariable model, which fills the gap in previous studies.
Flavian Emmanuel Sapnken, Mohammed Hamaidi, Mohammad M. Hamed, Abdelhamid Issa Hassane and Jean Gaston Tamba
Abstract
Purpose
For some years now, Cameroon has seen a significant increase in its electricity demand, and this need is bound to grow within the next few years owing to current economic growth and the ambitious projects underway. Therefore, one of the state's priorities is the mastery of electricity demand, which requires reliable forecasting tools. This study proposes a novel version of the discrete grey multivariate convolution model (ODGMC(1,N)).
Design/methodology/approach
Specifically, a linear corrective term is added to its structure, parameterisation is done in a way that is consistent with the modelling procedure, and the cumulative forecasting function of ODGMC(1,N) is obtained through an iterative technique.
Findings
Results show that ODGMC(1,N) is more stable and can extract the relationships between the system's input variables. To demonstrate and validate the superiority of ODGMC(1,N), a practical example drawn from the projection of electricity demand in Cameroon up to 2030 is used. The findings reveal that the proposed model has higher prediction precision, with a mean absolute percentage error of 1.74% and a root mean square error of 132.16.
Originality/value
These interesting results are due to (1) the stability of ODGMC(1,N), resulting from good agreement between parameter estimation and implementation, (2) the addition of a term that accounts for the linear impact of time t on the model's performance and (3) the removal of irrelevant information from the input data by wavelet transform filtration. Thus, the suggested ODGMC is a robust predictive and monitoring tool for tracking the evolution of electricity needs.
Dongdong Song, Wenxiang Qin, Qian Zhou, Dong Xu and Bo Zhang
Abstract
Purpose
The anticorrosion coatings used in marine and atmospheric environments are subjected to many environmental factors, and their aging failure has long puzzled researchers. The purpose of this study is to find the correlation between the initial aging of epoxy coatings and typical marine atmospheric environmental factors.
Design/methodology/approach
The epoxy coatings were subjected to a one-year exposure in three typical marine atmospheres. Meanwhile, principal component analysis, linear regression and Spearman and gray correlation analysis were applied to quantify the environmental characteristics and establish correlations with the coating aging.
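The gray correlation figures this kind of analysis produces are typically Deng's grey relational grades: after range-normalising each sequence, the grade averages a pointwise coefficient built from the absolute deviations between the reference (aging indicator) and factor sequences. A sketch assuming non-constant sequences, with the customary resolution coefficient rho = 0.5:

```python
def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference sequence (e.g. a
    coating-aging indicator) and an environmental-factor sequence; 1.0
    means identical shape after range normalisation."""
    def norm(seq):  # scale to [0, 1]; assumes the sequence is not constant
        lo, hi = min(seq), max(seq)
        return [(v - lo) / (hi - lo) for v in seq]
    deltas = [abs(a - b) for a, b in zip(norm(reference), norm(comparison))]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:  # identical normalised sequences
        return 1.0
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Ranking the grades of each factor sequence against the aging indicator is what yields statements like "solar irradiation has the highest correlation with coating aging."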
Findings
The results indicate that the coating undergoes macroscopic fading and chalking upon exposure to the marine atmosphere, while microscopic examination reveals holes, cracks and partial peeling. The adhesion performance and electrochemical properties of the coating deteriorated with prolonged exposure; coating aging mainly occurs through the generation of O-H bonds and the breakage of molecular chains such as C-N and C-O-C. The coating aged most deeply after exposure at Xisha, followed by Zhoushan and finally Qingdao. Environmental factors affect the photooxidative aging and hydrolytic degradation processes of coatings and thus coating aging. To further demonstrate the correlation between environmental factors and coating aging, principal component analysis was used, and a correlation model between environmental factors and coating aging was obtained: the rate of coating adhesion loss (E) relates to the comprehensive evaluation parameter of environmental factors (Z) as E = 0.142 + 0.028Z. Meanwhile, Spearman correlation analysis and the gray correlation method were used to investigate the impact of each environmental factor on coating aging. Solar irradiation, relative humidity and wetting time have the highest correlations with coating aging (all above 0.8) and the greatest influence; wind speed and temperature have the smallest correlations (about 0.6) and the least influence.
Originality/value
This paper establishes a correlation between typical marine environmental factors and coating aging performance, which is crucial for predicting the service life of other coatings in diverse environments.
Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest
Abstract
Purpose
This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of process variables influencing product quality, with emphasis on the use of supervised and unsupervised ML techniques in the “Analyze” stage of Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle.
Design/methodology/approach
The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.
Findings
This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.
Originality/value
This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
B.V. Binoy, M.A. Naseer and P.P. Anil Kumar
Abstract
Purpose
Land value varies at a micro level depending on the location’s economic, geographical and political determinants. The purpose of this study is to present a comprehensive assessment of the determinants affecting land value in the Indian city of Thiruvananthapuram in the state of Kerala.
Design/methodology/approach
The global influence of the 20 identified explanatory variables on land value is measured using the traditional hedonic price modeling approach. Localized spatial variations of the influencing parameters are examined using a non-parametric regression method, geographically weighted regression. The study used advertised land prices collected from web sources and screened through field surveys.
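In a hedonic price model, (log) land value is regressed on location attributes, so each coefficient reads as the attribute's approximate proportional effect on value. A single-attribute, closed-form sketch; the distance-to-CBD variable and the 20% decay rate are hypothetical, and the study's 20-variable global model would use multivariate OLS, with GWR then letting the slopes vary over space:

```python
import math

def hedonic_fit(values, attribute):
    """Closed-form OLS of log(value) on one attribute. In the log-linear
    form, the slope approximates the proportional change in land value
    per unit change in the attribute."""
    y = [math.log(v) for v in values]
    n = len(y)
    xm, ym = sum(attribute) / n, sum(y) / n
    slope = (sum((x - xm) * (v - ym) for x, v in zip(attribute, y))
             / sum((x - xm) ** 2 for x in attribute))
    return ym - slope * xm, slope  # (intercept, slope)

# hypothetical example: value decays about 20% per km from the CBD
dist_km = [0, 1, 2, 3, 4, 5]
prices = [math.exp(10 - 0.2 * d) for d in dist_km]
intercept, slope = hedonic_fit(prices, dist_km)
```

The log transform is the standard hedonic choice because land prices are strongly right-skewed and attribute effects are better read as percentages than absolute amounts.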
Findings
Global regression results indicate that access to transportation facilities, commercial establishments, crime sources, wetland classification and disaster history has the strongest influence on land value in the study area. Local regression results demonstrate that the factors influencing land value are not stationary in the study area. Most variables have a different influence in Kazhakootam and the residential areas than in the central business district region.
Originality/value
This study confirms findings from previous studies and provides additional evidence of the spatial dynamics of land value creation. It should be noted that the advanced modeling approaches used in this research have not received much attention in Indian property valuation studies. The outcomes of this study have important implications for property value fixation in urban Kerala. The regional variation of land value within an urban agglomeration shows the need for a localized method of land value calculation.
Muralidhar Vaman Kamath, Shrilaxmi Prashanth, Mithesh Kumar and Adithya Tantri
Abstract
Purpose
The compressive strength of concrete depends on many interdependent parameters; its exact prediction is not that simple because of complex processes involved in strength development. This study aims to predict the compressive strength of normal concrete and high-performance concrete using four datasets.
Design/methodology/approach
In this paper, five established individual machine learning (ML) regression models are compared: Decision Tree Regression, Random Forest Regression, Lasso Regression, Ridge Regression and Multiple Linear Regression. Four datasets were studied: two from previous research and two obtained from laboratory testing.
Findings
Five statistical indicators, namely the coefficient of determination (R2), mean absolute error, root mean squared error, Nash-Sutcliffe efficiency and mean absolute percentage error, were used to compare the performance of the models. The models are further compared with previous studies using these indicators. Lastly, to understand the effect of each predictor variable, sensitivity and parametric analyses were carried out.
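The five indicators can be computed directly from observed and predicted strengths. One caveat worth noting: when R2 is taken in its 1 - SSE/SST form it coincides numerically with the Nash-Sutcliffe efficiency; some studies instead report the squared Pearson correlation as R2. A minimal sketch:

```python
import math

def regression_metrics(obs, pred):
    """Five common regression indicators; obs values must be non-zero
    for MAPE to be defined."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return {
        "R2": 1 - sse / sst,   # coefficient of determination (1 - SSE/SST form)
        "MAE": sum(abs(o - p) for o, p in zip(obs, pred)) / n,
        "RMSE": math.sqrt(sse / n),
        "NSE": 1 - sse / sst,  # Nash-Sutcliffe efficiency
        "MAPE": 100 / n * sum(abs((o - p) / o) for o, p in zip(obs, pred)),
    }
```

Reporting several indicators together guards against a model that scores well on one criterion (say, low RMSE) while being biased on another (high MAPE for small strengths).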
Originality/value
The findings of this paper will allow readers to understand the factors involved in identifying the machine learning models and concrete datasets. In so doing, we hope that this research advances the toolset needed to predict compressive strength.
Jan Svanberg, Tohid Ardeshiri, Isak Samsten, Peter Öhman, Presha E. Neidermeyer, Tarek Rana, Frank Maisano and Mats Danielson
Abstract
Purpose
The purpose of this study is to develop a method to assess social performance. Traditionally, environment, social and governance (ESG) rating providers use subjectively weighted arithmetic averages to combine a set of social performance (SP) indicators into one single rating. To overcome the subjectivity of this weighting, this study investigates the preconditions for a new methodology for rating the SP component of ESG by applying machine learning (ML) and artificial intelligence (AI) anchored to social controversies.
Design/methodology/approach
This study proposes the use of a data-driven rating methodology that derives the relative importance of SP features from their contribution to the prediction of social controversies. The authors use the proposed methodology to solve the weighting problem with overall ESG ratings and further investigate whether prediction is possible.
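The weighting idea can be sketched in two steps: normalise the controversy-prediction model's feature importances into indicator weights, then rate a firm as the weighted average of its SP indicator scores. The indicator names and importance values below are hypothetical, and the sketch says nothing about how the underlying controversy classifier is trained:

```python
def importance_weights(importances):
    """Normalise feature importances (here assumed to come from a trained
    controversy classifier) so the SP-indicator weights sum to one."""
    total = sum(importances.values())
    return {name: v / total for name, v in importances.items()}

def social_rating(scores, weights):
    """Data-driven SP rating: importance-weighted average of indicator
    scores, replacing the subjectively weighted arithmetic average."""
    return sum(weights[name] * s for name, s in scores.items())

# hypothetical importances learned from controversy prediction
weights = importance_weights({"labor": 2.0, "community": 1.0, "product": 1.0})
rating = social_rating({"labor": 80.0, "community": 60.0, "product": 40.0},
                       weights)
```

The design choice is that an indicator earns weight only to the extent that it helps predict real controversies, which is exactly the validity criterion the authors test.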
Findings
The authors find that ML models are able to predict controversies with high predictive performance and validity. The findings indicate that the weighting problem with the ESG ratings can be addressed with a data-driven approach. The decisive prerequisite, however, for the proposed rating methodology is that social controversies are predicted by a broad set of SP indicators. The results also suggest that predictively valid ratings can be developed with this ML-based AI method.
Practical implications
This study offers practical solutions to ESG rating problems that have implications for investors, ESG raters and socially responsible investments.
Social implications
The proposed ML-based AI method can help to achieve better ESG ratings, which will in turn help to improve SP, which has implications for organizations and societies through sustainable development.
Originality/value
To the best of the authors’ knowledge, this research is one of the first studies that offers a unique method to address the ESG rating problem and improve sustainability by focusing on SP indicators.
Shrutika Sharma, Vishal Gupta, Deepa Mudgal and Vishal Srivastava
Abstract
Purpose
Three-dimensional (3D) printing is highly dependent on printing process parameters for achieving high mechanical strength. It is a time-consuming and expensive operation to experiment with different printing settings. The current study aims to propose a regression-based machine learning model to predict the mechanical behavior of ulna bone plates.
Design/methodology/approach
The bone plates were fabricated using the fused deposition modeling (FDM) technique, with printing attributes being varied. Machine learning models, namely linear regression, AdaBoost regression, gradient boosting regression (GBR), random forest, decision tree and k-nearest neighbors regression, were trained to predict tensile strength and flexural strength. Model performance was assessed using root mean square error (RMSE), coefficient of determination (R2) and mean absolute error (MAE).
Findings
Traditional experimentation with various settings is both time-consuming and expensive, emphasizing the need for alternative approaches. Among the models tested, the GBR model demonstrated the best performance in predicting both tensile and flexural strength, achieving the lowest RMSE, highest R2 and lowest MAE: 1.4778 ± 0.4336 MPa, 0.9213 ± 0.0589 and 1.2555 ± 0.3799 MPa, respectively, for tensile strength, and 3.0337 ± 0.3725 MPa, 0.9269 ± 0.0293 and 2.3815 ± 0.2915 MPa, respectively, for flexural strength. These findings open up opportunities for doctors and surgeons to use GBR as a reliable tool for fabricating patient-specific bone plates without the need for extensive trial experiments.
Research limitations/implications
The current study is limited to the usage of a few models. Other machine learning-based models can be used for prediction-based study.
Originality/value
This study uses machine learning to predict the mechanical properties of FDM-based distal ulna bone plate, replacing traditional design of experiments methods with machine learning to streamline the production of orthopedic implants. It helps medical professionals, such as physicians and surgeons, make informed decisions when fabricating customized bone plates for their patients while reducing the need for time-consuming experimentation, thereby addressing a common limitation of 3D printing medical implants.