Search results
1 – 10 of 19
Muralidhar Vaman Kamath, Shrilaxmi Prashanth, Mithesh Kumar and Adithya Tantri
Abstract
Purpose
The compressive strength of concrete depends on many interdependent parameters, and its exact prediction is difficult because of the complex processes involved in strength development. This study aims to predict the compressive strength of normal concrete and high-performance concrete using four datasets.
Design/methodology/approach
In this paper, five established individual Machine Learning (ML) regression models are compared: Decision Tree Regression, Random Forest Regression, Lasso Regression, Ridge Regression and Multiple Linear Regression. Four datasets were studied with these five models: two taken from previous research and two obtained from the authors' laboratory.
Findings
Five statistical indicators, namely the coefficient of determination (R2), mean absolute error, root mean squared error, Nash–Sutcliffe efficiency and mean absolute percentage error, were used to compare the performance of the models. The models were also compared with those of previous studies using the same indicators. Lastly, sensitivity and parametric analyses were carried out to understand the effect of each predictor variable on performance.
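As a sketch of how these five indicators compare measured and predicted strengths, the helper below computes all of them with NumPy. The toy strength values are illustrative stand-ins, not the study's data; note that, computed this way against the observed mean, the Nash–Sutcliffe efficiency reduces to the same expression as R2.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """The five indicators used to compare the regression models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    sse = np.sum(resid ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2":   1.0 - sse / sst,                       # coefficient of determination
        "MAE":  np.mean(np.abs(resid)),                # mean absolute error
        "RMSE": np.sqrt(np.mean(resid ** 2)),          # root mean squared error
        "NSE":  1.0 - sse / sst,                       # Nash–Sutcliffe efficiency
        "MAPE": np.mean(np.abs(resid / y_true)) * 100, # mean absolute percentage error
    }

# Toy example: measured vs predicted compressive strength in MPa
measured  = [32.5, 41.0, 27.8, 55.2, 48.9]
predicted = [30.9, 43.1, 26.5, 53.8, 50.2]
print(evaluate(measured, predicted))
```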
Originality/value
The findings of this paper will allow readers to understand the factors involved in identifying the machine learning models and concrete datasets. In so doing, we hope that this research advances the toolset needed to predict compressive strength.
Gang Li, Qiqi Zheng and Mengyao Xia
Abstract
Purpose
Because most employees were forced to work remotely during the lockdowns resulting from the COVID-19 pandemic, there is great concern about how to alleviate increased stress among employees through human resource (HR) practices. Drawing upon the job demands-control (JDC) model and the job demands-resources (JDR) model, this study empirically investigated the direct effect of HR practices on employee stress in enforced remote work and the mediating roles of sources of stress (SoS) and sense of control (SoC).
Design/methodology/approach
Data were collected through an online survey platform called Wenjuanxing from March 15 to 22, 2020 in Hubei, China and from April 22 to 29, 2022 in Shanghai, China. Respondents scanned the QR code on WeChat to enter the platform. A total of 511 valid questionnaires were received with a response rate of 75.4%. After controlling demographic variables, the authors used the mediation modeling and PROCESS tool to test the proposed hypotheses.
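The mediation logic (HR practices affecting stress both directly and via a mediator such as SoS) can be sketched with plain ordinary least squares in place of the PROCESS macro. The simulated data, effect sizes and variable names below are assumptions for illustration only, and the second mediator (SoC) is handled identically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 511                                       # matches the study's sample size
hr = rng.normal(size=n)                       # perceived HR practices
sos = -0.5 * hr + rng.normal(size=n)          # sources of stress (mediator)
stress = -0.3 * hr + 0.6 * sos + rng.normal(size=n)

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, coef_1, coef_2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(sos, hr)[1]                # path a: HR practices -> SoS
b = ols(stress, hr, sos)[2]        # path b: SoS -> stress, controlling for HR
c_prime = ols(stress, hr, sos)[1]  # direct effect of HR practices on stress
print("indirect effect a*b:", a * b, " direct effect:", c_prime)
```

A negative indirect effect (a*b) alongside a negative direct effect corresponds to the partial mediation pattern reported in the findings.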
Findings
HR practices negatively affect stress in enforced remote work among employees. Both SoS and SoC partially mediate the relationship between HR practices and stress. HR practices can alleviate stress via decreasing SoS and enhancing SoC, respectively. Moreover, employee care and training are found to be two key factors of HR practices to help employees alleviate stress in enforced remote work.
Originality/value
Lockdown as an extreme external condition has brought great challenges in employee work arrangement as well as HR practices. Although the relationship between HR practices and job stress has been studied previously, there is a lack of research on the effects of HR practices on stress in enforced remote work due to lockdown. This study advances knowledge of HR practices' stress-reducing effect in the context of remote work and provides suggestions for HR practitioners on ways of alleviating employee stress in remote work.
Yuxin He, Yang Zhao and Kwok Leung Tsui
Abstract
Purpose
Exploring the influencing factors on urban rail transit (URT) ridership is vital for travel demand estimation and urban resources planning. Among various existing ridership modeling methods, the direct demand model, with ordinary least square (OLS) multiple regression as a representative, has considerable advantages over the traditional four-step model. Nevertheless, OLS multiple regression neglects spatial instability and the spatial heterogeneity in the magnitude of the coefficients across the urban area. This paper aims to focus on modeling and analyzing the factors influencing metro ridership at the station level.
Design/methodology/approach
This paper constructs two novel direct demand models based on geographically weighted regression (GWR) for modeling influencing factors on metro ridership from a local perspective. One is GWR with globally implemented LASSO for feature selection, and the other one is geographically weighted LASSO (GWL) model, which is GWR with locally implemented LASSO for feature selection.
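The local estimation at the heart of GWR can be sketched as one weighted least-squares fit per station, with weights from a Gaussian distance kernel. The bandwidth, kernel choice and toy data below are illustrative assumptions, and the LASSO feature-selection step (global or local) is omitted.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Fit one weighted least-squares model per location (Gaussian kernel)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    betas = []
    for c in coords:
        d2 = np.sum((coords - c) ** 2, axis=1)       # squared distances to c
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))     # Gaussian kernel weights
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X1, sw * y, rcond=None)
        betas.append(beta)
    return np.array(betas)                           # one coefficient row per location

# Toy stations: ridership depends linearly on one built-environment variable
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = 2.0 * X[:, 0] + 1.0
print(gwr_coefficients(coords, X, y, bandwidth=1.0))
```

With real data the coefficient rows differ across stations, which is exactly the spatially varying elasticity the paper exploits.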
Findings
The results of a real-world case study of the Shenzhen Metro show that the two local models presented perform better than the traditional global model (OLS) in terms of ridership estimation error and goodness-of-fit. Additionally, the GWL model results in a better fit than the GWR model with global LASSO, indicating that locally implemented LASSO is more effective than global LASSO for accurately estimating Shenzhen metro ridership. Moreover, the spatially varying elasticities provided by both local models demonstrate the models' strong spatial interpretability and their potential in transport planning.
Originality/value
The main contributions are threefold. First, the approach is based on spatial models that account for the spatial autocorrelation of variables, which outperform the traditional global regression model (OLS) in terms of model fit and spatial explanatory power. Second, GWR with global feature selection using LASSO and GWL are compared through a real-world case study on the Shenzhen Metro; that is, the difference between global and local feature selection is discussed. Third, network structures are quantified as a type of factor using measurements from the field of complex networks.
Ushapreethi P and Lakshmi Priya G G
Abstract
Purpose
To find a successful human action recognition (HAR) system for unmanned environments.
Design/methodology/approach
This paper describes the key technology of an efficient HAR system. In this paper, the advancements for three key steps of the HAR system are presented to improve the accuracy of the existing HAR systems. The key steps are feature extraction, feature descriptor and action classification, which are implemented and analyzed. The usage of the implemented HAR system in the self-driving car is summarized. Finally, the results of the HAR system and other existing action recognition systems are compared.
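The final step of the pipeline, linear action classification over fixed-length video descriptors, can be sketched with a least-squares linear classifier. The random "descriptors", action names and dimensions below are stand-ins, not the paper's STIP-based features or its actual classifier.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per, dim = 30, 64
walk = rng.normal(loc=0.5, size=(n_per, dim))    # toy "walking" descriptors
wave = rng.normal(loc=-0.5, size=(n_per, dim))   # toy "hand-waving" descriptors
X = np.vstack([walk, wave])
y = np.array([1.0] * n_per + [-1.0] * n_per)     # action labels

# Least-squares linear classifier on the descriptors
X1 = np.column_stack([np.ones(len(y)), X])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred = np.sign(X1 @ w)
print("training accuracy:", np.mean(pred == y))
```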
Findings
This paper presents the proposed modifications and improvements to the HAR system: a skeleton-based spatiotemporal interest point (STIP) feature, an improved discriminative sparse descriptor for the identified feature, and linear action classification.
Research limitations/implications
The experiments were carried out on captured benchmark data sets; the system still needs to be analyzed in a real-time environment.
Practical implications
The middleware support between the proposed HAR system and the self-driving car system opens up several other challenging research opportunities.
Social implications
The authors’ work provides a way to take a step forward in machine vision, especially in self-driving cars.
Originality/value
The method for extracting the new feature and constructing an improved discriminative sparse feature descriptor has been introduced.
Abstract
Purpose
With the prosperity of grey extension models, the form and structure of grey forecasting models tend to be complicated. How to select the appropriate model structure according to the data characteristics has become an important topic. The purpose of this paper is to design a structure selection method for the grey multivariate model.
Design/methodology/approach
A linear correction term is introduced into the grey model, and the nonhomogeneous grey multivariable model with convolution integral [NGMC(1,N)] is proposed. By incorporating the least absolute shrinkage and selection operator (LASSO), the model parameters are compressed and estimated using the least angle regression (LARS) algorithm.
Findings
By adjusting the values of the parameters, the NGMC(1,N) model can derive various structures of grey models, which shows the structural adaptability of the NGMC(1,N) model. Based on the geometric interpretation of the LASSO method, the structure selection of the grey model can be transformed into sparse parameter estimation, and the structure selection can be realized by LASSO estimation.
Practical implications
This paper not only provides an effective method to identify the key factors of the agricultural drought vulnerability, but also presents a practical model to predict the agricultural drought vulnerability.
Originality/value
Based on the LASSO method, a structure selection algorithm for the NGMC(1,N) model is designed, and the structure selection method is applied to the vulnerability prediction of agricultural drought in Puyang City, Henan Province.
Ahmad Mozaffari, Nasser Lashgarian Azad and Alireza Fathi
Abstract
Purpose
The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques for regularized machine learning. Generally, by defining a proper penalty function, regularization laws are embedded into the structure of common least square solutions to increase the numerical stability, sparsity, accuracy and robustness of regression weights. Several regularization techniques have been proposed so far which have their own advantages and disadvantages. Several efforts have been made to find fast and accurate deterministic solvers to handle those regularization techniques. However, the proposed numerical and deterministic approaches need certain knowledge of mathematical programming, and also do not guarantee the global optimality of the obtained solution. In this research, the authors propose the use of constraint swarm and evolutionary techniques to cope with demanding requirements of regularized extreme learning machine (ELM).
Design/methodology/approach
To implement the required tools for comparative numerical study, three steps are taken. The considered algorithms contain both classical and swarm and evolutionary approaches. For the classical regularization techniques, Lasso regularization, Tikhonov regularization, cascade Lasso-Tikhonov regularization, and elastic net are considered. For swarm and evolutionary-based regularization, an efficient constraint handling technique known as self-adaptive penalty function constraint handling is considered, and its algorithmic structure is modified so that it can efficiently perform the regularized learning. Several well-known metaheuristics are considered to check the generalization capability of the proposed scheme. To test the efficacy of the proposed constraint evolutionary-based regularization technique, a wide range of regression problems are used. Besides, the proposed framework is applied to a real-life identification problem, i.e. identifying the dominant factors affecting the hydrocarbon emissions of an automotive engine, for further assurance on the performance of the proposed scheme.
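As a point of reference for the metaheuristic solvers, the closed-form Tikhonov-regularized ELM baseline can be sketched in a few lines: random, fixed hidden weights plus a ridge-regularized least-squares solve for the output weights. The network size, regularization weight and toy target below are assumptions, not the paper's benchmark settings.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=200)

n_hidden, lam = 50, 1e-3
W = rng.normal(scale=2.0, size=(1, n_hidden))    # random input weights, kept fixed
b = rng.normal(size=n_hidden)                    # random hidden biases
H = np.tanh(X @ W + b)                           # hidden-layer activations

# Tikhonov (ridge) regularized least squares for the output weights
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
pred = H @ beta
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

The swarm and evolutionary schemes in the paper replace this deterministic solve with a constrained search over the output weights, trading closed-form speed for flexibility in the penalty definition.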
Findings
Through extensive numerical study, it is observed that the proposed scheme can be easily used for regularized machine learning. It is indicated that by defining a proper objective function and considering an appropriate penalty function, near global optimum values of regressors can be easily obtained. The results attest the high potentials of swarm and evolutionary techniques for fast, accurate and robust regularized machine learning.
Originality/value
The originality of the paper lies in the use of a novel constraint metaheuristic computing scheme for an effective regularized optimally pruned extreme learning machine (OP-ELM). The self-adaptation of the proposed method frees the user from needing knowledge of the underlying system and increases the degree of automation of OP-ELM. Besides, by using different types of metaheuristics, it is demonstrated that the proposed methodology is a general, flexible scheme that can be combined with different types of swarm and evolutionary optimization techniques to form a regularized machine learning approach.
Abstract
Survival (default) data are frequently encountered in financial (especially credit risk), medical, educational, and other fields, where the “default” can be interpreted as the failure to fulfill debt payments of a specific company or the death of a patient in a medical study or the inability to pass some educational tests.
This paper introduces the basic ideas of Cox's original proportional model for the hazard rates and extends the model within a general framework of statistical data mining procedures. By employing regularization, basis expansion, boosting, bagging, Markov chain Monte Carlo (MCMC) and many other tools, we effectively calibrate a large and flexible class of proportional hazard models.
The proposed methods have important applications in the setting of credit risk. For example, the model for the default correlation through regularization can be used to price credit basket products, and the frailty factor models can explain the contagion effects in the defaults of multiple firms in the credit market.
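The building block that all of these extensions share is Cox's log partial likelihood for a linear predictor. A minimal Breslow-style version, ignoring tie corrections, might look like the sketch below; the survival data are made up.

```python
import numpy as np

def neg_log_partial_likelihood(beta, X, time, event):
    """Cox negative log partial likelihood (Breslow form, no tie correction)."""
    eta = X @ beta                            # linear predictor
    order = np.argsort(-time)                 # sort subjects by descending time
    eta, event = eta[order], np.asarray(event)[order]
    log_risk = np.logaddexp.accumulate(eta)   # log of the running risk-set sum
    return -np.sum((eta - log_risk)[event.astype(bool)])

# Made-up survival data: one covariate, four subjects, one censored
time = np.array([5.0, 3.0, 8.0, 1.0])
event = np.array([1, 0, 1, 1])
X = np.array([[0.2], [-0.1], [0.4], [0.0]])
print(neg_log_partial_likelihood(np.array([0.5]), X, time, event))
```

Regularization then amounts to minimizing this quantity plus a penalty on beta, and the boosting, bagging and MCMC variants replace or resample the linear predictor while keeping the same likelihood core.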
Philip Kostov, Thankom Arun and Samuel Annim
Abstract
Purpose
This paper aims to understand households’ latent decision-making behaviour in accessing financial services. In this analysis, the determinants of consumers’ choice of the pre-entry Mzansi account in South Africa are examined.
Design/methodology/approach
In this study, 102 variables were considered, grouped into the following categories: basic literacy, understanding of financial terms, targets for financial advice, desired financial education and financial perception. Using a computationally efficient variable selection algorithm, the variables that satisfactorily explain the choice of a Mzansi account were identified.
Findings
The Mzansi intervention is appealing to individuals with basic but insufficient financial education. Aspirations seem to be very influential in revealing the choice of financial services, and, to this end, Mzansi is perceived as a pre-entry account not meeting the aspirations of individuals aiming to climb up the financial services ladder. It was found that Mzansi holders view the account mainly as a vehicle for receiving payments, but, on the other hand, are debt-averse and inclined to save. Hence, although there is at present no concrete evidence that the Mzansi intervention increases access to finance via diversification (i.e. by recruiting customers into higher-level accounts and services), this analysis shows that this is very likely to be the case.
Originality/value
The issue of demand-side constraints on access to finance has largely been ignored in the theoretical and empirical literature. This paper takes some preliminary steps towards addressing this gap.
Jiandong Zhou, Xiang Li, Xiande Zhao and Liang Wang
Abstract
Purpose
The purpose of this paper is to deal with the practical challenge faced by modern logistics enterprises to accurately evaluate driving performance with high computational efficiency under the disturbance of road smoothness and to identify significantly associated performance influence factors.
Design/methodology/approach
The authors cooperate with a logistics server (G7) and establish a driving grading system by constructing real-time inertial navigation data-enabled indicators for both driving behaviour (times of aggressive speed change and times of lane change) and road smoothness (average speed and average vibration times of the vehicle body).
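One of the behaviour indicators, times of aggressive speed change, can be sketched as a simple threshold count over a sampled speed trace. The 1 Hz sampling and the 8 km/h-per-sample threshold below are assumptions for illustration, not G7's actual definitions.

```python
def aggressive_speed_changes(speeds_kmh, threshold=8.0):
    """Count consecutive-sample speed jumps larger than `threshold` km/h."""
    return sum(
        1 for prev, cur in zip(speeds_kmh, speeds_kmh[1:])
        if abs(cur - prev) > threshold
    )

trace = [50, 52, 63, 61, 45, 44, 55]   # toy 1 Hz speed samples in km/h
print(aggressive_speed_changes(trace))  # -> 3
```

The lane-change and vibration indicators follow the same pattern: a per-sample statistic from the inertial navigation stream, thresholded and counted over a trip.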
Findings
The developed driving grading system demonstrates highly accurate evaluations in practical use. Data analytics on the constructed indicators prove the significances of both driving behaviour heterogeneity and the road smoothness effect on objective driving grading. The methodologies are validated with real-life tests on different types of vehicles, and are confirmed to be quite effective in practical tests with 95% accuracy according to prior benchmarks. Data analytics based on the grading system validate the hypotheses of the driving fatigue effect, daily traffic periods impact and transition effect. In addition, the authors empirically distinguish the impact strength of external factors (driving time, rainfall and humidity, wind speed, and air quality) on driving performance.
Practical implications
This study has good potential for providing objective driving grading as required by the modern logistics industry to improve transparent management efficiency with real-time vehicle data.
Originality/value
This study contributes to the existing research by comprehensively measuring both road smoothness and driving performance in the driving grading system in the modern logistics industry.
Abstract
Forecasts from dynamic factor models potentially benefit from refining the data set by eliminating uninformative series. This paper proposes to use prediction weights as provided by the factor model itself for this purpose. Monte Carlo simulations and an empirical application to short-term forecasts of euro area, German, and French GDP growth from unbalanced monthly data suggest that both prediction weights and least angle regressions result in improved nowcasts. Overall, prediction weights provide yet more robust results.
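A toy version of the prediction-weight idea: estimate a factor by principal components, regress the target on it, and map the nowcast back to weights on the individual monthly series, so that uninformative series receive small weights and can be screened out. The data-generating process below is entirely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 80, 6
f = rng.normal(size=T)                                # one latent factor
loadings = np.array([1.0, 0.9, 0.8, 0.0, 0.0, 0.0])   # last three series uninformative
X = np.outer(f, loadings) + 0.3 * rng.normal(size=(T, N))
y = f + 0.1 * rng.normal(size=T)                      # target, e.g. a GDP growth proxy

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
fhat = Xc @ Vt[0]                                     # first-PC factor estimate
b = (fhat @ y) / (fhat @ fhat)                        # regress the target on the factor
weights = b * Vt[0]                                   # implied weight of each series
print(np.round(np.abs(weights), 3))                   # informative series dominate
```

Ranking series by these implied weights and dropping the smallest ones is the screening step the abstract compares against least angle regressions.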