Search results

1 – 10 of 68
Article
Publication date: 28 May 2024

Mahlagha Darvishmotevali, Hasan Evrim Arici and Mehmet Ali Koseoglu

Abstract

Purpose

Informed by trait and self-determination theories, the present study aims to extend the knowledge regarding the link between customer satisfaction (CS) and its antecedents, including job autonomy (JA), conscientiousness, customer uncertainty (CU) and extra-role customer service (E-RCS) in the hospitality industry.

Design/methodology/approach

A total of 306 frontline employees were selected from hotels in North Cyprus, Turkey. Psychometric properties, including the validity and reliability of the study variables, were assessed in the first step using confirmatory factor analysis. The data were then analyzed using machine learning methods, mainly three exploratory data mining techniques (lasso regression, decision trees and random forest), as well as partial dependence plots to visualize the role of the suggested predictors on the outcome variable.
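The exploratory pipeline described above (fitting lasso, a decision tree and a random forest, then inspecting a partial dependence curve) can be sketched as follows. The data, feature roles and coefficients are synthetic stand-ins, not the study's survey data; the partial dependence curve is computed by hand to show the idea behind the plots.

```python
# Illustrative sketch: three exploratory data mining techniques plus a manual
# partial dependence curve, on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300
# Columns stand in for job autonomy, conscientiousness and customer uncertainty.
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.2, size=n)

models = {
    "lasso": Lasso(alpha=0.01),
    "tree": DecisionTreeRegressor(max_depth=4, random_state=0),
    "forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X, y)

# Partial dependence of the forest on feature 0: sweep that feature over a grid
# while holding the data fixed, and average the predictions at each grid point.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pd_curve = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v
    pd_curve.append(models["forest"].predict(Xv).mean())

print(pd_curve[-1] > pd_curve[0])  # predicted outcome rises with the predictor
```

scikit-learn also offers `sklearn.inspection.partial_dependence` for the same computation; the manual loop is shown only to make the averaging explicit.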

Findings

Data mining analysis shows that employees who can modify their job objectives are better equipped to satisfy customers in uncertain situations (JA8). In addition, the findings reveal that employees who believe they work hard to accomplish their personal and organizational goals (CON7), while also having the freedom to decide how to approach their job (JA1) and to choose the procedures to utilize (JA2), are more likely to contribute to CS. In general, CS peaked when JA was high, conscientiousness was moderate and CU was low.

Practical implications

This study bridges the gap among various factors at the individual (employee and customer), corporate and macro-environmental levels. Hospitality organizations can cultivate a culture of autonomy and independence by promoting open communication and offering growth and development opportunities. This approach enhances conscientious employees’ engagement, leading to exceptional customer service performance, particularly in uncertain situations.

Originality/value

From the methodology perspective, this work proposes an opportunity for prospective scientists to broaden the trait and self-determination theories research model by relying on the riches of exploratory techniques without the limits imposed by traditional analytical techniques. Further, this study advances the current knowledge about service agility under uncertainty by extending organizational and service management research to consumer behavior literature.

Details

Journal of Hospitality and Tourism Insights, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9792

Article
Publication date: 26 May 2022

Ismail Abiodun Sulaimon, Hafiz Alaka, Razak Olu-Ajayi, Mubashir Ahmad, Saheed Ajayi and Abdul Hye

Abstract

Purpose

Road traffic emissions are generally believed to contribute immensely to air pollution, but the effect of road traffic data sets on air quality (AQ) predictions has not been fully investigated. This paper aims to investigate the effect traffic data sets have on the performance of machine learning (ML) predictive models in AQ prediction.

Design/methodology/approach

To achieve this, the authors have set up an experiment with the control data set having only the AQ data set and meteorological (Met) data set, while the experimental data set is made up of the AQ data set, Met data set and traffic data set. Several ML models (such as extra trees regressor, eXtreme gradient boosting regressor, random forest regressor, K-neighbors regressor and two others) were trained, tested and compared on these individual combinations of data sets to predict the volume of PM2.5, PM10, NO2 and O3 in the atmosphere at various times of the day.
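The control-versus-experimental setup above can be illustrated with a minimal sketch: train the same regressor once without and once with a traffic feature, and compare errors. The data-generating process, feature names and coefficients are hypothetical placeholders, not the study's measurements.

```python
# Sketch: does adding a traffic feature improve AQ prediction? (synthetic data)
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 1000
met = rng.normal(size=(n, 2))        # meteorological features (e.g. wind, temp)
traffic = rng.normal(size=(n, 1))    # traffic volume (hypothetical)
pm25 = 2.0 * traffic[:, 0] + met[:, 0] - 0.5 * met[:, 1] + rng.normal(scale=0.3, size=n)

X_control = met                           # Met data set only
X_experiment = np.hstack([met, traffic])  # Met + traffic data sets

errors = {}
for label, X in [("control", X_control), ("experimental", X_experiment)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, random_state=0)
    model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    errors[label] = mean_absolute_error(y_te, model.predict(X_te))

print(errors["experimental"] < errors["control"])  # traffic data reduces error
```

The same loop can be repeated over several regressors (gradient boosting, random forest, k-neighbors, and so on) to see how differently each one reacts to the extra data set.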

Findings

The results show that the various ML algorithms react differently to the traffic data set, although adding it generally improved the performance of all the ML algorithms considered in this study by at least 20% and reduced error by at least 18.97%.

Research limitations/implications

This research is limited in terms of the study area, and the results cannot be generalized outside of the UK as some of the inherent conditions may not be similar elsewhere. Additionally, only the ML algorithms commonly used in the literature are considered in this research, leaving out a few other ML algorithms.

Practical implications

This study reinforces the belief that the traffic data set has a significant effect on improving the performance of air pollution ML prediction models. Hence, there is an indication that ML algorithms behave differently when trained with a form of traffic data set in the development of an AQ prediction model. This implies that developers and researchers in AQ prediction need to identify the ML algorithms that best serve their purposes before implementation.

Originality/value

The result of this study will enable researchers to focus more on algorithms of benefit when using traffic data sets in AQ prediction.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 31 August 2023

Faisal Mehraj Wani, Jayaprakash Vemuri and Rajaram Chenna

Abstract

Purpose

Near-fault pulse-like ground motions have distinct and very severe effects on reinforced concrete (RC) structures. However, there is a paucity of recorded data from Near-Fault Ground Motions (NFGMs), and thus forecasting the dynamic seismic response of structures under such intense ground motions using conventional techniques has remained a challenge.

Design/methodology/approach

The present study utilizes a 2D finite element model of an RC structure subjected to near-fault pulse-like ground motions with a focus on the storey drift ratio (SDR) as the key demand parameter. Five machine learning classifiers (MLCs), namely decision tree, k-nearest neighbor, random forest, support vector machine and Naïve Bayes classifier, were evaluated to classify the damage states of the RC structure.

Findings

The results such as confusion matrix, accuracy and mean square error indicate that the Naïve Bayes classifier model outperforms other MLCs with 80.0% accuracy. Furthermore, three MLC models with accuracy greater than 75% were trained using a voting classifier to enhance the performance score of the models. Finally, a sensitivity analysis was performed to evaluate the model's resilience and dependability.
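The voting step mentioned above, which combines several classifiers by majority vote, can be sketched with three of the named classifier types on synthetic stand-in data rather than the structural response dataset.

```python
# Sketch: a hard-voting ensemble over three machine learning classifiers.
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic stand-in for (ground-motion features -> damage state) classification.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",  # majority vote over the predicted damage states
)
voter.fit(X_tr, y_tr)
acc = voter.score(X_te, y_te)
print(round(acc, 2))
```

With `voting="soft"` the ensemble averages predicted class probabilities instead, which is often preferable when all base classifiers are well calibrated.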

Originality/value

The objective of the current study is to predict the nonlinear storey drift demand for low-rise RC structures using machine learning techniques, instead of labor-intensive nonlinear dynamic analysis.

Details

International Journal of Structural Integrity, vol. 15 no. 3
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 4 June 2024

Rami Al-Jarrah and Faris M. AL-Oqla

Abstract

Purpose

This work introduces an integrated artificial intelligence scheme to improve the accuracy of predicting the mechanical properties of cellulosic fibers, boosting their reliability for more sustainable industries.

Design/methodology/approach

Fuzzy clustering and a stacked-method approach were utilized to predict the mechanical performance of the fibers. A reference dataset containing comprehensive information regarding the mechanical behavior of lignocellulosic fibers was compiled from previous experimental investigations of the mechanical properties of eight different fiber materials. The data encompass three key factors: density of 0.9–1.6 g/cm3, diameter of 5.9–1,000 µm and microfibrillar angle of 2–49 deg. Initially, the fuzzy clustering technique was applied to the data. To validate the proposed model, ultimate tensile strength and elongation at break were predicted and then examined against unseen data that had not been used during model development.
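The stacking part of the approach can be sketched as a stacked regressor over the three fiber descriptors. Only the descriptor ranges come from the text; the target relationship and base learners are hypothetical placeholders (the fuzzy clustering stage is omitted for brevity).

```python
# Sketch: stacked regression for tensile strength from fiber descriptors (synthetic).
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400
density = rng.uniform(0.9, 1.6, n)      # g/cm^3, range from the study
diameter = rng.uniform(5.9, 1000.0, n)  # µm, range from the study
mfa = rng.uniform(2.0, 49.0, n)         # microfibrillar angle in degrees
# Hypothetical relationship: strength falls with diameter and MFA, rises with density.
strength = 900 - 0.4 * diameter - 8.0 * mfa + 200 * density + rng.normal(scale=20, size=n)

X = np.column_stack([density, diameter, mfa])
X_tr, X_te, y_tr, y_te = train_test_split(X, strength, random_state=0)

# A meta-learner (Ridge) combines out-of-fold predictions of the base models.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("knn", KNeighborsRegressor(n_neighbors=5))],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
r2 = stack.score(X_te, y_te)
print(round(r2, 2))
```

Stacking reduces variance relative to any single base model because the meta-learner weights each model by how well its cross-validated predictions track the target.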

Findings

The results demonstrated remarkably accurate and highly acceptable predictions. The error analysis for the proposed method was discussed using statistical criteria. The stacked model proved effective in significantly reducing the level of uncertainty in predicting the mechanical properties, thereby enhancing the model’s reliability and precision. The study demonstrates the robustness and efficacy of the stacked method in accurately estimating the mechanical properties of lignocellulosic fibers, making it a valuable tool for material scientists and engineers in various applications.

Originality/value

Cellulosic fibers are essential biomaterials for developing green, sustainable bio-products. However, such fibers have diverse characteristics according to their type, chemical composition and structure, causing inconsistent mechanical performance. This work introduces an integrated artificial intelligence scheme to improve the accuracy of predicting the mechanical properties of cellulosic fibers, boosting their reliability for more sustainable industries. Fuzzy clustering and a stacked-method approach were utilized to predict the mechanical performance of the fibers.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Open Access
Article
Publication date: 16 May 2024

Oscar F. Bustinza, Ferran Vendrell-Herrero, Philip Davies and Glenn Parry

Abstract

Purpose

Responding to calls for deeper analysis of the conceptual foundations of service infusion in manufacturing, this paper examines the underlying assumptions that: (i) manufacturing firms incorporating services follow a pathway, moving from pure-product to pure-service offerings, and (ii) profits increase linearly with this process. We propose that these assumptions are inconsistent with the premises of behavioural and learning theories.

Design/methodology/approach

Machine learning algorithms are applied to test whether a successive process, from a basic to a more advanced offering, creates optimal performance. The data were gathered through two surveys administered to USA manufacturing firms in 2021 and 2023. The first included a training sample comprising 225 firms, whilst the second encompassed a testing sample of 105 firms.

Findings

Analysis shows that following the base-intermediate-advanced services pathway is not the best predictor of optimal performance. Developing advanced services and then later adding less complex offerings supports better performance.

Practical implications

Manufacturing firms follow heterogeneous pathways in their service development journey. Non-servitised firms need to carefully consider their contextual conditions when selecting their initial service offering. Starting with a single service offering appears to be a superior strategy over providing multiple services.

Originality/value

The machine learning approach is novel to the field and captures the key conditions for manufacturers to successfully servitise. Insight is derived from the adoption and implementation year datasets for 17 types of services described in previous qualitative studies. The methods proposed can be extended to assess other process-based models in related management fields (e.g., sand cone).

Details

International Journal of Operations & Production Management, vol. 44 no. 13
Type: Research Article
ISSN: 0144-3577

Open Access
Article
Publication date: 12 January 2024

Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel

Abstract

Purpose

This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.

Design/methodology/approach

A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
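The quantitative step above, testing explanatory variables with logistic regression and random forest classification, can be compared in a small sketch. The explanatory variables and their effects are illustrative placeholders, not the manufacturer's schedule data.

```python
# Sketch: comparing logistic regression and random forest on an inaccuracy label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 600
# Hypothetical explanatory variables, e.g. product complexity, order life cycle.
X = rng.normal(size=(n, 4))
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1]             # only two variables carry signal
y = (logits + rng.normal(size=n) > 0).astype(int)  # 1 = inaccurate schedule line

results = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    results[name] = cross_val_score(model, X, y, cv=5).mean()

print({k: round(v, 2) for k, v in results.items()})
```

Refitting the same pair of models on data filtered by planning horizon or inaccuracy level would mirror the paper's test of predictive differences across those conditions.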

Findings

The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.

Practical implications

The findings provide guidelines for exploring and finding patterns in specific variables to improve material delivery schedule inaccuracies and input into predictive forecasting models.

Originality/value

The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects and conceptualizing features that cause and moderate inaccuracies in relation to decoupling management and complexity theory literature.

Details

International Journal of Operations & Production Management, vol. 44 no. 13
Type: Research Article
ISSN: 0144-3577

Open Access
Article
Publication date: 26 April 2024

Luís Jacques de Sousa, João Poças Martins and Luís Sanhudo

Abstract

Purpose

Factors like bid price, submission time, and number of bidders influence the procurement process in public projects. These factors and the award criteria may impact the project’s financial compliance. Predicting budget compliance in construction projects has been traditionally challenging, but Machine Learning (ML) techniques have revolutionised estimations.

Design/methodology/approach

In this study, Portuguese Public Procurement Data (PPPData) was utilised as the model’s input. Notably, this dataset exhibited a substantial imbalance in the target feature. To address this issue, the study evaluated three distinct data balancing techniques: oversampling, undersampling, and the SMOTE method. Next, a comprehensive feature selection process was conducted, leading to the testing of five different algorithms for forecasting budget compliance. Finally, a secondary test was conducted, refining the features to include only those elements that procurement technicians can modify while also considering the two most accurate predictors identified in the previous test.
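The class-balancing step can be illustrated with plain random oversampling; SMOTE itself, which synthesizes new minority samples by interpolation, ships with the separate imbalanced-learn package. The contract features below are synthetic placeholders, not the PPPData records.

```python
# Sketch: random oversampling of the minority class to balance a dataset.
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(4)
X_major = rng.normal(0.0, 1.0, size=(450, 3))  # compliant contracts (majority)
X_minor = rng.normal(1.5, 1.0, size=(50, 3))   # non-compliant contracts (minority)

# Draw minority samples with replacement until both classes are the same size.
X_minor_up = resample(X_minor, replace=True, n_samples=len(X_major), random_state=0)

X_bal = np.vstack([X_major, X_minor_up])
y_bal = np.array([0] * len(X_major) + [1] * len(X_minor_up))
print(int(y_bal.sum()), len(y_bal))  # 450 of 900 samples are the minority class
```

Undersampling is the mirror image (resample the majority down to the minority size), trading information loss for a smaller, balanced training set.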

Findings

The findings indicate that employing the SMOTE method on the scraped data can achieve a balanced dataset. Furthermore, the results demonstrate that the Adam ANN algorithm outperformed others, boasting a precision rate of 68.1%.

Practical implications

The model can aid procurement technicians during the tendering phase by using historical data and analogous projects to predict performance.

Social implications

Although the study reveals that ML algorithms cannot accurately predict budget compliance using procurement data, they can still provide project owners with insights into the most suitable criteria, aiding decision-making. Further research should assess the model’s impact and capacity within the procurement workflow.

Originality/value

Previous research predominantly focused on forecasting budgets by leveraging data from the private construction execution phase. While some investigations incorporated procurement data, this study distinguishes itself by using an imbalanced dataset and anticipating compliance rather than predicting budgetary figures. The model predicts budget compliance by analysing qualitative and quantitative characteristics of public project contracts. The research paper explores various model architectures and data treatment techniques to develop a model to assist the Client in tender definition.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 3 November 2023

Vimala Balakrishnan, Aainaa Nadia Mohammed Hashim, Voon Chung Lee, Voon Hee Lee and Ying Qiu Lee

Abstract

Purpose

This study aims to develop a machine learning model to detect structure fire fatalities using a dataset comprising 11,341 cases from 2011 to 2019.

Design/methodology/approach

Exploratory data analysis (EDA) was conducted prior to modelling, in which ten machine learning models were experimented with.

Findings

The main fatal structure fire risk factors were fires originating from bedrooms, living areas and the cooking/dining areas. The highest fatality rate (20.69%) was reported for fires ignited due to bedding (23.43%), despite a low fire incident rate (3.50%). Using 21 structure fire features, Random Forest (RF) yielded the best detection performance with 86% accuracy, followed by Decision Tree (DT) with bagging (accuracy = 84.7%).

Research limitations/practical implications

Limitations of the study are pertaining to data quality and grouping of categories in the data pre-processing stage, which could affect the performance of the models.

Originality/value

The study is the first of its kind to use risk factors to detect and classify fatal structure fires, particularly focussing on structure fire fatalities. Most of the previous studies examined the importance of fire risk factors and their relationship to the fire risk level.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 2 May 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
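The select-then-train loop described above can be sketched as wrapper-style feature selection feeding an MLP. Random sampling of binary feature masks stands in for the grey wolf optimizer's population-based search (each candidate mask playing the role of one wolf); the dataset is synthetic.

```python
# Sketch: wrapper feature selection + MLP, with random masks standing in for GWO.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
rng = np.random.default_rng(5)

def fitness(mask):
    """Cross-validated accuracy of an MLP trained on the selected features."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(8):  # each candidate mask plays the role of one "wolf"
    mask = rng.random(10) < 0.5
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print(int(best_mask.sum()), round(best_score, 2))
```

A genuine GWO would iteratively move candidate masks toward the three best solutions found so far rather than sampling them independently, but the fitness function and the select-then-classify structure are the same.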

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 3 November 2023

Xiaojie Xu and Yun Zhang

Abstract

Purpose

The Chinese housing market has gone through rapid growth during the past decade, and house price forecasting has evolved to be a significant issue that draws enormous attention from investors, policy makers and researchers. This study investigates neural networks for composite property price index forecasting from ten major Chinese cities for the period of July 2005–April 2021.

Design/methodology/approach

The goal is to build simple and accurate neural network models that contribute to pure technical forecasts of composite property prices. To facilitate the analysis, the authors consider different model settings across algorithms, delays, hidden neurons and data splitting ratios.
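A lagged-input network of the kind described, with six delays and three hidden neurons, can be sketched as follows. The price series is a synthetic stand-in, standardized for stable training; the chronological split imitates the training/testing phases.

```python
# Sketch: a small neural network on lagged values of a synthetic price index.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.arange(220, dtype=float)
series = 100 + 5 * np.sin(t / 6) + rng.normal(scale=0.5, size=t.size)
s = (series - series.mean()) / series.std()  # standardize for stable training

delays, hidden = 6, 3  # six delays and three hidden neurons, as in the study
X = np.column_stack([s[i:len(s) - delays + i] for i in range(delays)])
y = s[delays:]  # predict each value from its six most recent predecessors

split = 180  # chronological split: train on the past, test on the future
model = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(round(rmse, 2))
```

Sweeping `delays`, `hidden` and the split point over a grid, as the authors do across algorithms and settings, turns this into a small model-selection experiment.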

Findings

The authors arrive at a fairly simple neural network with six delays and three hidden neurons, which generates stable performance, with average relative root mean square errors across the ten cities below 1% for the training, validation and testing phases.

Originality/value

Results here could be utilized on a standalone basis or combined with fundamental forecasts to help form perspectives of composite property price trends and conduct policy analysis.

Details

Property Management, vol. 42 no. 3
Type: Research Article
ISSN: 0263-7472
