Search results

1 – 10 of 941
Article
Publication date: 19 January 2023

Hamidreza Golabchi and Ahmed Hammad

Abstract

Purpose

Existing labor estimation models typically consider only certain construction project types or specific influencing factors. These models are focused on quantifying the total labor hours required, while the utilization rate of the labor during the project is not usually accounted for. This study aims to develop a novel machine learning model to predict the time series of labor resource utilization rate at the work package level.

Design/methodology/approach

More than 250 construction work packages collected over a two-year period are used to identify the main contributing factors affecting labor resource requirements. In addition, a recurrent neural network (RNN) is adopted to develop a forecasting model that predicts the utilization of labor resources over time.
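As a concrete illustration, here is a minimal Keras sketch of the kind of sequence model described: an RNN mapping per-period work package features to a per-period utilization rate. The array shapes, feature count and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal RNN time-series sketch (illustrative assumptions throughout).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical data: 250 work packages, 52 weekly periods,
# 8 contributing factors per period; target is the weekly utilization rate.
X = np.random.rand(250, 52, 8).astype("float32")
y = np.random.rand(250, 52, 1).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(52, 8)),
    layers.SimpleRNN(64, return_sequences=True),  # keep one output per period
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),  # rate in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```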

Findings

This paper presents a robust machine learning approach for predicting labor resources’ utilization rates in construction projects based on the identified contributing factors. The approach yields a reliable time series forecasting model built on the RNN algorithm. The proposed model demonstrates the capability of machine learning algorithms to address long-standing challenges in the construction industry.

Originality/value

The findings point to the suitability of state-of-the-art machine learning techniques for developing predictive models to forecast the utilization rate of labor resources in construction projects, as well as for supporting project managers by providing a forecasting tool for labor estimations at the work package level before detailed activity schedules have been generated. Accordingly, the proposed approach facilitates resource allocation and enables prioritization of available resources to enhance the overall performance of projects.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 28 March 2024

Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest

Abstract

Purpose

This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated Quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of process variables influencing product quality, with emphasis on the use of supervised and unsupervised ML techniques in the “Analyze” stage of Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle.

Design/methodology/approach

The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.

Findings

This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.

Originality/value

This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
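To make the two “Analyze”-stage techniques concrete, here is a hedged scikit-learn sketch of PCA used as a screening substitute, followed by k-means clustering of the reduced process variables. The data, dimensions and thresholds are synthetic stand-ins, not the case study's values.

```python
# Illustrative PCA + k-means sketch for the "Analyze" stage (synthetic data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # 500 production runs, 12 process variables

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.9)           # keep components explaining 90% of variance
scores = pca.fit_transform(X_std)
print("components kept:", pca.n_components_)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
print("cluster sizes:", np.bincount(km.labels_))  # groups of similar process conditions
```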

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 29 January 2024

Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin

Abstract

Purpose

Surface roughness has a serious impact on the fatigue strength, wear resistance and life of mechanical products, yet modeling the evolution of surface quality theoretically requires considerable effort. To predict the surface roughness of milling processing, this paper aims to construct a neural network based on deep learning and data augmentation.

Design/methodology/approach

This study proposes a three-step method. First, a multisource machine tool data acquisition platform is established, combining sensor monitoring with machine tool communication to collect processing signals. Second, feature parameters are extracted to reduce interference and improve the model's generalization ability. Third, depending on the prediction target, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.

Findings

The adaptive synthetic sampling (ADASYN) algorithm improves the classification prediction accuracy of the DBN from 80.67% to 94.23%. After the DBN parameters are optimized by Tent-SSA, roughness prediction accuracy improves significantly: the classification model gains a further 5.77% in accuracy over the ADASYN-optimized baseline, while for regression models different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or maximum absolute error (MaxAE), with error reduced by more than 40% compared to the original model.
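The ADASYN step is a standard resampling technique; the following is a minimal sketch using the imbalanced-learn package on synthetic roughness-class data, purely to show the mechanics rather than the paper's pipeline.

```python
# ADASYN oversampling sketch (synthetic stand-in for the roughness classes).
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import ADASYN

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)
print("before:", Counter(y))           # imbalanced class counts
X_res, y_res = ADASYN(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))       # minority classes synthetically boosted
```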

Originality/value

A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on acquiring environmental variables and enhances the model's applicability. Furthermore, alongside the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve optimization performance.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2633-6596

Article
Publication date: 11 December 2023

Chi-Un Lei, Wincy Chan and Yuyue Wang

Abstract

Purpose

Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curriculum. The purpose of this study is to investigate the connection of existing common core courses in a university to SDG education. In particular, the study examines how common core courses can be classified according to the SDGs using a machine learning approach.

Design/methodology/approach

The authors used machine learning techniques to tag a university's 166 common core courses with SDGs and then analyzed the results through visualizations. The training data set comes from the community-verified OSDG public data set, while key descriptions of the common core courses served as the classification input. The study used the multinomial logistic regression algorithm for the classification. Descriptive analyses at the course, theme and curriculum levels illustrate the proposed approach's functions.
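A minimal sketch of such a pipeline, assuming TF-IDF features over course descriptions feeding a multinomial logistic regression; the tiny corpus and SDG labels below are invented for illustration.

```python
# Toy multinomial logistic regression over course descriptions (invented data).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

courses = [
    "renewable energy systems and low-carbon technology",
    "gender, society and equality in the modern world",
    "water resources and sanitation engineering",
]
sdgs = ["SDG7", "SDG5", "SDG6"]

clf = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(multi_class="multinomial", max_iter=1000),
)
clf.fit(courses, sdgs)
# Expected to favor the energy-related class (SDG7) via the shared "energy" term.
print(clf.predict(["solar power and the energy transition"]))
```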

Findings

The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.

Research limitations/implications

More accurate model training could be achieved by adopting advanced machine learning algorithms (e.g. deep learning or multioutput multiclass algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) that carry SDG classifications. The algorithm's performance should also be compared against other computer-based and human-based SDG classification approaches, within a systematic evaluation framework, to cross-check the results. The study could further be extended by circulating results to students to understand how they would interpret and use them when choosing courses. Finally, the study focused on classifying the topics taught in courses; it cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in those courses. Analyzing courses' assessment tasks and rubrics could show whether those tasks help students understand and take action on the SDGs.

Originality/value

The proposed approach explores the possibility of using machine learning for SDG classification at scale.

Details

International Journal of Sustainability in Higher Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1467-6370

Article
Publication date: 18 September 2023

Fatma Ben Hamadou, Taicir Mezghani, Ramzi Zouari and Mouna Boujelbène-Abbes

Abstract

Purpose

This study aims to assess the predictive performance of various factors on Bitcoin returns, before and during the COVID-19 pandemic, and to use them to develop a robust machine learning-based forecasting model that can support decision-making. More specifically, the authors investigate the impact of investor sentiment on forecasting Bitcoin returns.

Design/methodology/approach

The method uses feature selection techniques to assess the predictive performance of the different factors on Bitcoin returns. The authors then developed a forecasting model for Bitcoin returns by evaluating the accuracy of three machine learning models: a one-dimensional convolutional neural network (1D-CNN), bidirectional long short-term memory (BLSTM) neural networks and a support vector machine.
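For concreteness, here is a minimal Keras sketch of a 1D-CNN return forecaster of the kind compared in the paper; the window length, predictor set and hyperparameters are illustrative assumptions only.

```python
# Illustrative 1D-CNN for next-period return forecasting (synthetic data).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input: 1,000 samples of 30-day windows with 5 predictors
# (e.g. lagged returns, sentiment, EPU, gold, FSI); target: next-day return.
X = np.random.rand(1000, 30, 5).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(30, 5)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local temporal patterns
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(1),                                      # predicted return
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```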

Findings

The findings shed light on the importance of investor sentiment in enhancing the accuracy of return forecasts. Investor sentiment, economic policy uncertainty (EPU), gold and the financial stress index (FSI) are the most important determinants before the COVID-19 outbreak. However, the importance of financial uncertainty (FSI and EPU) decreased significantly during the pandemic, suggesting that investors attached more weight to sentiment than to traditional uncertainty factors. Regarding forecasting accuracy, the 1D-CNN model showed the lowest prediction error before and during COVID-19 and outperformed the other models, making it the best-performing algorithm among those tested, while the BLSTM was the least accurate.

Practical implications

This study helps investors and policymakers better forecast Bitcoin returns through a model that can serve as a decision-making support tool. The results can guide investors towards the determinants that drive Bitcoin return forecasts, giving more weight to sentiment than to financial uncertainty factors during the pandemic crisis.

Originality/value

To the authors’ knowledge, this is the first study to have attempted to construct a novel crypto sentiment measure and use it to develop a Bitcoin forecasting model. In fact, the development of a robust forecasting model, using machine learning techniques, offers a practical value as a decision-making support tool for investment strategies and policy formulation.

Details

EuroMed Journal of Business, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1450-2194

Open Access
Article
Publication date: 2 April 2024

Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman

Abstract

Purpose

In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.

Design/methodology/approach

On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: a lexical algorithm, a support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted to investigate the value and inter-rater agreement of automatically assigned classes, on a sample of 60 records.
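The ensemble idea reduces, in the simplest case, to averaging the per-class scores of the base models and keeping the top-ranked class. The toy sketch below shows that mechanic with invented scores; it is a generic illustration, not Annif's actual implementation.

```python
# Generic score-averaging ensemble over four base classifiers (toy data).
import numpy as np

def ensemble_suggest(score_vectors, classes, weights=None):
    """Combine per-class score vectors (one per base model) by weighted mean."""
    scores = np.average(np.vstack(score_vectors), axis=0, weights=weights)
    return classes[int(np.argmax(scores))], scores

classes  = np.array(["004", "510", "839"])   # toy three-class DDC subset
lexical  = np.array([0.1, 0.7, 0.2])
svc      = np.array([0.2, 0.5, 0.3])
fasttext = np.array([0.3, 0.4, 0.3])
omikuji  = np.array([0.1, 0.6, 0.3])

best, _ = ensemble_suggest([lexical, svc, fasttext, omikuji], classes)
print(best)  # -> "510"
```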

Findings

The best results were achieved using the ensemble approach, which reached 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.

Originality/value

The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 5 March 2024

Sana Ramzan and Mark Lokanan

Abstract

Purpose

This study aims to objectively synthesize the volume of accounting literature on financial statement fraud (FSF) using a systematic literature review research method (SLRRM). The vast FSF literature is analyzed against inclusion and exclusion criteria that filter for articles in the accounting fraud domain published in peer-reviewed quality journals, based on the Australian Business Deans Council (ABDC) journal ranking. A reverse search of the articles' abstracts further narrows the selection to 88 peer-reviewed articles. Examination of these 88 articles indicates that the literature is shifting from traditional statistical approaches towards computational methods, specifically machine learning (ML), for predicting and detecting FSF. This evolution is influenced by the impact of micro and macro variables on FSF and by the inadequacy of audit procedures in detecting red flags of fraud.

Design/methodology/approach

This paper chronicles the cluster of narratives surrounding the inadequacy of current accounting and auditing practices in preventing and detecting financial statement fraud. The primary objective is to objectively synthesize the volume of accounting literature on FSF. More specifically, the study conducts a systematic literature review (SLR) to examine the evolution of FSF research and the emergence of new computational techniques to detect fraud in the accounting and finance literature.

Findings

The storyline of this study illustrates how the literature has evolved from conventional fraud detection mechanisms to computational techniques such as artificial intelligence (AI) and machine learning (ML). The findings also indicate that A* peer-reviewed journals accepted articles that presented a complete picture of the performance measures of computational techniques in their results. This paper therefore contributes to the literature by providing insights into why ML articles on fraud do not make it into top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Originality/value

This paper contributes to the literature by giving researchers insight into the evolution of the accounting fraud literature from traditional statistical methods to machine learning algorithms for fraud detection and prediction.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 28 February 2024

Magdalena Saldana-Perez, Giovanni Guzmán, Carolina Palma-Preciado, Amadeo Argüelles-Cruz and Marco Moreno-Ibarra

Abstract

Purpose

Climate change is a problem that concerns all of us. Despite the information produced by organizations such as the Expert Team on Climate Change Detection and Indices and the United Nations, only a few cities have been planned taking climate change indices into account. This paper aims to study climatic variations, how climate conditions might change in the future and how these changes will affect activities and living conditions in cities, focusing specifically on Mexico City.

Design/methodology/approach

In this approach, two distinct machine learning regression models, k-Nearest Neighbors and Support Vector Regression, were used to predict variations in climate change indices within select urban areas of Mexico City. The calculated indices are based on maximum, minimum and average temperature data collected from the National Water Commission in Mexico and the Scientific Research Center of Ensenada. The methodology involves pre-processing the temperature data to create a training data set for the regression algorithms, computing predictions for each temperature parameter and, finally, assessing the performance of the algorithms using precision metric scores.
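As a hedged sketch of this comparison, the snippet below fits k-NN and SVR regressors on synthetic temperature-like data and scores both with R2, mirroring the metric reported in the findings; all data and parameters are invented.

```python
# k-NN vs. SVR regression comparison on synthetic data, scored with R^2.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))                    # e.g. max/min/mean temperature
y = X @ np.array([1.5, -0.7, 0.4]) + rng.normal(scale=0.05, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (KNeighborsRegressor(n_neighbors=5), SVR(kernel="rbf")):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, round(r2_score(y_te, pred), 3))
```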

Findings

This paper combines a geospatial perspective with computational tools and machine learning algorithms. Of the two regression algorithms used, k-Nearest Neighbors produced superior results, achieving an R2 score of 0.99, compared with 0.74 for Support Vector Regression.

Originality/value

The potential of machine learning algorithms has not yet been fully harnessed for predicting climate indices. This paper also identifies the strengths and weaknesses of each algorithm and shows how the generated estimates can be considered in the decision-making process.

Details

Transforming Government: People, Process and Policy, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1750-6166

Article
Publication date: 7 January 2020

Othmane Touri, Rida Ahroum and Boujemâa Achchab

Abstract

Purpose

Displaced commercial risk (DCR) is a risk specific to Islamic finance whose management is the subject of serious debate among practitioners and researchers. The purpose of this paper is to assess a new approach to managing this risk using machine learning algorithms.

Design/methodology/approach

To this end, the authors apply several machine learning algorithms to a set of financial data on banks from different regions, using the intensity of deposit variation as an indicator.
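Since the abstract does not specify the algorithms, the sketch below simply illustrates the kind of comparison described: several off-the-shelf classifiers predicting a binary high/low deposit-variation label from bank-level features. The features and the label definition are hypothetical, not the authors' data set.

```python
# Generic classifier comparison on synthetic bank-level data (hypothetical).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# 400 bank-period observations, 10 features; y = high deposit variation (0/1).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0),
            SVC()):
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold accuracy
    print(type(clf).__name__, round(acc, 3))
```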

Findings

Results show acceptable prediction accuracy. The model could be used to optimize the prudential reserves for banks and the incomes distributed to depositors.

Research limitations/implications

The model uses several variables as proxies, since data are not available for some specific indicators, such as profit equalization reserves and investment risk reserves.

Originality/value

Previous studies have analyzed the origin and impact of DCR. To the best of the authors’ knowledge, none of them has provided an ex ante management tool for this risk. Furthermore, the authors suggest the use of a new approach based on machine learning algorithms.

Details

International Journal of Emerging Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-8809

Article
Publication date: 5 April 2024

Melike Artar, Yavuz Selim Balcioglu and Oya Erdil

Abstract

Purpose

Our proposed machine learning model contributes to improving the quality of hire by providing a more nuanced and comprehensive analysis of candidate attributes. Instead of focusing solely on obvious factors, such as qualifications and experience, the model also considers various dimensions of fit, including person-job fit and person-organization fit. By integrating these dimensions of fit into the model, we can better predict a candidate’s potential contribution to the organization, hence enhancing the quality of hire.

Design/methodology/approach

The investigation used the competencies of personnel working in the IT department of one of the largest state banks in the country. The data set covers 1,850 individual employees and 13 different characteristics. Python’s “keras” and “seaborn” modules were used for the analysis, and the Gower coefficient was used to determine the distance between records.
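The Gower coefficient handles mixed numeric and categorical attributes by averaging per-feature dissimilarities. Below is a small self-contained sketch of that calculation; the two employee records and the feature ranges are invented for illustration.

```python
# Gower distance for mixed-type records (toy example, invented features).
import numpy as np

def gower_distance(a, b, numeric_ranges):
    """Mean of per-feature dissimilarities: range-scaled absolute difference
    for numeric features, simple 0/1 mismatch for categorical ones."""
    parts = []
    for x, y, rng in zip(a, b, numeric_ranges):
        if rng is None:                      # categorical feature
            parts.append(0.0 if x == y else 1.0)
        else:                                # numeric feature with known range
            parts.append(abs(x - y) / rng)
    return float(np.mean(parts))

# (experience_years, skill_score, department), with numeric ranges 20 and 100.
emp1 = (4, 72, "data")
emp2 = (9, 55, "infra")
print(gower_distance(emp1, emp2, numeric_ranges=(20, 100, None)))  # ~0.473
```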

Findings

The K-NN method produced five clusters, represented in a scatter plot. The plot illustrates the cohesion among similar employees within a cluster and the separation between employees with distinct profiles, showing that the clustering process is effective in increasing both within-cluster similarity and between-cluster dissimilarity.

Research limitations/implications

The investigation evaluated employee competencies only; other criteria collected from employees were not included in the analysis.

Originality/value

This study will be beneficial for academics, professionals and researchers in their attempts to overcome the ongoing obstacles and challenges of securing the proper talent for an organization. Beyond creating a mechanism for using big data, in the form of structured and unstructured data from multiple sources, and deriving insights with ML algorithms, it contributes to the debate on the quality of hire across an entire organization.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747
