Search results

1 – 10 of over 1000
Article
Publication date: 19 January 2023

Hamidreza Golabchi and Ahmed Hammad

Abstract

Purpose

Existing labor estimation models typically consider only certain construction project types or specific influencing factors. These models are focused on quantifying the total labor hours required, while the utilization rate of the labor during the project is not usually accounted for. This study aims to develop a novel machine learning model to predict the time series of labor resource utilization rate at the work package level.

Design/methodology/approach

More than 250 construction work packages collected over a two-year period are used to identify the main contributing factors affecting labor resource requirements. Also, a novel machine learning algorithm – Recurrent Neural Network (RNN) – is adopted to develop a forecasting model that can predict the utilization of labor resources over time.
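The abstract gives no implementation details, but the shape of an RNN-based utilization forecast can be sketched in plain Python. This is a minimal illustrative Elman-style recurrence with hand-set, untrained weights; the function names and the toy weekly utilization series are hypothetical, not the authors' model or data.

```python
import math

def rnn_step(x, h, Wx, Wh, bh):
    """One Elman-style recurrent update: the new hidden state mixes the
    current input with the previous hidden state through a tanh nonlinearity."""
    return [
        math.tanh(Wx[i] * x + sum(Wh[i][j] * h[j] for j in range(len(h))) + bh[i])
        for i in range(len(h))
    ]

def rnn_forecast(series, Wx, Wh, Wy, bh):
    """Run the RNN over a utilization-rate series, emitting a one-step-ahead
    prediction (linear readout of the hidden state) at every step."""
    h = [0.0, 0.0]
    preds = []
    for x in series:
        h = rnn_step(x, h, Wx, Wh, bh)
        preds.append(sum(Wy[i] * h[i] for i in range(len(h))))
    return preds

# Hypothetical hand-set weights and a toy weekly utilization series
# (fractions of available labor hours) -- purely illustrative, untrained.
Wx = [0.8, -0.5]
Wh = [[0.1, 0.2], [0.3, -0.1]]
Wy = [0.6, 0.4]
bh = [0.0, 0.0]
series = [0.55, 0.60, 0.72, 0.80, 0.78, 0.65]
preds = rnn_forecast(series, Wx, Wh, Wy, bh)
```

In a trained model the weights would be fitted to the historical work-package data; the point here is only that the hidden state carries information from earlier weeks into each prediction.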

Findings

This paper presents a robust machine learning approach for predicting labor resources’ utilization rates in construction projects based on the identified contributing factors. The machine learning approach is found to result in a reliable time series forecasting model that uses the RNN algorithm. The proposed model demonstrates the capability of machine learning algorithms to address long-standing challenges in the construction industry.

Originality/value

The findings point to the suitability of state-of-the-art machine learning techniques for developing predictive models to forecast the utilization rate of labor resources in construction projects, as well as for supporting project managers by providing a forecasting tool for labor estimation at the work package level before detailed activity schedules have been generated. Accordingly, the proposed approach facilitates resource allocation and enables prioritization of available resources to enhance the overall performance of projects.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 20 July 2023

Mu Shengdong, Liu Yunjie and Gu Jijian

Abstract

Purpose

By introducing Stacking algorithm to solve the underfitting problem caused by insufficient data in traditional machine learning, this paper provides a new solution to the cold start problem of entrepreneurial borrowing risk control.

Design/methodology/approach

The authors introduce semi-supervised learning and integrated learning into the field of migration learning, and innovatively propose the Stacking model migration learning, which can independently train models on entrepreneurial borrowing credit data, and then use the migration strategy itself as the learning object, and use the Stacking algorithm to combine the prediction results of the source domain model and the target domain model.
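The combination step described here can be sketched in plain Python. This is a hedged illustration rather than the authors' implementation: the source-domain and target-domain base models are represented only by their prediction scores, and a small logistic-regression meta-learner learns how to weight them, which is the essence of Stacking. All names and the toy data are hypothetical.

```python
import math

def fit_meta(preds_a, preds_b, labels, lr=0.5, epochs=200):
    """Fit a logistic-regression meta-learner on the two base models'
    predictions -- the core Stacking combination step."""
    w = [0.0, 0.0, 0.0]                      # weights for [bias, model A, model B]
    for _ in range(epochs):
        for pa, pb, y in zip(preds_a, preds_b, labels):
            z = w[0] + w[1] * pa + w[2] * pb
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                        # gradient of the log-loss
            w[0] -= lr * g
            w[1] -= lr * g * pa
            w[2] -= lr * g * pb
    return w

def meta_predict(w, pa, pb):
    """Combined risk score from the two base-model predictions."""
    z = w[0] + w[1] * pa + w[2] * pb
    return 1.0 / (1.0 + math.exp(-z))

# Toy setup: the "source-domain" model A is informative, model B is noisy.
labels  = [0, 0, 0, 1, 1, 1]
preds_a = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]   # scores from the source-domain model
preds_b = [0.5, 0.4, 0.6, 0.5, 0.6, 0.4]   # scores from the target-domain model
w = fit_meta(preds_a, preds_b, labels)
```

On this toy data the meta-learner learns to lean on the informative base model; in the paper the same idea is applied to source-domain and target-domain credit models.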

Findings

The effectiveness of the two migration learning models is evaluated with real entrepreneurial borrowing data. The algorithmic performance of the Stacking-based model migration learning is further improved compared to the benchmark model without migration learning techniques, with the model’s area under the curve (AUC) value rising to 0.8. Comparing the two migration learning models reveals that the model-based migration learning approach performs better. The reason is that the sample-based migration learning approach only eliminates the noisy samples that are relatively less similar to the entrepreneurial borrowing data. However, the calculation and weighting of similarity are subjective, with no unified judgment standard or operation method, so there is no guarantee that the retained traditional credit samples have the same sample distribution and feature structure as the entrepreneurial borrowing data.

Practical implications

From a practical standpoint, it provides a new solution to the cold start problem of entrepreneurial borrowing risk control. The small number of labeled high-quality samples cannot support the learning and deployment of big data risk control models, which is the cold start problem of the entrepreneurial borrowing risk control system. By extending the training sample set with auxiliary domain data through suitable migration learning methods, the prediction performance of the model can be improved to a certain extent and more generalized laws can be learned.

Originality/value

This paper introduces the thought method of migration learning to the entrepreneurial borrowing scenario, provides a new solution to the cold start problem of the entrepreneurial borrowing risk control system and verifies the feasibility and effectiveness of the migration learning method applied in the risk control field through empirical data.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 28 February 2024

Magdalena Saldana-Perez, Giovanni Guzmán, Carolina Palma-Preciado, Amadeo Argüelles-Cruz and Marco Moreno-Ibarra

Abstract

Purpose

Climate change is a problem that concerns all of us. Despite the information produced by organizations such as the Expert Team on Climate Change Detection and Indices and the United Nations, only a few cities have been planned taking climate change indices into account. This paper aims to study climatic variations, how climate conditions might change in the future and how these changes will affect the activities and living conditions in cities, specifically focusing on Mexico City.

Design/methodology/approach

In this approach, two distinct machine learning regression models, k-Nearest Neighbors and Support Vector Regression, were used to predict variations in climate change indices within select urban areas of Mexico City. The calculated indices are based on maximum, minimum and average temperature data collected from the National Water Commission in Mexico and the Scientific Research Center of Ensenada. The methodology involves pre-processing temperature data to create a training data set for the regression algorithms, computing predictions for each temperature parameter and assessing the performance of the algorithms with precision metrics.
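For the simpler of the two regressors, a one-dimensional k-Nearest Neighbors regression can be sketched in a few lines. The data below are invented for illustration and do not come from the paper; a real setup would use the pre-processed temperature records and multi-dimensional features.

```python
def knn_regress(train_x, train_y, query, k=3):
    """Predict a target value as the mean of the k training points whose
    inputs are closest to the query (1-D k-NN regression)."""
    ranked = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    neighbors = ranked[:k]
    return sum(y for _, y in neighbors) / k

# Hypothetical data: yearly mean temperature (x) vs. a warm-days index (y).
years_temp = [14.2, 14.5, 14.9, 15.3, 15.8, 16.1]
warm_days  = [30, 33, 38, 45, 52, 57]
pred = knn_regress(years_temp, warm_days, 15.0, k=3)
```

Because the prediction is a local average, k-NN can track nonlinear index behavior without fitting a global function, which may help explain its strong R² score in the paper's experiments.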

Findings

This paper combines a geospatial perspective with computational tools and machine learning algorithms. Among the two regression algorithms used, it was observed that k-Nearest Neighbors produced superior results, achieving an R² score of 0.99, in contrast to Support Vector Regression, which yielded an R² score of 0.74.

Originality/value

The potential of machine learning algorithms has not yet been fully harnessed for predicting climate indices. This paper also identifies the strengths and weaknesses of each algorithm and shows how the generated estimates can be considered in the decision-making process.

Details

Transforming Government: People, Process and Policy, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1750-6166

Article
Publication date: 12 January 2024

Nasser Abdali, Saeideh Heidari, Mohammad Alipour-Vaezi, Fariborz Jolai and Amir Aghsami

Abstract

Purpose

Nowadays, in many organizations, products are not delivered instantly. So, the customers should wait to receive their needed products, which will form a queueing-inventory model. Waiting a long time in the queue to receive products may cause dissatisfaction and churn of loyal customers, which can be a significant loss for organizations. Although many studies have been done on queueing-inventory models, more practical models in this area are needed, such as considering customer prioritization. Moreover, in many models, minimizing the total cost for the organization has been overlooked.

Design/methodology/approach

This paper compares several machine learning (ML) algorithms to prioritize customers. Using the best-performing ML algorithm, customers are categorized into different classes based on their value and importance. Finally, a mathematical model is developed to determine the allocation policy of on-hand products to each group of customers through multi-channel service retailing, minimizing the organization’s total costs and increasing the loyal customers' satisfaction level.
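The allocation policy in the paper is a mathematical optimization model; as rough intuition only, a greedy sketch that serves higher-value customer classes first might look like this. The class names and value scores are hypothetical, and a real policy would also account for queueing costs.

```python
def allocate(stock, groups):
    """Greedy allocation of on-hand units: higher-value customer classes
    are served first until stock runs out."""
    plan = {}
    for name, value, demand in sorted(groups, key=lambda g: -g[1]):
        served = min(demand, stock)   # serve as much of this class as stock allows
        plan[name] = served
        stock -= served
    return plan

# Hypothetical customer classes produced by the ML prioritization step:
# (class name, estimated value score, units demanded).
groups = [("bronze", 1.0, 40), ("gold", 3.0, 30), ("silver", 2.0, 50)]
plan = allocate(60, groups)
```

With 60 units on hand, the gold class is fully served, silver is partially served and bronze waits, mirroring the idea of prioritizing loyal, high-value customers in the queue.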

Findings

To investigate the application of the proposed method, a real-life case study on vaccine distribution at Imam Khomeini Hospital in Tehran has been addressed to ensure model validation. The proposed model’s accuracy was assessed as excellent based on the results generated by the ML algorithms, problem modeling and case study.

Originality/value

Prioritizing customers based on their value with the help of ML algorithms and optimizing the waiting queues to reduce customers' waiting time based on a mathematical model could lead to an increase in satisfaction levels among loyal customers and prevent their churn. This study’s uniqueness lies in its focus on determining the policy in which customers receive products based on their value in the queue, which is a relatively rare topic of research in queueing management systems. Additionally, the results obtained from the study provide strong validation for the model’s functionality.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 April 2024

Melike Artar, Yavuz Selim Balcioglu and Oya Erdil

Abstract

Purpose

Our proposed machine learning model contributes to improving the quality of hire by providing a more nuanced and comprehensive analysis of candidate attributes. Instead of focusing solely on obvious factors, such as qualifications and experience, our model also considers various dimensions of fit, including person-job fit and person-organization fit. By integrating these dimensions of fit into the model, we can better predict a candidate’s potential contribution to the organization, hence enhancing the quality of hire.

Design/methodology/approach

Within the scope of the investigation, the competencies of the personnel working in the IT department of one of the largest state banks in the country were used. The data collection covers 1,850 individual employees and 13 different characteristics. For analysis, Python’s “keras” and “seaborn” modules were used. The Gower coefficient was used to determine the distance between different records.
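The Gower coefficient is well suited to employee records because it handles mixed attribute types: numeric features contribute a range-normalized absolute difference, categorical features contribute 0 if equal and 1 otherwise, and the per-feature terms are averaged. A minimal sketch, with hypothetical employee records (not the bank's data):

```python
def gower_distance(rec_a, rec_b, numeric_ranges):
    """Gower distance between two records with mixed feature types.
    numeric_ranges maps a feature index to that feature's observed range;
    any other feature is treated as categorical (0 if equal, else 1)."""
    total = 0.0
    for i, (a, b) in enumerate(zip(rec_a, rec_b)):
        if i in numeric_ranges:
            total += abs(a - b) / numeric_ranges[i]   # range-normalized numeric term
        else:
            total += 0.0 if a == b else 1.0           # categorical mismatch term
    return total / len(rec_a)

# Hypothetical records: (years of experience, skill level 1-5, department).
emp1 = (4, 3, "backend")
emp2 = (6, 3, "backend")
emp3 = (1, 5, "data")
ranges = {0: 10.0, 1: 4.0}    # observed ranges of the two numeric features
d12 = gower_distance(emp1, emp2, ranges)
d13 = gower_distance(emp1, emp3, ranges)
```

Records that agree on most attributes end up closer (smaller distance), which is what the subsequent clustering step relies on.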

Findings

The K-NN method resulted in the formation of five clusters, represented as a scatter plot. The plot illustrates the cohesion between similar employees within a cluster and the separation between distinct clusters. This shows that the clustering process is effective in improving both within-cluster similarity and between-cluster dissimilarity.

Research limitations/implications

Employee competencies were evaluated within the scope of the investigation. Additionally, other criteria requested from the employee were not included in the application.

Originality/value

This study will be beneficial for academics, professionals and researchers in their attempts to overcome the ongoing obstacles and challenges related to securing the proper talent for an organization. In addition to creating a mechanism to use big data in the form of structured and unstructured data from multiple sources and deriving insights using ML algorithms, it contributes to the debates on the quality of hire across an entire organization.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 10 November 2023

Yong Gui and Lanxin Zhang

Abstract

Purpose

Influenced by the constantly changing manufacturing environment, no single dispatching rule (SDR) can consistently obtain better scheduling results than other rules for the dynamic job-shop scheduling problem (DJSP). Although the dynamic SDR selection classifier (DSSC) mined by traditional data-mining-based scheduling method has shown some improvement in comparison to an SDR, the enhancement is not significant since the rule selected by DSSC is still an SDR.

Design/methodology/approach

This paper presents a novel data-mining-based scheduling method for the DJSP with machine failure aiming at minimizing the makespan. Firstly, a scheduling priority relation model (SPRM) is constructed to determine the appropriate priority relation between two operations based on the production system state and the difference between their priority values calculated using multiple SDRs. Subsequently, a training sample acquisition mechanism based on the optimal scheduling schemes is proposed to acquire training samples for the SPRM. Furthermore, feature selection and machine learning are conducted using the genetic algorithm and extreme learning machine to mine the SPRM.
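The priority-value inputs to the SPRM can be illustrated with a few common SDRs. The sketch below only computes per-rule priority differences between two candidate operations, the kind of feature vector an SPRM could learn from; it does not implement the genetic-algorithm feature selection or the extreme learning machine, and the operation attributes are hypothetical.

```python
def rule_priorities(op):
    """Priority values for one operation under three common dispatching
    rules (lower value = schedule first)."""
    return {
        "SPT": op["proc_time"],          # shortest processing time
        "EDD": op["due_date"],           # earliest due date
        "MWKR": -op["work_remaining"],   # most work remaining (negated)
    }

def priority_differences(op_a, op_b):
    """Per-rule priority differences between two operations; a negative
    entry means that rule favors scheduling op_a first."""
    pa, pb = rule_priorities(op_a), rule_priorities(op_b)
    return {rule: pa[rule] - pb[rule] for rule in pa}

op_a = {"proc_time": 3, "due_date": 10, "work_remaining": 12}
op_b = {"proc_time": 5, "due_date": 8,  "work_remaining": 6}
diffs = priority_differences(op_a, op_b)
```

Here SPT and MWKR favor op_a while EDD favors op_b; rather than fixing one rule, the SPRM learns from such disagreements together with the production system state.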

Findings

Results from numerical experiments demonstrate that the SPRM, mined by the proposed method, not only achieves better scheduling results in most manufacturing environments but also maintains a higher level of stability in diverse manufacturing environments than an SDR and the DSSC.

Originality/value

This paper constructs an SPRM and mines it using data mining technologies to obtain better results than an SDR and the DSSC in various manufacturing environments.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 25 January 2022

Anil Kumar Maddali and Habibulla Khan

Abstract

Purpose

Currently, the design and technological features of voices, and their analysis in various applications, are being simulated to meet the requirement of communicating at a greater distance or more discreetly. The purpose of this study is to explore how voices and their analyses are used in modern literature to generate a variety of solutions, of which only a few successful models exist.

Design/methodology/approach

The mel-frequency cepstral coefficient (MFCC), average magnitude difference function, cepstrum analysis and other voice characteristics are effectively modeled and implemented using mathematical modeling with variable weight parameters for each algorithm, which can be used with or without noise. The design characteristics and their weights are improved with different supervised algorithms that regulate the design model simulation.
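Among the listed voice characteristics, the average magnitude difference function (AMDF) is straightforward to sketch: for each candidate lag, average the absolute difference between the frame and its shifted copy, then look for the lag that minimizes the curve. The frame below is a synthetic tone, not data from the paper.

```python
import math

def amdf(signal, max_lag):
    """Average magnitude difference function: for each lag, the mean
    absolute difference between the signal and its shifted copy.
    Local minima indicate candidate pitch periods."""
    n = len(signal)
    return [
        sum(abs(signal[i] - signal[i + lag]) for i in range(n - lag)) / (n - lag)
        for lag in range(1, max_lag + 1)
    ]

# A toy "voiced" frame: a 100 Hz tone sampled at 1 kHz (period = 10 samples).
frame = [math.sin(2 * math.pi * 100 * t / 1000) for t in range(100)]
curve = amdf(frame, 30)
best_lag = min(range(len(curve)), key=lambda i: curve[i]) + 1
```

For a perfectly periodic frame, the curve dips to near zero at every multiple of the pitch period; real voiced speech produces shallower but still detectable minima.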

Findings

Different data models have been influenced by the parametric range and solution analysis in different parameter spaces, such as the frequency or time domain, with features computed without noise, with noise and after noise reduction. The frequency response of the current design can be analyzed through windowing techniques.

Originality/value

A new model and its implementation scenario employ pervasive computational algorithms (PCA), such as the hybrid PCA with AdaBoost (HPCA), PCA with bag of features and improved PCA with bag of features, relating different features such as MFCC, power spectrum, pitch and windowing techniques, which are calculated using the HPCA. The features are accumulated in matrix formulations and govern the design feature comparison and its classification for improved performance parameters, as mentioned in the results.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 8 August 2023

Smita Abhijit Ganjare, Sunil M. Satao and Vaibhav Narwane

Abstract

Purpose

In today's fast-developing era, the volume of data is increasing day by day, and traditional methods lag in managing it efficiently. The adoption of machine learning techniques helps in the efficient management of data and draws relevant patterns from that data. The main aim of this paper is to provide an overview of the adoption of machine learning techniques in different sectors of the manufacturing supply chain.

Design/methodology/approach

This paper presents a rigorous systematic literature review of the adoption of machine learning techniques in the manufacturing supply chain from 2015 to 2023. Out of 511 papers, 74 are shortlisted for detailed analysis.

Findings

The papers are subcategorized into eight sections, which helps in scrutinizing the work done in the manufacturing supply chain. The review identifies the contribution of machine learning techniques in the manufacturing field, most notably in the automotive sector.

Practical implications

The research is limited to papers published from 2015 to 2023. Book chapters, unpublished work, white papers and conference papers are not considered, and only English-language articles and review papers are studied. This study supports the adoption of machine learning techniques in the manufacturing supply chain.

Originality/value

This study is one of the few that investigate machine learning techniques in the manufacturing sector and supply chain through a systematic literature survey.

Highlights

  1. A comprehensive understanding of Machine Learning techniques is presented.

  2. The state of the art in the adoption of Machine Learning techniques is investigated.

  3. The systematic literature review (SLR) methodology is proposed.

  4. An innovative study of Machine Learning techniques in manufacturing supply chain.

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731

Article
Publication date: 5 March 2024

Sana Ramzan and Mark Lokanan

Abstract

Purpose

This study aims to objectively synthesize the volume of accounting literature on financial statement fraud (FSF) using a systematic literature review research method (SLRRM). This paper analyzes the vast FSF literature based on inclusion and exclusion criteria. These criteria filter articles that are present in the accounting fraud domain and are published in peer-reviewed quality journals based on the Australian Business Deans Council (ABDC) journal ranking. Lastly, a reverse search, analyzing the articles' abstracts, further narrows the search to 88 peer-reviewed articles. After examining these 88 articles, the results imply that the current literature is shifting from traditional statistical approaches towards computational methods, specifically machine learning (ML), for predicting and detecting FSF. This evolution of the literature is influenced by the impact of micro and macro variables on FSF and the inadequacy of audit procedures to detect red flags of fraud.

Design/methodology/approach

This paper chronicles the cluster of narratives surrounding the inadequacy of current accounting and auditing practices in preventing and detecting Financial Statement Fraud. The primary objective of this study is to objectively synthesize the volume of accounting literature on financial statement fraud. More specifically, this study will conduct a systematic literature review (SLR) to examine the evolution of financial statement fraud research and the emergence of new computational techniques to detect fraud in the accounting and finance literature.

Findings

The storyline of this study illustrates how the literature has evolved from conventional fraud detection mechanisms to computational techniques such as artificial intelligence (AI) and machine learning (ML). The findings also concluded that A* peer-reviewed journals accepted articles that showed a complete picture of performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing insights to researchers about why ML articles on fraud do not make it to top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Originality/value

This paper contributes to the literature by tracing the evolution of the accounting fraud literature from traditional statistical methods to machine learning algorithms for fraud detection and prediction.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 7 January 2020

Othmane Touri, Rida Ahroum and Boujemâa Achchab

Abstract

Purpose

The displaced commercial risk (DCR) is one of the specific risks in Islamic finance that creates a serious debate among practitioners and researchers about its management. The purpose of this paper is to assess a new approach to managing this risk using machine learning algorithms.

Design/methodology/approach

To attempt this purpose, the authors use several machine learning algorithms applied to a set of financial data related to banks from different regions and consider the deposit variation intensity as an indicator.

Findings

Results show acceptable prediction accuracy. The model could be used to optimize the prudential reserves for banks and the incomes distributed to depositors.

Research limitations/implications

However, the model uses several variables as proxies since data are not available for some specific indicators, such as the profit equalization reserves and the investment risk reserves.

Originality/value

Previous studies have analyzed the origin and impact of DCR. To the best of the authors’ knowledge, none has provided an ex ante management tool for this risk. Furthermore, the authors suggest a new approach based on machine learning algorithms.

Details

International Journal of Emerging Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-8809
