Search results

1 – 10 of 320
Article
Publication date: 11 October 2023

Chinthaka Niroshan Atapattu, Niluka Domingo and Monty Sutrisna

Abstract

Purpose

Cost overrun in infrastructure projects is a persistent concern that still lacks an adequate solution, and current estimation practice needs improvement to reduce it. This study aimed to identify statistical modelling techniques that could be used to develop cost models producing more reliable cost estimates.

Design/methodology/approach

A bibliographic literature review was conducted using a two-stage selection method to compile the relevant publications from Scopus. Then, VOSviewer (Visualisation of Similarities) was used to generate visualisation maps for keyword co-occurrence analysis and yearly trends in research topics.
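
As a rough illustration of the keyword co-occurrence analysis described above (a sketch only; the study itself used VOSviewer, and the file name and column name below are assumptions, not the authors' data):

```python
# Sketch: counting author-keyword co-occurrences from a bibliographic export.
# Assumes a CSV with an "Author Keywords" column whose entries are ";"-separated.
import csv
from collections import Counter
from itertools import combinations

def keyword_cooccurrences(path):
    pair_counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            keywords = {k.strip().lower() for k in row.get("Author Keywords", "").split(";") if k.strip()}
            # Every unordered pair of keywords in the same record is one co-occurrence.
            for pair in combinations(sorted(keywords), 2):
                pair_counts[pair] += 1
    return pair_counts

if __name__ == "__main__":
    for (kw1, kw2), n in keyword_cooccurrences("scopus_export.csv").most_common(10):
        print(f"{kw1} -- {kw2}: {n}")
```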

Findings

The study found seven primary techniques used as cost models in construction projects: regression analysis (RA), artificial neural networks (ANN), case-based reasoning (CBR), fuzzy logic, Monte Carlo simulation (MCS), support vector machines (SVM) and reference class forecasting (RCF). RA, ANN and CBR were the most researched techniques. Furthermore, it was observed that a model's performance could be improved by combining two or more techniques in one model.

Research limitations/implications

The research was limited to the findings from the bibliometric literature review.

Practical implications

The findings provided an assessment of statistical techniques that the industry can adopt to improve the traditional estimation practice of infrastructure projects.

Originality/value

This study mapped the research carried out on cost-modelling techniques and analysed the trends. It also reviewed the performance of the models developed for infrastructure projects. The findings can inform further research into developing more reliable cost models with better performance using statistical modelling techniques.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 18 August 2023

Gaurav Sarin, Pradeep Kumar and M. Mukund

Abstract

Purpose

Text classification is a widely accepted and adopted technique in organizations to mine and analyze unstructured and semi-structured data. With advances in computing technology, deep learning has become popular among academics and professionals for performing mining and analytical operations. In this work, the authors study the research carried out in the field of text classification using deep learning techniques to identify gaps and opportunities for further research.

Design/methodology/approach

The authors adopted a bibliometric approach in conjunction with visualization techniques to uncover new insights and findings, collecting two decades of data from the Scopus global database. The authors also discuss business applications of deep learning techniques for text classification.
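
For readers unfamiliar with the techniques being surveyed, the sketch below shows a minimal deep-learning text classifier of the kind the reviewed literature covers. It is not a model from the paper; the sample texts, labels and hyperparameters are illustrative assumptions.

```python
# Sketch: a tiny embedding-based neural text classifier (binary classes).
import tensorflow as tf
from tensorflow.keras import layers

texts = tf.constant(["invoice overdue please pay", "team meeting moved to friday",
                     "payment reminder final notice", "lunch plans for the offsite"])
labels = tf.constant([1.0, 0.0, 1.0, 0.0])  # 1 = finance-related, 0 = other

# Map raw strings to fixed-length integer token sequences.
vectorizer = layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vectorizer.adapt(texts)
x = vectorizer(texts)

model = tf.keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=16),   # token ids -> dense word vectors
    layers.GlobalAveragePooling1D(),                    # average vectors over the sequence
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),              # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=30, verbose=0)

print(model.predict(vectorizer(tf.constant(["second payment reminder"]))))
```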

Findings

The study provides an overview of the various publication sources in the combined field of text classification and deep learning. It also presents the prominent authors and their countries working in this field, as well as the most cited articles ranked by citations and country of research. Visualization techniques such as word clouds, network diagrams and thematic maps were used to identify the collaboration network.

Originality/value

The study identifies research gaps, which is its original contribution to the body of literature. To the best of the authors' knowledge, no in-depth study of the field of text classification and deep learning has been performed in such detail. The study provides high value to scholars and professionals by highlighting opportunities for research in this area.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 13 June 2023

G. Deepa, A.J. Niranjana and A.S. Balu

Abstract

Purpose

This study aims at proposing a hybrid model for early cost prediction of a construction project. Early cost prediction is the basic approach to procuring a project within a predefined budget, yet most projects routinely face cost overruns. Furthermore, conventional, manual cost-computing techniques are tedious, time-consuming and error-prone. To deal with such challenges, soft computing techniques such as artificial neural networks (ANNs), fuzzy logic and genetic algorithms are applied in construction management. Each technique has its own constraints, not only in terms of efficiency but also in terms of feasibility, practicability, reliability and environmental impact; however, an appropriate combination of techniques improves the model owing to their complementary strengths.

Design/methodology/approach

This paper proposes a hybrid model by combining machine learning (ML) techniques with ANN to accurately predict the cost of pile foundations. The parameters contributing toward the cost of pile foundations were collected from five different projects in India. Out of 180 collected data entries, 176 entries were finally used after data cleaning. About 70% of the final data were used for building the model and the remaining 30% were used for validation.
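
A minimal sketch of this kind of setup, assuming synthetic pile parameters and costs (the authors' actual dataset and model architecture are not reproduced here), might look like:

```python
# Sketch: an ANN-style cost model trained on a 70/30 split, mirroring the setup above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 176  # number of cleaned data entries reported in the abstract
X = rng.uniform([5, 0.3, 10], [30, 1.2, 200], size=(n, 3))   # assumed features: pile length (m), diameter (m), pile count
y = 1500 * X[:, 0] * X[:, 1] ** 2 * X[:, 2] + rng.normal(0, 5e4, n)  # synthetic cost

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.30, random_state=42)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=42))
model.fit(X_train, y_train)
print(f"validation R^2: {model.score(X_val, y_val):.3f}")
```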

Findings

The proposed model is capable of predicting the pile foundation costs with an accuracy of 97.42%.

Originality/value

Although various cost estimation techniques are available, the appropriate use and combination of ML techniques helps improve prediction accuracy. The proposed model is a valuable addition to the cost estimation of pile foundations.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 15 February 2024

Williams E. Nwagwu

Abstract

Purpose

This study was carried out to examine the volume and annual growth pattern of e-health literacy research, investigate its open-access types and analyse document production by country and by source. The study also mapped the keywords used by authors to represent e-health literacy research and analysed the keyword clusters to reveal the thematic focus of research in the area.

Design/methodology/approach

The research was guided by a bibliometric approach involving visualization using VOSviewer. Data were sourced from the Scopus database using a search syntax that was tested and verified to yield reliable data on the subject matter. The analysis was based on bibliographic data and keywords.

Findings

A total of 1,176 documents were produced between 2006 and 2022. The largest share of the documents (18.90%) was published via hybrid open access, and the USA made the highest contribution. The Journal of Medical Internet Research is the venue for most of the documents on the subject. The 1,176 documents were described by 5,047 keywords (4.29 keywords per document), which were classified into five clusters that aptly capture the thematic structure of research in the area.

Research limitations/implications

e-Health literacy has experienced significant growth in research production from 2006 to 2022, with an average of 69 documents per year. Research on e-health literacy initially had low output but began to increase in 2018. The majority of e-health literacy documents are available through open access, with the USA being the leading contributor. The analysis of keywords reveals the multifaceted nature of e-health literacy, including access to information, attitudes, measurement tools, awareness, age factors and communication. Clusters of keywords highlight different aspects of e-health literacy research, such as accessibility, attitudes, awareness, measurement tools and the importance of age, cancer, caregivers and effective communication in healthcare.

Practical implications

This study has practical implications for health promotion. There is also the element of patient empowerment, whereby patients take an active role in their healthcare: by understanding their health information and having access to resources that help them manage their conditions, patients can make informed decisions about their healthcare. Finally, improved health outcomes can be achieved by improving patients' e-health literacy. Visualisation of e-health literacy can help bridge the gap between patients and healthcare providers, promote patient-centered care and improve health outcomes.

Originality/value

Research production on e-Health literacy has experienced significant growth from 2006 to 2022, with an average of 69 documents per year. Many e-health literacy documents are available through open access, and the USA is the leading contributor. The analysis of keywords reveals the nature of e-health literacy, including access to information, attitudes, measurement tools, awareness and communication. The clusters of keywords highlight different aspects of e-health literacy research, such as accessibility, attitudes, awareness, measurement tools and the importance of age, cancer, caregivers, and effective communication in healthcare.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 16 November 2023

Felix Preshanth Santhiapillai and R.M. Chandima Ratnayake

Abstract

Purpose

The purpose of this study is to investigate the integrated application of business process modeling and notation (BPMN) and value stream mapping (VSM) to improve knowledge work performance and productivity in police services. In order to explore the application of the hybrid BPMN-VSM approach in police services, this study uses the department of digital crime investigation (DCI) in one Norwegian police district as a case study.

Design/methodology/approach

After an appropriate organizational unit was selected for the case study, the relevant service process was identified. BPMN-VSM-based current-state mapping, including time and waste analyses, was used to determine cycle and lead times and to identify value-adding and non-value-adding activities. Subsequently, improvement opportunities were identified, and the current-state process was redesigned through future-state mapping.
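
A minimal sketch of the time analysis such a current-state map supports (the activities and durations below are hypothetical, not the case-study data):

```python
# Sketch: cycle time, lead time and process cycle efficiency from mapped process steps.
steps = [
    # (activity, processing time in hours, waiting time before the activity in hours)
    ("register case",            1.0,  4.0),
    ("acquire digital evidence", 6.0, 24.0),
    ("forensic analysis",       16.0, 72.0),
    ("report writing",           8.0, 16.0),
]

cycle_time = sum(p for _, p, _ in steps)              # value-adding (touch) time
lead_time = cycle_time + sum(w for _, _, w in steps)  # total elapsed time incl. waiting
efficiency = cycle_time / lead_time

print(f"cycle time: {cycle_time:.1f} h, lead time: {lead_time:.1f} h, "
      f"process cycle efficiency: {efficiency:.1%}")
```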

Findings

The study results indicate a 44.4% reduction in process cycle time and an 83.0% reduction in lead time. This promising result suggests that the hybrid BPMN-VSM approach can support the visualization of bottlenecks and likely causes of increased lead time, followed by the systematic identification of avenues for improvement and innovation to remedy the discovered inefficiencies in a complex knowledge-work environment.

Research limitations/implications

This study focused on one department in a Norwegian police district. However, the experience gained can support researchers and practitioners in understanding lean implementation through an integrated BPMN and VSM model, offering a unique insight into the ability to investigate complex systems.

Originality/value

Complex knowledge work processes generally characterize police services due to a high number of activities, resources and stakeholder involvement. Implementing lean thinking in this context is significantly challenging, and the literature on this topic is limited. This study addresses the applicability of the hybrid BPMN-VSM approach in police services with an original public sector case study in Norway.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
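
The sketch below illustrates the general wrapper pattern of feature selection feeding an MLP classifier. It is an assumption-laden illustration, not the authors' GWOFS-MLP implementation: a random subset search stands in for the grey wolf position updates, and the dataset is synthetic.

```python
# Sketch: wrapper-style feature selection feeding an MLP classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    """Score a candidate feature subset by cross-validated ROC-AUC of an MLP."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3, scoring="roc_auc").mean()

best_mask, best_score = None, -1.0
for _ in range(10):                        # candidate "wolves": random binary feature masks
    mask = rng.random(X.shape[1]) < 0.5
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print(f"selected {best_mask.sum()} of {X.shape[1]} features, CV ROC-AUC = {best_score:.3f}")
```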

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 3 November 2023

Salam Abdallah and Ashraf Khalil

Abstract

Purpose

This study aims to understand and lay a foundation for how analytics has been used in depression management. To this end, it conducts a systematic literature review using two techniques – text mining and manual review. The proposed methodology would aid researchers in identifying key concepts and research gaps, which, in turn, will help them establish the theoretical background supporting their empirical research objectives.

Design/methodology/approach

This paper explores a hybrid methodology for literature review (HMLR), using text mining prior to systematic manual review.
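
As an illustration of the text-mining pass that precedes the manual review in such an approach (a sketch only; the corpus and parameters below are placeholders, not the study's):

```python
# Sketch: TF-IDF + clustering to surface candidate concept groups before manual review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "machine learning models for detecting depression from social media posts",
    "screening instruments and clinical depression management in primary care",
    "sentiment analysis of user posts for mental health monitoring",
    "cognitive behavioural therapy outcomes in depression treatment programmes",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, centre in enumerate(km.cluster_centers_):
    top = [terms[j] for j in centre.argsort()[::-1][:5]]   # highest-weight terms per cluster
    print(f"concept cluster {i}: {', '.join(top)}")
```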

Findings

The proposed rapid methodology is an effective tool to automate and speed up the process required to identify key and emerging concepts and research gaps in any specific research domain while conducting a systematic literature review. It assists in populating a research knowledge graph that does not reach all semantic depths of the examined domain yet provides some science-specific structure.

Originality/value

This study presents a new methodology for conducting a literature review for empirical research articles. This study has explored an “HMLR” that combines text mining and manual systematic literature review. Depending on the purpose of the research, these two techniques can be used in tandem to undertake a comprehensive literature review, by combining pieces of complex textual data together and revealing areas where research might be lacking.

Details

Information Discovery and Delivery, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 15 June 2023

Amna Salman and Wasiq Ahmad

Abstract

Purpose

The Operations and Maintenance (O&M) cost of a facility is typically 60–85% of the total life cycle cost of a building, whereas its design and construction cost accounts for only 5–10%. Therefore, enhancing and optimizing the O&M of a facility is a crucial issue. In addition, with the increasing complexity of a building's operating systems, more technologically advanced solutions are required for proactively maintaining a facility. A tool is therefore needed that can optimize facility maintenance and reduce its cost. One such solution is Augmented or Mixed Reality (AR/MR) technology, which can reduce repair time and training time and streamline inspections. Therefore, the purpose of this study is to establish contextual knowledge of AR/MR application in facilities operation and maintenance and to present an implementation framework through the analysis and classification of articles published between 2015 and 2022.

Design/methodology/approach

To effectively understand all AR/MR applications in facilities management (FM), a systematic literature review was performed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol was followed for conducting and reporting the search. Keywords were identified through the concept mapping technique. The Scopus database and Google Scholar were used to find relevant articles, books and conference papers. A thorough bibliometric analysis was conducted using VOSviewer and, subsequently, a thematic analysis was performed for the selected publications.

Findings

The use of AR/MR within facilities O&M could be categorized into five application areas: (1) visualization, (2) maintenance, (3) indoor localization and positioning, (4) information management and (5) indoor environment. A thematic analysis of the literature found that maintenance and indoor localization were the most frequently studied application domains. The chronological evolution of AR/MR in FM is also presented along with the origin of publications, which showed that the technology is out of its infancy and ready for implementation. However, the literature revealed several challenges hindering this goal: (1) reluctance of organizational leadership to bear the cost of hardware and training for employees, (2) lack of BIM use in FM and (3) systems lagging, crashing and failing to register the real environment. A preliminary framework is presented to overcome these challenges.

Originality/value

This study accommodates a variety of application domains within facilities O&M. The publications were systematically selected from the existing literature and then reviewed to exhibit the various AR/MR applications that support FM. No previous literature review has focused on AR and/or MR in FM; this paper fills that gap by not only presenting the applications but also developing an implementation framework.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 7 March 2024

Manpreet Kaur, Amit Kumar and Anil Kumar Mittal

Abstract

Purpose

In past decades, artificial neural network (ANN) models have revolutionised various stock market operations due to their superior ability to deal with nonlinear data and garnered considerable attention from researchers worldwide. The present study aims to synthesize the research field concerning ANN applications in the stock market to a) systematically map the research trends, key contributors, scientific collaborations, and knowledge structure, and b) uncover the challenges and future research areas in the field.

Design/methodology/approach

To provide a comprehensive appraisal of the extant literature, the study adopted the mixed approach of quantitative (bibliometric analysis) and qualitative (intensive review of influential articles) assessment to analyse 1,483 articles published in the Scopus and Web of Science indexed journals during 1992–2022. The bibliographic data was processed and analysed using VOSviewer and R software.
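
For context, a minimal Python sketch (not from any reviewed study) of the kind of ANN price-forecasting model that dominates the surveyed literature follows; the price series is synthetic, and the window size and network shape are illustrative assumptions.

```python
# Sketch: a sliding-window MLP that predicts the next price from the previous ten.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))    # synthetic random-walk price series

window = 10
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                 # next-step price as the target

split = int(0.8 * len(X))                           # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
print(f"out-of-sample R^2: {model.score(X[split:], y[split:]):.3f}")
```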

Findings

The results revealed the proliferation of articles since 2018, with China as the dominant country, Wang J as the most prolific author, “Expert Systems with Applications” as the leading journal, “computer science” as the dominant subject area, and “stock price forecasting” as the predominantly explored research theme in the field. Furthermore, “portfolio optimization”, “sentiment analysis”, “algorithmic trading”, and “crisis prediction” are found as recently emerged research areas.

Originality/value

To the best of the authors' knowledge, the current study is a novel attempt to holistically assess the existing literature on ANN applications across the entire domain of the stock market. The main contribution of the current study lies in discussing the challenges along with viable methodological solutions and providing application-area-wise knowledge gaps for future studies.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 28 March 2024

Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest

Abstract

Purpose

This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated Quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of the process variables influencing product quality, with emphasis on the use of supervised and unsupervised ML techniques in the "Analyze" stage of Six Sigma's DMAIC (Define, Measure, Analyze, Improve and Control) cycle.

Design/methodology/approach

The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.

Findings

This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.

Originality/value

This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
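
A minimal sketch of the PCA-plus-K-means combination described above, assuming a synthetic matrix of process parameters (the case study's variables and settings are not reproduced):

```python
# Sketch: screen process variables with PCA, then cluster runs with K-means.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
process_params = rng.normal(size=(200, 8))          # 200 production runs, 8 process variables (synthetic)

scaled = StandardScaler().fit_transform(process_params)
pca = PCA(n_components=3).fit(scaled)               # retain the dominant directions of variation
reduced = pca.transform(scaled)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("runs per cluster:", np.bincount(clusters))
```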

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731
