Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani
Abstract
Purpose
The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimation to predict the contract cost precisely at the pre-tendering phase and to avoid issues that could arise during the construction phase.
Design/methodology/approach
This paper integrated artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
Findings
The main finding is that the ANN provides high accuracy, with a MAPE, MSSE and RMSSE of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was also high, with an RMSSE of 0.15 on average.
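For readers unfamiliar with the error metrics cited above, a minimal sketch of how MAPE, MSSE and RMSSE are typically computed follows; the ratio values used are hypothetical illustrations, not data from the study's 94 contracts.

```python
# Illustrative (hypothetical data): the error metrics cited above --
# MAPE, MSSE and RMSSE -- computed for predicted vs. actual values.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def msse(actual, predicted):
    """Mean sum square error (mean of the squared residuals)."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmsse(actual, predicted):
    """Root of the mean sum square error."""
    return msse(actual, predicted) ** 0.5

# Hypothetical low-bid/OEC ratios (R) and model predictions
actual = [0.95, 1.10, 0.88, 1.02]
predicted = [0.97, 1.05, 0.90, 1.00]

print(round(mape(actual, predicted), 2))   # mean absolute % error
print(round(rmsse(actual, predicted), 3))  # in the same units as R
```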
Practical implications
Owners and consultants are expected to use the study's findings to improve the accuracy of the owner's estimate and to narrow the gap between the owner's estimate and the lowest submitted bid for better decision-making.
Originality/value
This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecasting the difference between a low bid and an OEC at the pre-tendering phase.
Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista
Abstract
Purpose
This study aims to review recent advances towards the implementation of ANN and NLP applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though these results determine the company’s bid quality and are contractually binding.
Design/methodology/approach
This paper seeks to compile applications of machine learning (ML) and natural language processing (NLP) in the architecture, engineering and construction (AEC) sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
Findings
This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.
Originality/value
Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.
Xiaojie Xu and Yun Zhang
Abstract
Purpose
For policymakers and participants of financial markets, predicting the trading volumes of financial indices is an important issue. This study aims to address such a prediction problem for the CSI300 nearby futures by using high-frequency data recorded each minute, from the launch date of the futures to roughly two years after all constituent stocks of the futures became shortable, a time period that witnessed significantly increased trading activity.
Design/methodology/approach
This study adopts a neural network to model the irregular trading volume series of the CSI300 nearby futures in order to answer the following questions: can lags of the trading volume series be used to make predictions; if so, how far ahead can the predictions go and how accurate can they be; can predictive information from the trading volumes of the CSI300 spot and first distant futures improve prediction accuracy, and by what magnitude; how sophisticated is the model; and how robust are its predictions?
Findings
The results of this study show that a simple neural network model with 10 hidden neurons can robustly predict the trading volume of the CSI300 nearby futures using 1–20 min ahead trading volume data. The model leads to a root mean square error of about 955 contracts. Utilizing additional predictive information from the trading volumes of the CSI300 spot and first distant futures further improves prediction accuracy, by a magnitude of about 1–2%. This benefit is particularly significant when the trading volume of the CSI300 nearby futures is close to zero. Another benefit, at the cost of a slightly more sophisticated model with more hidden neurons, is that predictions can be generated from 1–30 min ahead trading volume data.
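As a minimal sketch of the setup described above, the lagged trading-volume inputs such a neural network consumes can be arranged as follows; the `make_lags` helper and the toy series are assumptions for illustration, not the authors' code or data.

```python
# Sketch (hypothetical data): arranging a per-minute trading-volume
# series into lagged input rows and one-step-ahead targets -- the
# supervised form a neural network like the one described is trained on.

def make_lags(series, n_lags):
    """Return (X, y): each row of X holds n_lags past values, y the next value."""
    X = [series[i - n_lags:i] for i in range(n_lags, len(series))]
    y = [series[i] for i in range(n_lags, len(series))]
    return X, y

# Toy per-minute volume series (contracts traded)
volumes = [120, 135, 128, 150, 160, 155, 170]
X, y = make_lags(volumes, n_lags=3)
print(X[0], y[0])  # first training pair: three past minutes -> next minute
```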
Originality/value
The results of this study could be used for multiple purposes, including designing financial index trading systems and platforms, monitoring systematic financial risks and building financial index price forecasting.
Abstract
Purpose
The purpose of the study is to establish a predictive model for sustainable wire electrical discharge machining (WEDM) by using an adaptive neuro-fuzzy inference system (ANFIS). Machining was done on Titanium grade 2 alloy, often called the workhorse of the commercially pure titanium industry. ANFIS is a sophisticated and reliable state-of-the-art technique used for prediction and decision-making.
Design/methodology/approach
Keeping in mind the complex nature of WEDM along with the goal of a sustainable manufacturing process, ANFIS was chosen to construct predictive models for the material removal rate (MRR) and power consumption (Pc), which reflect environmental and economic aspects. The machining parameters chosen for the machining process are pulse on-time, wire feed, wire tension, servo voltage, servo feed and peak current.
Findings
The ANFIS-predicted values were verified experimentally, giving a root mean squared error (RMSE) of 0.329 for MRR and 0.805 for Pc. These low RMSE values confirm the accuracy of the models.
Originality/value
Although ANFIS has been available for quite some time, it has not yet been applied to sustainable WEDM of titanium grade-2 alloy with emphasis on MRR and Pc. The novelty of this work is that a predictive model for sustainable machining of titanium grade-2 alloy has been successfully developed using ANFIS, demonstrating the reliability of this technique for developing predictive models and supporting decision-making for sustainable manufacturing.
Nirodha Fernando, Kasun Dilshan T.A. and Hexin (Johnson) Zhang
Abstract
Purpose
The Government’s investment in infrastructure projects is considerably high, especially in bridge construction projects. Government authorities must establish an initial forecasted budget to have transparency in transactions. Early cost estimating is challenging for quantity surveyors because project details are incomplete at the initial stage and standard cost estimating techniques for bridge projects are unavailable. To mitigate the difficulties of traditional preliminary cost estimating methods, a new initial cost estimating model is required that is accurate, user-friendly and straightforward. The research was carried out in Sri Lanka, and this paper aims to develop an artificial neural network (ANN) model for the early cost estimation of concrete bridge systems.
Design/methodology/approach
Construction cost data from 30 concrete bridge projects constructed in Sri Lanka within the past ten years were used to train and test an ANN cost model. The backpropagation technique was used to identify the number of hidden layers, iterations and momentum for the optimum neural network architecture.
Findings
An ANN cost model was developed that furnished the best results, achieving around 90% validation accuracy. It provides the public sector with an accurate, heuristic, flexible and efficient cost estimation technique.
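One common way to express a cost model's validation accuracy is 100% minus the mean absolute percentage error over a held-out validation set; whether the study used exactly this definition is an assumption here, and the cost figures below are hypothetical.

```python
# Sketch (hypothetical costs): validation accuracy expressed as
# 100% minus MAPE. This definition is an assumption for illustration,
# not necessarily the one used in the study.

def validation_accuracy(actual_costs, predicted_costs):
    """Accuracy (%) = 100 - MAPE over the validation set."""
    mape = 100.0 * sum(abs(a - p) / a for a, p in zip(actual_costs, predicted_costs)) / len(actual_costs)
    return 100.0 - mape

# Hypothetical bridge costs in millions (actual vs. ANN-predicted)
actual = [12.0, 30.0, 18.0, 25.0]
predicted = [13.0, 28.5, 17.0, 26.0]
print(round(validation_accuracy(actual, predicted), 1))
```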
Originality/value
The research contributes to the current body of knowledge by providing the most accurate early-stage cost estimate for the concrete bridge systems in Sri Lanka. In addition, the research findings would be helpful for stakeholders and policymakers to propose policy recommendations that positively influence the prediction of the most accurate cost estimate for concrete bridge construction projects in Sri Lanka and other developing countries.
Muralidhar Vaman Kamath, Shrilaxmi Prashanth, Mithesh Kumar and Adithya Tantri
Abstract
Purpose
The compressive strength of concrete depends on many interdependent parameters; its exact prediction is not simple because of the complex processes involved in strength development. This study aims to predict the compressive strength of normal concrete and high-performance concrete using four datasets.
Design/methodology/approach
In this paper, five established individual machine learning (ML) regression models have been compared: decision tree regression, random forest regression, lasso regression, ridge regression and multiple linear regression. Four datasets were studied: two from previous research and two obtained from the laboratory.
Findings
Five statistical indicators, namely the coefficient of determination (R2), mean absolute error, root mean squared error, Nash–Sutcliffe efficiency and mean absolute percentage error, have been used to compare the performance of the models. The models are further compared with previous studies using these statistical indicators. Lastly, sensitivity and parametric analyses were carried out to assess the effect of each predictor variable on performance.
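Of the five indicators listed above, the Nash–Sutcliffe efficiency is perhaps the least familiar; a minimal sketch of its computation follows, with hypothetical strength values rather than data from the study's four datasets.

```python
# Sketch (hypothetical values): the Nash-Sutcliffe efficiency (NSE),
# one of the five indicators listed above. NSE = 1 is a perfect fit;
# NSE = 0 means the model predicts no better than the observed mean.

def nse(observed, simulated):
    """Nash-Sutcliffe model efficiency coefficient."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical compressive strengths (MPa): lab-observed vs. model-predicted
observed = [30.0, 42.5, 55.0, 61.0]
predicted = [31.0, 41.0, 56.5, 60.0]
print(round(nse(observed, predicted), 3))
```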
Originality/value
The findings of this paper will allow readers to understand the factors involved in identifying the machine learning models and concrete datasets. In so doing, we hope that this research advances the toolset needed to predict compressive strength.
Ngoc Tuan Chau, Hepu Deng and Richard Tay
Abstract
Purpose
Understanding the adoption of m-commerce in small and medium-sized enterprises (SMEs) is critical for their sustainable development. This study aims to investigate the adoption of m-commerce in Vietnamese SMEs, leading to the identification of the critical determinants and their relative importance for m-commerce adoption.
Design/methodology/approach
An integrated model is developed by combining the diffusion of innovation theory and the technology–organization–environment framework. Such a model is then tested and validated using structural equation modeling and artificial neural networks in analyzing the survey data.
Findings
The study indicates that perceived security is the most critical determinant for m-commerce adoption. It further shows that customer pressure, perceived compatibility, organizational innovativeness, perceived benefits, managers’ IT knowledge, government support and organizational readiness all play a critical role in the adoption of m-commerce in Vietnamese SMEs.
Practical implications
The findings of this study can lead to the formulation of better strategies and policies for promoting the adoption of m-commerce in Vietnamese SMEs. Such findings are also of practical significance for the diffusion of m-commerce in SMEs in other developing countries.
Originality/value
To the best of the authors’ knowledge, this is the first attempt to explore the adoption of m-commerce in Vietnamese SMEs using a hybrid approach. The application of this approach can lead to better understanding of the relative importance of the critical determinants for the adoption of m-commerce in Vietnamese SMEs.
Mohammed Ayoub Ledhem and Warda Moussaoui
Abstract
Purpose
This paper aims to apply several data mining techniques for predicting the daily precision improvement of Jakarta Islamic Index (JKII) prices based on big data of symmetric volatility in Indonesia’s Islamic stock market.
Design/methodology/approach
This research uses big data mining techniques to predict the daily precision improvement of JKII prices by applying AdaBoost, k-nearest neighbors, random forest and artificial neural networks. It uses big data with symmetric volatility as inputs to the prediction model, whereas the closing prices of the JKII were used as the target outputs of daily precision improvement. To choose the optimal prediction performance according to the criterion of the lowest prediction errors, this research uses four metrics: mean absolute error, mean squared error, root mean squared error and R-squared.
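The model-selection criterion described above, picking the technique with the lowest prediction error, can be sketched as follows; the model names mirror those in the study, but the prices and per-model predictions are hypothetical.

```python
# Sketch (hypothetical predictions): selecting the best technique by the
# lowest root mean squared error, mirroring the model-selection criterion
# described above. All numbers are illustrative only.

def rmse(actual, predicted):
    """Root mean squared error."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

actual = [700.0, 705.0, 698.0, 710.0]  # hypothetical JKII closing prices

predictions = {  # hypothetical out-of-sample predictions per technique
    "AdaBoost":     [701.0, 704.0, 699.0, 709.0],
    "RandomForest": [702.0, 703.5, 700.0, 708.0],
    "KNN":          [704.0, 700.0, 703.0, 705.0],
}

scores = {name: rmse(actual, preds) for name, preds in predictions.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 3))
```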
Findings
The experimental results determine that the optimal technique for predicting the daily precision improvement of the JKII prices in Indonesia’s Islamic stock market is AdaBoost, which generates the optimal predicting performance with the lowest prediction errors and provides the optimum knowledge from the big data of symmetric volatility in Indonesia’s Islamic stock market. In addition, the random forest technique is also considered robust in predicting the daily precision improvement of the JKII prices, as it delivers values close to the optimal performance of the AdaBoost technique.
Practical implications
This research fills a gap in the literature, namely the absence of big data mining techniques in the prediction of Islamic stock markets, by delivering new operational techniques for predicting daily stock precision improvement. It also helps investors manage optimal portfolios and decrease the risk of trading in global Islamic stock markets based on big data mining of symmetric volatility.
Originality/value
This research is a pioneer in using big data mining of symmetric volatility in the prediction of an Islamic stock market index.
João Eduardo Sampaio Brasil, Fabio Antonio Sartori Piran, Daniel Pacheco Lacerda, Maria Isabel Wolf Morandi, Debora Oliveira da Silva and Miguel Afonso Sellitto
Abstract
Purpose
The purpose of this study is to evaluate the efficiency of a Brazilian steelmaking company’s reheating process of the hot rolling mill.
Design/methodology/approach
The research method is quantitative modeling. The main research techniques are data envelopment analysis, Tobit regression and simulation supported by artificial neural networks. The model’s input and output variables consist of the average billet weight, number of billets processed in a batch, gas consumption, thermal efficiency, backlog and production yield within a specific period. The analysis spans 20 months.
Findings
The key findings include an average current efficiency of 81%, identification of influential variables (average billet weight, billet count and gas consumption) and simulated analysis. Among the simulated scenarios, the most promising achieved an average efficiency of 95% through increased equipment availability and billet size.
Practical implications
Additional favorable simulated scenarios entail the utilization of higher pre-reheating temperatures for cold billets, representing a large amount of savings in gas consumption and a reduction in CO2 emissions.
Originality/value
This study’s primary innovation lies in providing steelmaking practitioners with a systematic approach to evaluating and enhancing the efficiency of reheating processes.
Thembekile Debora Sepeng, Ann Lourens, Karl Van der Merwe and Robert Gerber
Abstract
Purpose
The purpose of this paper is to show that third-party quality audits (TPQAs) facilitate performance improvement and give organisations confidence concerning the process quality of services and products. However, because of inconsistencies and unethical practices often observed in the industry, organisations question the significance of TPQAs. A perception exists that their initial purpose as an impartial tool ensuring the quality of deliverables is no longer upheld. Hence, there is a need to determine and explain the influence of ISO 19011 standard interpretation on the application of the audit guidelines in performing TPQAs, to promote consistency in the audit process.
Design/methodology/approach
The study employed document analysis of the ISO 19011 standard, followed by semi-structured interviews with certification body (CB) managers to gain insight into their interpretation and application of the ISO 19011 guidelines.
Findings
The CBs interpret the ISO 19011 guidelines differently; hence, their application of the standard to compile their audit documents differs. Adherence to the principles of auditing, particularly integrity and independence, was found to be the core of the audit process, while disregarding these principles reflects a failure of the real intent of auditing. The inconsistencies in the audit procedures and documents developed for auditors are ascribed to some CBs’ personal interpretations.
Originality/value
The study explores how the different interpretations of the ISO 19011 standard prevail and are perceived by the CBs and auditors. The findings aim to support standardisation and reduce the variations across and amongst the different CBs and auditors.