Search results (1 – 6 of 6)
Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani
Abstract
Purpose
The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimation for predicting precisely the contract cost at the pre-tendering phase and avoiding future issues that arise through the construction phase.
Design/methodology/approach
This paper integrated artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
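The three error metrics named above can be sketched in a few lines; this is a minimal illustration assuming MSSE is the mean of the squared errors (the abstract does not spell out its exact formula), with made-up ratio values rather than the study's data:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def msse(actual, predicted):
    """Mean sum square error, read here as the mean of the squared errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmsse(actual, predicted):
    """Root of the mean sum square error."""
    return math.sqrt(msse(actual, predicted))

# Hypothetical low-bid/OEC ratios (R) and model predictions, not the study's data.
actual_r = [0.95, 1.02, 0.88, 1.10]
predicted_r = [0.97, 1.00, 0.90, 1.07]
print(round(mape(actual_r, predicted_r), 2))
```

A lower value on each metric indicates a closer match between the predicted and observed ratios.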
Findings
The main finding is that the ANN provides high accuracy, with MAPE, MSSE and RMSSE values of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was also high, with an RMSSE of 0.15 on average.
Practical implications
The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and narrow the gap between the owner's estimate and the lowest submitted offer, supporting better decision-making.
Originality/value
This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecasting the difference between a low bid and an OEC at the pre-tendering phase.
Tulsi Pawan Fowdur and Ashven Sanghan
Abstract
Purpose
The purpose of this paper is to develop a blockchain-based data capture and transmission system that will collect real-time power consumption data from a household electrical appliance and transfer it securely to a local server for energy analytics such as forecasting.
Design/methodology/approach
The data capture system is composed of two current transformer (CT) sensors connected to two different electrical appliances. The CT sensors send the power readings to two Arduino microcontrollers, which in turn connect to a Raspberry Pi for aggregating the data. Blockchain is then enabled on the Raspberry Pi through a Java API so that the data are transmitted securely to a server. The server provides real-time visualization of the data as well as prediction using the multi-layer perceptron (MLP) and long short-term memory (LSTM) algorithms.
Findings
The results of the blockchain analysis demonstrate that when the data readings are transmitted in smaller blocks, the security is much greater than with blocks of larger size. To assess the accuracy of the prediction algorithms, data were collected over a 20 min interval to train the model, and the algorithms were evaluated using the sliding window approach. The mean absolute percentage error (MAPE) was used to assess the accuracy of the algorithms, and MAPEs of 1.62% and 1.99% were obtained for the LSTM and MLP algorithms, respectively.
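The sliding window evaluation described above can be sketched as follows; the forecaster here is a stand-in mean-of-window function (the study trains MLP and LSTM models), and the power readings are hypothetical:

```python
def sliding_window_eval(series, window, forecast_fn):
    """Slide a fixed-size window over the series; at each step, forecast
    the next point from the window and record an (actual, predicted) pair."""
    pairs = []
    for i in range(len(series) - window):
        history = series[i:i + window]
        pairs.append((series[i + window], forecast_fn(history)))
    return pairs

def mape_over_pairs(pairs):
    """Mean absolute percentage error over (actual, predicted) pairs, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in pairs) / len(pairs)

# Stand-in forecaster: the mean of the window. Any callable with the
# same signature (e.g. a trained MLP or LSTM wrapper) would slot in.
mean_forecast = lambda history: sum(history) / len(history)

# Hypothetical appliance power readings (watts), not the paper's data.
power_readings = [120.0, 118.0, 121.0, 119.0, 122.0, 120.0, 123.0, 121.0]
pairs = sliding_window_eval(power_readings, window=3, forecast_fn=mean_forecast)
print(round(mape_over_pairs(pairs), 2))
```

Each window produces one out-of-sample prediction, so the error is always measured on points the forecaster has not seen.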
Originality/value
A detailed performance analysis of the blockchain-based transmission model using time complexity, throughput and latency as well as energy forecasting has been performed.
Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista
Abstract
Purpose
This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though these results determine the company’s bid quality and are contractually binding.
Design/methodology/approach
This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
Findings
This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.
Originality/value
Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.
Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel
Abstract
Purpose
This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.
Design/methodology/approach
A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
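As a small illustration of the logistic regression side of the approach above, the sketch below fits a single explanatory variable to a binary inaccuracy label by gradient descent. The variable name, data and threshold are all hypothetical, not drawn from the study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b for one explanatory variable by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical data: product-complexity score vs. schedule inaccuracy (1 = inaccurate).
complexity = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
inaccurate = [0, 0, 0, 1, 1, 1]

w, b = train_logistic(complexity, inaccurate)
predict = lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0
print([predict(x) for x in complexity])
```

In practice the study uses many variables and also a random forest classifier; this one-variable version only shows the mechanics of the logistic model.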
Findings
The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.
Practical implications
The findings provide guidelines for exploring and finding patterns in specific variables to reduce material delivery schedule inaccuracies, and input into predictive forecasting models.
Originality/value
The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects, and conceptualizing features that cause and moderate inaccuracies in relation to the decoupling management and complexity theory literature.
Abstract
Purpose
This paper aims to explore the relationship between market pricing and design quality within the development industry. Currently, there is a lack of research that examines real estate at the property level. Development quality is widely believed to have diminished over the past decades, while many investors seem uninterested in the design process. The study aims to address these issues through a pricing model that integrates design attributes. It is hoped that empirical findings will invite broader stakeholder interest in the design process.
Design/methodology/approach
The research establishes a framework for assessing spatial compliance across residential developments within London. Compliance is assessed across ten boroughs, with technical space guidelines used as a proxy for design quality. Transaction prices and spatial assessments are aligned within a hedonic pricing model. Empirical findings are used to establish whether undermining spatial standards presents a significant development risk.
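A hedonic pricing model of the kind described above regresses transaction price on property attributes. The sketch below fits only a single attribute (unit size) by closed-form least squares, with invented figures; the actual model would include a spatial-compliance indicator and further attributes:

```python
def ols_simple(xs, ys):
    """Closed-form simple least squares fit: y ≈ a + b * x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical transactions: unit size (sq m) against price (thousand GBP).
size = [37.0, 50.0, 61.0, 70.0, 85.0]
price = [320.0, 410.0, 495.0, 560.0, 660.0]

a, b = ols_simple(size, price)
print(round(a, 1), round(b, 1))
```

The fitted slope b is the implicit marginal price of a square metre; in the full hedonic specification, each attribute's coefficient is read the same way.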
Findings
Findings suggest a relationship between sale time and unit size, with “compliant” units typically transacting earlier than “non-compliant” units. Almost half of the 1,600 apartments surveyed appear to undermine technical guidelines.
Research limitations/implications
It is suggested that an array of design attributes be explored that extend beyond unit size. Additionally, future studies may consider the long-term implications of design quality via secondary transaction prices.
Practical implications
Practical implications include the development of a more scientific approach to design valuation. This may enhance the position of product design management within the development industry and architectural services.
Social implications
Social implications may include improvement in residential design.
Originality/value
The study's innovative approach combines a thorough understanding of both design and economic principles.
Mohammed Ayoub Ledhem and Warda Moussaoui
Abstract
Purpose
This paper aims to apply several data mining techniques for predicting the daily precision improvement of Jakarta Islamic Index (JKII) prices based on big data of symmetric volatility in Indonesia’s Islamic stock market.
Design/methodology/approach
This research uses big data mining techniques to predict the daily precision improvement of JKII prices by applying AdaBoost, k-nearest neighbors, random forest and artificial neural network techniques. It uses big data with symmetric volatility as inputs to the predicting model, whereas the closing prices of JKII were used as the target outputs of the daily precision improvement. To choose the optimal prediction performance according to the criterion of the lowest prediction errors, this research uses four metrics: mean absolute error, mean squared error, root mean squared error and R-squared.
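The four selection metrics listed above can be computed as follows; a minimal sketch with invented index values rather than JKII data:

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(mse(actual, predicted))

def r_squared(actual, predicted):
    """Coefficient of determination: 1 minus residual over total sum of squares."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical index closes and model predictions, not JKII data.
closes = [712.4, 715.1, 709.8, 718.2]
preds = [711.0, 716.0, 711.2, 717.0]
print(round(mae(closes, preds), 3), round(r_squared(closes, preds), 3))
```

The candidate technique with the lowest MAE, MSE and RMSE and the highest R-squared would be selected as optimal.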
Findings
The experimental results show that the optimal technique for predicting the daily precision improvement of JKII prices in Indonesia’s Islamic stock market is AdaBoost, which generates the best predicting performance with the lowest prediction errors and provides the optimum knowledge from the big data of symmetric volatility in Indonesia’s Islamic stock market. In addition, the random forest technique is also robust in predicting the daily precision improvement of JKII prices, as it delivers values close to the optimal performance of the AdaBoost technique.
Practical implications
This research fills a literature gap, namely the absence of big data mining techniques in the prediction process of Islamic stock markets, by delivering new operational techniques for predicting daily stock precision improvement. It also helps investors to manage optimal portfolios and to decrease the risk of trading in global Islamic stock markets based on big data mining of symmetric volatility.
Originality/value
This research is a pioneer in using big data mining of symmetric volatility in the prediction of an Islamic stock market index.