Search results

1 – 6 of 6
Article
Publication date: 25 April 2024

Abdul-Manan Sadick, Argaw Gurmu and Chathuri Gunarathna

Abstract

Purpose

Developing a reliable cost estimate at the early stage of construction projects is challenging due to inadequate project information. Most of the information during this stage is qualitative, posing additional challenges to achieving accurate cost estimates. Additionally, there is a lack of tools that use qualitative project information and forecast the budgets required for project completion. This research, therefore, aims to develop a model for setting project budgets (excluding land) during the pre-conceptual stage of residential buildings, where project information is mainly qualitative.

Design/methodology/approach

Due to the qualitative nature of project information at the pre-conception stage, a natural language processing model, DistilBERT (Distilled Bidirectional Encoder Representations from Transformers), was trained to predict the cost range of residential buildings at the pre-conception stage. The training and evaluation data included 63,899 building permit activity records (2021–2022) from the Victorian State Building Authority, Australia. The input data comprised the project description of each record, which included project location and basic material types (floor, frame, roofing, and external wall).

Findings

This research designed a novel tool for predicting the project budget based on preliminary project information. The model achieved 79% accuracy in classifying residential buildings into three cost classes ($100,000-$300,000, $300,000-$500,000 and $500,000-$1,200,000), with F1-scores of 0.85, 0.73 and 0.74, respectively. Additionally, the results show that the model learnt the contextual relationship between qualitative data, such as project location, and cost.
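The three cost ranges reported above can be expressed as a simple labelling function. The sketch below is illustrative only; the boundary handling (lower bound inclusive, upper bound exclusive except for the top class) is our assumption, since the abstract does not state how boundary values were assigned.

```python
def cost_class(cost_aud):
    """Map an estimated construction cost (AUD) to one of the three
    cost ranges the model classifies residential buildings into.
    Boundary treatment is an assumption, not taken from the paper."""
    if 100_000 <= cost_aud < 300_000:
        return "$100,000-$300,000"
    if 300_000 <= cost_aud < 500_000:
        return "$300,000-$500,000"
    if 500_000 <= cost_aud <= 1_200_000:
        return "$500,000-$1,200,000"
    raise ValueError(f"cost {cost_aud} is outside the modelled ranges")

print(cost_class(250_000))   # $100,000-$300,000
```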

Research limitations/implications

The current model was developed using data from the state of Victoria, Australia; hence, it would not return relevant outcomes for other contexts. However, future studies can adopt the same methods to develop similar models for their own contexts.

Originality/value

This research is the first to leverage a deep learning model, DistilBERT, for cost estimation at the pre-conception stage using basic project information like location and material types. Therefore, the model would contribute to overcoming data limitations for cost estimation at the pre-conception stage. Residential building stakeholders, like clients, designers, and estimators, can use the model to forecast the project budget at the pre-conception stage to facilitate decision-making.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 26 September 2022

Christian Nnaemeka Egwim, Hafiz Alaka, Oluwapelumi Oluwaseun Egunjobi, Alvaro Gomes and Iosif Mporas

Abstract

Purpose

This study aims to compare and evaluate commonly used machine learning (ML) algorithms for developing models that assess the energy efficiency of buildings.

Design/methodology/approach

This study first combined building energy efficiency ratings from several data sources and used them to create predictive models with a variety of ML methods. Second, to test the hypothesis that ensemble techniques outperform individual learners, this study designed a hybrid stacking ensemble approach based on the best-performing bagging and boosting ensemble methods generated from its predictive analytics.
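A stacking ensemble of this kind can be sketched with scikit-learn. The synthetic features, the choice of random forest (bagging) and gradient boosting (boosting) as base learners, and the linear meta-learner are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor, StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for building features and an efficiency rating.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 5))
y = X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0]) + rng.normal(scale=0.1, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stack one bagging learner and one boosting learner under a linear meta-model.
stack = StackingRegressor(
    estimators=[("bagging", RandomForestRegressor(n_estimators=25, random_state=0)),
                ("boosting", GradientBoostingRegressor(random_state=0))],
    final_estimator=LinearRegression(),
)
stack.fit(X_tr, y_tr)
r2 = stack.score(X_te, y_te)
```

Out-of-fold predictions from the base learners become the meta-learner's input features, which is what distinguishes stacking from simple averaging of base models.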

Findings

Based on performance evaluation metric scores, the extra trees model was shown to be the best predictive model. More importantly, this study demonstrated that the cumulative result of ensemble ML algorithms is generally better in terms of predictive accuracy than that of a single method. Finally, it was discovered that stacking is a superior ensemble approach to bagging and boosting for analysing building energy efficiency.

Research limitations/implications

While the proposed contemporary method of analysis is assumed to be applicable in assessing the energy efficiency of buildings within the sector, the unique data transformation used in this study may not, as is typical of any data-driven model, be transferable to data from regions other than the UK.

Practical implications

This study aids in the initial selection of appropriate and high-performing ML algorithms for future analysis. This study also assists building managers, residents, government agencies and other stakeholders in better understanding contributing factors and making better decisions about building energy performance. Furthermore, this study will assist the general public in proactively identifying buildings with high energy demands, potentially lowering energy costs by promoting avoidance behaviour and assisting government agencies in making informed decisions about energy tariffs when this novel model is integrated into an energy monitoring system.

Originality/value

This study fills a gap in the literature on selecting appropriate ML algorithms for assessing building energy efficiency. More importantly, this study demonstrated that the cumulative result of ensemble ML algorithms is generally better in terms of predictive accuracy than that of a single method.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 11 March 2024

Jianjun Yao and Yingzhao Li

Abstract

Purpose

Weak repeatability is observed in handcrafted keypoints, leading to tracking failures in visual simultaneous localization and mapping (SLAM) systems under challenging scenarios such as illumination change, rapid rotation and large variations in viewing angle. In contrast, learning-based keypoints exhibit higher repeatability but entail considerable computational costs. This paper proposes an innovative keypoint-extraction algorithm that strikes a balance between precision and efficiency, aiming to attain accurate, robust and versatile visual localization in scenes of formidable complexity.

Design/methodology/approach

SiLK-SLAM initially refines the cutting-edge learning-based extractor, SiLK, and introduces an innovative postprocessing algorithm for keypoint homogenization and operational efficiency. Furthermore, SiLK-SLAM devises a reliable relocalization strategy called PCPnP, leveraging progressive and consistent sampling, thereby bolstering its robustness.
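The abstract does not detail the homogenization step, so the sketch below shows only one common strategy for spreading keypoints evenly: keep the highest-scoring keypoint in each cell of a regular grid. The function name and cell size are our own, purely illustrative.

```python
def homogenize(keypoints, cell_size=32):
    """Keep only the highest-scoring keypoint within each grid cell,
    spreading detections evenly across the image.

    keypoints: iterable of (x, y, score) tuples in pixel coordinates.
    """
    best = {}
    for x, y, score in keypoints:
        cell = (int(x) // cell_size, int(y) // cell_size)
        if cell not in best or score > best[cell][2]:
            best[cell] = (x, y, score)
    return sorted(best.values())

# Two keypoints share the first 32x32 cell; only the stronger survives.
kept = homogenize([(5, 5, 0.9), (10, 8, 0.7), (100, 40, 0.6)])
print(kept)   # [(5, 5, 0.9), (100, 40, 0.6)]
```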

Findings

Empirical evaluations conducted on the TUM, KITTI and EuRoC data sets substantiate SiLK-SLAM’s superior localization accuracy compared to ORB-SLAM3 and other methods. Relative to ORB-SLAM3, SiLK-SLAM improves localization accuracy by up to 70.99%, 87.20% and 85.27% on the three data sets, respectively. The relocalization experiments demonstrate SiLK-SLAM’s capability to produce precise and repeatable keypoints, showcasing its robustness in challenging environments.

Originality/value

The SiLK-SLAM achieves exceedingly elevated localization accuracy and resilience in formidable scenarios, holding paramount importance in enhancing the autonomy of robots navigating intricate environments. Code is available at https://github.com/Pepper-FlavoredChewingGum/SiLK-SLAM.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 May 2024

Bikesh Manandhar, Thanh-Canh Huynh, Pawan Kumar Bhattarai, Suchita Shrestha and Ananta Man Singh Pradhan

Abstract

Purpose

This research is aimed at preparing landslide susceptibility using spatial analysis and soft computing machine learning techniques based on convolutional neural networks (CNNs), artificial neural networks (ANNs) and logistic regression (LR) models.

Design/methodology/approach

Using a geographic information system (GIS), a spatial database including topographic, hydrologic, geological and land-use data is created for the study area. The data are randomly divided into a training set (70%), a validation set (10%) and a test set (20%).
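The 70/10/20 split described above can be sketched as a simple shuffled partition; the function name and fixed seed are illustrative, not taken from the paper.

```python
import random

def split_70_10_20(records, seed=0):
    """Randomly partition records into training (70%), validation (10%)
    and test (20%) subsets."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_70_10_20(range(100))
print(len(train), len(val), len(test))   # 70 10 20
```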

Findings

The validation findings demonstrate that the CNN model (89% success rate, 84% prediction rate) predicts landslides better than the ANN model (84% success rate, 81% prediction rate), which in turn outperforms the LR model (82% success rate, 79% prediction rate). As the most accurate of the three, the CNN is utilized to produce the final susceptibility map.

Research limitations/implications

Land-cover and geological data are limited at large scales, making it challenging to develop accurate and comprehensive susceptibility maps.

Practical implications

It helps to identify areas with a higher likelihood of experiencing landslides. This information is crucial for assessing the risk posed to human lives, infrastructure and properties in these areas. It allows authorities and stakeholders to prioritize risk management efforts and allocate resources more effectively.

Social implications

The social implications of a landslide susceptibility map are profound, as it provides vital information for disaster preparedness, risk mitigation and land-use planning. Communities can utilize these maps to identify vulnerable areas, implement zoning regulations and develop evacuation plans, ultimately safeguarding lives and property. Additionally, access to such information promotes public awareness and education about landslide risks, fostering a proactive approach to disaster management. However, reliance solely on these maps may also create a false sense of security, necessitating continuous updates and integration with other risk assessment measures to ensure effective disaster resilience strategies are in place.

Originality/value

Landslide susceptibility mapping provides a proactive approach to identifying areas at higher risk of landslides before any significant events occur. Researchers continually explore new data sources, modeling techniques and validation approaches, leading to a better understanding of landslide dynamics and susceptibility factors.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 12 September 2023

Zengli Mao and Chong Wu

Abstract

Purpose

Because the dynamic characteristics of the stock market are nonlinear, it is unclear whether stock prices can be predicted. This paper aims to explore the predictability of the stock price index from a long-memory perspective. The authors propose hybrid models to predict the next-day closing price index and explore the policy effects behind stock prices.

Design/methodology/approach

The authors found long memory in the stock price index series using modified R/S and GPH tests and propose an improved bi-directional gated recurrent units (BiGRU) hybrid network framework to predict the next-day stock price index. The proposed framework integrates (1) a de-noising module, the singular spectrum analysis (SSA) algorithm; (2) a predictive module, the BiGRU model; and (3) an optimization module, the grid search cross-validation (GSCV) algorithm.
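As a rough illustration of the long-memory test, the classical rescaled-range (R/S) estimate of the Hurst exponent can be computed as below. Note that the paper uses a modified R/S test (and a GPH test), so this plain version is only a sketch of the underlying idea; values of H above 0.5 indicate persistence, i.e. long memory.

```python
import numpy as np

def rescaled_range(series, n):
    """Average R/S statistic over non-overlapping windows of length n."""
    rs = []
    for i in range(0, len(series) - n + 1, n):
        window = series[i:i + n]
        dev = np.cumsum(window - window.mean())       # cumulative deviations
        spread, scale = dev.max() - dev.min(), window.std()
        if scale > 0:
            rs.append(spread / scale)
    return np.mean(rs)

def hurst(series, window_sizes=(8, 16, 32, 64)):
    """Classical R/S estimate of the Hurst exponent: the slope of
    log(R/S) against log(window size)."""
    series = np.asarray(series, dtype=float)
    log_n = np.log(window_sizes)
    log_rs = np.log([rescaled_range(series, n) for n in window_sizes])
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

White noise should give an estimate near 0.5, while a strongly trending series gives an estimate near 1.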

Findings

Three critical findings are long memory, fit effectiveness and model optimization. There is long memory (predictability) in the stock price index series. The proposed framework yields predictions of optimum fit. Data de-noising and parameter optimization can improve the model fit.

Practical implications

The empirical data are obtained from the financial data of listed companies in the Wind Financial Terminal. The model can accurately predict stock price index series, guide investors to make reasonable investment decisions, and provide a basis for establishing individual industry stock investment strategies.

Social implications

If the index series in the stock market exhibits long-memory characteristics, the policy implication is that fractal markets, even in the nonlinear case, allow for a corresponding distribution pattern in the value of portfolio assets. The risk of stock price volatility in various sectors has expanded due to the effects of the COVID-19 pandemic and the Russia-Ukraine conflict on the stock market. Predicting future trends by forecasting stock prices is critical for minimizing financial risk. The ability to mitigate the pandemic’s impact and stop losses promptly is relevant to market regulators, companies and other stakeholders.

Originality/value

Because long memory exists, the stock price index series can be predicted. However, price fluctuations are unstable and chaotic, and traditional mathematical and statistical methods cannot provide precise predictions. The network framework proposed in this paper has robust horizontal connections between units, strong memory capability and stronger generalization ability than traditional network structures. The authors demonstrate significant performance improvements of SSA-BiGRU-GSCV over comparison models on Chinese stocks.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 December 2023

Chen Xuemeng and Ma Guangqi

Abstract

Purpose

The manufacturing industry and the producer service industry have a high degree of industrial correlation, and their integration will cause changes in the complex industrial network topology, which is an important reason for the synergistic effect. This paper describes the topology of industrial systems using complex network theory; further, it discusses how to identify the criticality and importance of industrial nodes, and whether node characteristics cause synergistic effects.

Design/methodology/approach

Based on the input-output data of China in 2007, 2012 and 2017, this paper constructs the industrial complex network of 30 Chinese provinces and cities, and measures the regional network characteristics of the manufacturing industry. The fixed-effect panel regression model is adopted to test the influence of agglomeration degree and centrality on synergies, and its adjustment mechanism is explored.
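As a purely illustrative sketch (the toy flow table is invented, and the abstract does not specify which centrality measure the study uses), node importance in an input-output network can be proxied by weighted degree, i.e. the total flow an industry sends plus receives:

```python
# Toy inter-industry flow table: (source, destination) -> flow value.
flows = {
    ("mining", "steel"): 4.0,
    ("steel", "autos"): 6.0,
    ("services", "steel"): 1.0,
    ("services", "autos"): 2.0,
}

def weighted_degree(flows):
    """Total in-flow plus out-flow for each industry node."""
    degree = {}
    for (src, dst), w in flows.items():
        degree[src] = degree.get(src, 0.0) + w
        degree[dst] = degree.get(dst, 0.0) + w
    return degree

centrality = weighted_degree(flows)
print(max(centrality, key=centrality.get))   # steel
```

In a real application the flows would come from the provincial input-output tables, and richer measures (betweenness, eigenvector centrality) could replace weighted degree.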

Findings

The degree of network agglomeration in the manufacturing industry exerts a negative impact on the synergistic effect, while network centrality exerts a significant promoting effect. The results of the adjustment mechanism test show that enhancing the autonomous controllability of the regional manufacturing industrial chain can effectively reduce the effect of network characteristics on the synergistic effect.

Research limitations/implications

Based on input-output technology, this paper constructs a complex industrial network model; however, only basic flow data are used. Considerable in-depth and detailed research on the economic and technological connections within the industry should be conducted in the future. The selection of the evaluation index for the importance of industrial nodes also needs further consideration. For historical reasons, it is also difficult to obtain and process data for quantitative analysis; therefore, further attempts are needed regarding both data sources and the expression of evaluation indicators.

Practical implications

In a practical sense, this has certain reference value for the formulation of manufacturing industrial policies, the optimization of regional industrial layout and the improvement of the industrial development level. It is necessary to formulate targeted and specialized industrial development strategies according to the characteristics of the manufacturing industry, to appropriately regulate the autonomous controllability of the industrial chain and to avoid letting regional resources limit industrial development. Industry competition and market congestion need to be reduced, industry exchanges outside the region encouraged, the industrial layout optimized and the construction of a modern industrial system accelerated.

Social implications

The above research results hold certain reference importance for policy formulation related to the manufacturing industry, regional industrial layout optimization and industrial development level improvement. Targeted specialized industrial development strategies need to be formulated according to the characteristics of the manufacturing industry; the autonomous controllability of the industrial chain needs to be appropriately regulated; limitation of regional resources needs to be avoided as this restricts industrial development; and industry competition and market congestion need to be reduced. Agglomeration of production factors and optimization of resource allocation is an important part of a beneficial regional economic development strategy, and it is also an inevitable choice for industrialization to develop to a certain stage under the condition of a market economy. In alignment with the research conclusions, effective suggestions can be put forward for the current major industrial policies. In the process of promoting the development of the manufacturing industry, it is necessary for regional governments to carry out unified planning and guidance on the spatial layout of each manufacturing subsector. Regional governments need to effectively allocate inter-industry resources, better share economies of scale, constantly enhance the competitive advantages and competitiveness of development zones and new districts and promote the coordinated agglomeration and development of related industries with input industries. Industrial exchanges outside the region should be encouraged, the industrial layout should be optimized and the construction of a modern industrial system should be accelerated.

Originality/value

Complex network theory is introduced to study the industrial synergy effect. A complex industrial network of China's 30 regions is built and key network nodes are measured. Based on the “industrial node – industrial chain – industrial complex network” dimensions, the research path of industrial complex networks is improved.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X
