Search results
1 – 10 of 236
Muhammad Zahir Khan and Muhammad Farid Khan
Abstract
Purpose
A significant number of studies have been conducted to analyze and understand the relationship between gas emissions and global temperature using conventional statistical approaches. However, these techniques follow assumptions of probabilistic modeling, where results can be associated with large errors. Furthermore, such traditional techniques cannot be applied to imprecise data. The purpose of this paper is to avoid strict assumptions when studying the complex relationships between variables by using three innovative, up-to-date statistical modeling tools: adaptive neuro-fuzzy inference systems (ANFIS), artificial neural networks (ANNs) and fuzzy time series models.
Design/methodology/approach
These three approaches enabled us to effectively represent the relationship between global carbon dioxide (CO2) emissions from the energy sector (oil, gas and coal) and the average rise in global temperature, using data covering the period 1900-2012. Investigations were conducted into the predictive power and performance of different fuzzy techniques against conventional methods and among the fuzzy techniques themselves.
Findings
A performance comparison of the ANFIS model against conventional techniques showed that the root mean square errors (RMSE) of the ANFIS and conventional techniques were 0.1157 and 0.1915, respectively. On the other hand, the correlation coefficients of the ANN and the conventional technique were computed to be 0.93 and 0.69, respectively. Furthermore, the fuzzy-based time series analysis of CO2 emissions and average global temperature using three fuzzy time series modeling techniques (Singh, Abbasov–Mamedova and NFTS) showed that the RMSE of the fuzzy and conventional time series models were 110.51 and 1237.10, respectively.
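As an illustration of how such RMSE comparisons are computed, here is a generic sketch (not the authors' code; the fitted values below are hypothetical):

```python
import math

def rmse(actual, predicted):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical fitted values: the model with the lower RMSE tracks the
# observations more closely, as with the reported ANFIS (0.1157) versus
# conventional (0.1915) scores.
obs = [0.10, 0.20, 0.30, 0.40]
fit_a = [0.12, 0.19, 0.31, 0.38]  # tighter fit
fit_b = [0.05, 0.30, 0.20, 0.50]  # looser fit
print(rmse(obs, fit_a) < rmse(obs, fit_b))  # True
```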
Social implications
The paper provides greater awareness of the application of fuzzy techniques in CO2 emission studies.
Originality/value
These techniques can be extended to other models to assess the impact of CO2 emission from other sectors.
Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani
Abstract
Purpose
The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimation, predicting the contract cost precisely at the pre-tendering phase and avoiding future issues that arise during the construction phase.
Design/methodology/approach
This paper integrated artificial neural network (ANN), deep neural network (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three types of contracts (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
Findings
The main finding is that the ANN provides high accuracy, with MAPE, MSSE and RMSSE values of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was high, with an RMSSE of 0.15 on average.
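The metrics above can be sketched as follows. This is a generic illustration, not the authors' code; the abstract does not define MSSE and RMSSE exactly, so they are assumed here to be the mean of squared errors and its square root, and the R ratios are hypothetical:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def msse(actual, predicted):
    """Assumed definition: mean of squared errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmsse(actual, predicted):
    """Square root of the MSSE above."""
    return math.sqrt(msse(actual, predicted))

# Hypothetical ratios R (low bid / owner's estimate) and model outputs
r_true = [0.95, 1.02, 0.88, 1.10]
r_pred = [0.97, 1.00, 0.90, 1.08]
print(mape(r_true, r_pred), rmsse(r_true, r_pred))
```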
Practical implications
The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and decrease the difference between the owner's estimate and the lowest submitted offer for better decision-making.
Originality/value
This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecasting the difference between a low bid and an OEC at the pre-tendering phase.
Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista
Abstract
Purpose
This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client's expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though these results determine the company's bid quality and are contractually binding.
Design/methodology/approach
This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
Findings
This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.
Originality/value
Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.
Hussein Y.H. Alnajjar and Osman Üçüncü
Abstract
Purpose
Artificial intelligence (AI) models are demonstrating day by day that they can find long-term solutions to improve wastewater treatment efficiency. Artificial neural networks (ANNs) are one of the most important of these models, and they are increasingly being used to forecast water resource variables. The goal of this study was to create an ANN model to estimate the removal efficiency of biological oxygen demand (BOD), total nitrogen (TN), total phosphorus (TP) and total suspended solids (TSS) at the effluent of various primary and secondary treatment methods in a wastewater treatment plant (WWTP).
Design/methodology/approach
The MATLAB App Designer model was used to generate the data set. Various combinations of wastewater quality variables, such as temperature (T), TN, TP and hydraulic retention time (HRT), are used as inputs into the ANN to assess the degree of effect of each of these variables on BOD, TN, TP and TSS removal efficiency. Two of the models reflect two different types of primary treatment, while the other nine models represent different types of subsequent treatment. The ANN model's findings are compared to the MATLAB App Designer model. For evaluating model performance, mean square error (MSE) and coefficient of determination statistics (R2) are utilized as comparative metrics.
Findings
For both training and testing, the R values for the ANN models were greater than 0.99. Based on the comparisons, it was discovered that the ANN model can be used to estimate the removal efficiency of BOD, TN, TP and TSS in a WWTP, and that the ANN model produces very similar and satisfying results to the App Designer model. The R value (correlation coefficient) of 0.9909 and the MSE of 5.962 indicate that the model is accurate. Because of the many benefits of the ANN models used in this study, the approach has considerable potential as a general modeling tool for a range of other complicated process systems that are difficult to solve using conventional modeling techniques.
Originality/value
The objective of this study was to develop an ANN model that could be used to estimate the removal efficiency of pollutants such as BOD, TN, TP and TSS at the effluent of various primary and secondary treatment methods in a WWTP. In the future, the ANN could be used to design a new WWTP and forecast the removal efficiency of pollutants.
Xiaojie Xu and Yun Zhang
Abstract
Purpose
For policymakers and participants of financial markets, predictions of trading volumes of financial indices are important issues. This study aims to address such a prediction problem based on the CSI300 nearby futures, using high-frequency data recorded each minute from the launch date of the futures to roughly two years after all constituent stocks of the futures became shortable, a time period witnessing significantly increased trading activities.
Design/methodology/approach
To answer the following questions, this study adopts a neural network to model the irregular trading volume series of the CSI300 nearby futures: can lags of the trading volume series be used to make predictions; if so, how far ahead can the predictions go and how accurate can they be; can predictive information from trading volumes of the CSI300 spot and first distant futures improve prediction accuracy, and by what magnitude; how sophisticated is the model; and how robust are its predictions?
Findings
The results of this study show that a simple neural network model could be constructed with 10 hidden neurons to robustly predict the trading volume of the CSI300 nearby futures using 1–20 min ahead trading volume data. The model leads to a root mean square error of about 955 contracts. Utilizing additional predictive information from trading volumes of the CSI300 spot and first distant futures could further benefit prediction accuracy, and the magnitude of improvement is about 1–2%. This benefit is particularly significant when the trading volume of the CSI300 nearby futures is close to zero. Another benefit, at the cost of the model becoming slightly more sophisticated with more hidden neurons, is that predictions could be generated through 1–30 min ahead trading volume data.
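A minimal sketch of the lagged-feature setup such predictions rely on (a generic illustration with hypothetical volumes; the actual network architecture and data are not reproduced here):

```python
def make_lagged(series, n_lags):
    """Build (features, target) pairs: each target is predicted
    from the n_lags immediately preceding observations."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return X, y

# Hypothetical per-minute trading volumes (in contracts)
vols = [100, 120, 95, 130, 110, 140]
X, y = make_lagged(vols, 3)
print(X[0], y[0])  # [100, 120, 95] 130
```

Each row of X would then feed the network's input layer, with the corresponding y as the training target.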
Originality/value
The results of this study could be used for multiple purposes, including designing financial index trading systems and platforms, monitoring systematic financial risks and building financial index price forecasting models.
Luca Rampini and Fulvio Re Cecconi
Abstract
Purpose
The assessment of the Real Estate (RE) prices depends on multiple factors that traditional evaluation methods often struggle to fully understand. Housing prices, in particular, are the foundations for a better knowledge of the Built Environment and its characteristics. Recently, Machine Learning (ML) techniques, which are a subset of Artificial Intelligence, are gaining momentum in solving complex, non-linear problems like house price forecasting. Hence, this study deployed three popular ML techniques to predict dwelling prices in two cities in Italy.
Design/methodology/approach
An extensive dataset about house prices is collected through an API protocol in two cities in North Italy, namely Brescia and Varese. This data is used to train and test the three most popular ML models, i.e. ElasticNet, XGBoost and Artificial Neural Network, in order to predict house prices using six different features.
Findings
The models' performance was evaluated using the Mean Absolute Error (MAE) score. The results showed that the artificial neural network performed better than the others in predicting house prices, with an MAE 5% lower than that of the second-best model (XGBoost).
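The MAE comparison can be sketched as follows (a generic illustration, not the authors' code; the prices and predictions are hypothetical stand-ins):

```python
def mae(actual, predicted):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical dwelling prices (thousands of euros) and two models' outputs
prices = [200, 250, 310, 180]
ann_pred = [205, 245, 300, 185]  # stand-in for the ANN
xgb_pred = [210, 240, 295, 170]  # stand-in for XGBoost
print(mae(prices, ann_pred) < mae(prices, xgb_pred))  # True
```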
Research limitations/implications
All the models had an accuracy drop in forecasting the most expensive cases, probably due to a lack of data.
Practical implications
The accessibility and ease of use of the proposed model will allow future users to predict house prices with different datasets. Alternatively, further research may implement a different model using neural networks, knowing that they work better for this kind of task.
Originality/value
To date, this is the first comparison of the three most popular ML models that are usually employed when predicting house prices.
Qing Zhu, Yiqiong Wu, Yuze Li, Jing Han and Xiaoyang Zhou
Abstract
Purpose
Library intelligence institutions, which are a kind of traditional knowledge management organization, are at the frontline of the big data revolution, in which the use of unstructured data has become a modern knowledge management resource. The paper aims to discuss this issue.
Design/methodology/approach
This research combined theme logic structure (TLS), artificial neural network (ANN), and ensemble empirical mode decomposition (EEMD) to transform unstructured data into a signal-wave to examine the research characteristics.
Findings
Research characteristics have a vital effect on knowledge management activities and management behavior through cycles of concentration and relaxation, ultimately forming a quasi-periodic evolution. Knowledge management should actively steer the evolution of research characteristics, because the natural development cycle of six to nine years was found to be difficult to plot.
Originality/value
Periodic evaluation using TLS-ANN-EEMD gives insights into journal evolution and allows journal managers and contributors to follow the intrinsic mode functions and predict the journal research characteristics tendencies.
Paravee Maneejuk, Binxiong Zou and Woraphon Yamaka
Abstract
Purpose
The primary objective of this study is to investigate whether the inclusion of convertible bond prices as important inputs into artificial neural networks can lead to improved accuracy in predicting Chinese stock prices. This novel approach aims to uncover the latent potential inherent in convertible bond dynamics, ultimately resulting in enhanced precision when forecasting stock prices.
Design/methodology/approach
The authors employed two machine learning models, namely the backpropagation neural network (BPNN) model and the extreme learning machine neural networks (ELMNN) model, on empirical Chinese financial time series data.
Findings
The results showed that the convertible bond price had a strong predictive power for low-market-value stocks but not for high-market-value stocks. The BPNN algorithm performed better than the ELMNN algorithm in predicting stock prices using the convertible bond price as an input indicator for low-market-value stocks. In contrast, ELMNN showed a significant decrease in prediction accuracy when the convertible bond price was added.
Originality/value
This study represents the initial endeavor to integrate convertible bond data into both the BPNN model and the ELMNN model for the purpose of predicting Chinese stock prices.
Isuru Udayangani Hewapathirana
Abstract
Purpose
This study explores the pioneering approach of utilising machine learning (ML) models and integrating social media data for predicting tourist arrivals in Sri Lanka.
Design/methodology/approach
Two sets of experiments are performed in this research. First, the predictive accuracy of three ML models, support vector regression (SVR), random forest (RF) and artificial neural network (ANN), is compared against the seasonal autoregressive integrated moving average (SARIMA) model using historical tourist arrivals as features. Subsequently, the impact of incorporating social media data from TripAdvisor and Google Trends as additional features is investigated.
Findings
The findings reveal that the ML models generally outperform the SARIMA model, particularly from 2019 to 2021, when several unexpected events occurred in Sri Lanka. When integrating social media data, the RF model performs significantly better during most years, whereas the SVR model does not exhibit significant improvement. Although adding social media data to the ANN model does not yield superior forecasts, it exhibits proficiency in capturing data trends.
Practical implications
The findings offer substantial implications for the industry's growth and resilience, allowing stakeholders to make accurate data-driven decisions to navigate the unpredictable dynamics of Sri Lanka's tourism sector.
Originality/value
This study presents the first exploration of ML models and the integration of social media data for forecasting Sri Lankan tourist arrivals, contributing to the advancement of research in this domain.
Zheng Xu, Yihai Fang, Nan Zheng and Hai L. Vu
Abstract
Purpose
With the aid of naturalistic simulations, this paper aims to investigate human behavior during manual and autonomous driving modes in complex scenarios.
Design/methodology/approach
The simulation environment is established by integrating a virtual reality interface with a micro-simulation model. In the simulation, the vehicle autonomy is developed by a framework that integrates artificial neural networks and genetic algorithms. Human-subject experiments are carried out, and participants are asked to virtually sit in the developed autonomous vehicle (AV), which allows for both human driving and autopilot functions within a mixed traffic environment.
Findings
Not surprisingly, an inconsistency is identified between the two driving modes, in which the AV's driving maneuvers cause cognitive bias and make participants feel unsafe. Even though the AV ended up in an accident in only a small portion of cases during the testing stage, participants still frequently intervened during AV operation. Similarly, even though the statistical results reflect that the AV drives under perceived high-risk conditions, an actual crash rarely happens. This suggests that classic safety surrogate measures, e.g. time-to-collision, may require adjustment for mixed traffic flow.
Research limitations/implications
Understanding the behavior of AVs and the behavioral differences between AVs and human drivers is important, and the developed platform is only a first effort to identify the critical scenarios where AVs might fail to react.
Practical implications
This paper attempts to fill the existing research gap in preparing close-to-reality tools for AV experience and further understanding human behavior during high-level autonomous driving.
Social implications
This work aims to systematically analyze the inconsistency in driving patterns between manual and autopilot modes in various driving scenarios (i.e. multiple scenes and various traffic conditions) to facilitate user acceptance of AV technology.
Originality/value
This paper offers a close-to-reality tool for AV experience and AV-related behavioral study; a systematic analysis of the inconsistency in driving patterns between manual and autonomous driving; and a foundation for identifying the critical scenarios where AVs might fail to react.