Search results

1 – 10 of 116
Open Access
Article
Publication date: 30 August 2021

Kailun Feng, Shiwei Chen, Weizhuo Lu, Shuo Wang, Bin Yang, Chengshuang Sun and Yaowu Wang

Abstract

Purpose

Simulation-based optimisation (SO) is a popular optimisation approach for building and civil engineering construction planning. However, in the SO framework the simulation is invoked continuously along the optimisation trajectory, which raises computational loads to levels unrealistic for timely construction decisions. Modifying the optimisation settings, for example by reducing search ability, is a popular way to address this challenge, but it also reduces the quality of the obtained optimal decisions, termed the optimisation quality. Therefore, this study aims to develop an optimisation approach for construction planning that reduces the high computational loads of SO while simultaneously providing reliable optimisation quality.

Design/methodology/approach

This study modifies the SO framework by establishing an embedded connection between the simulation and optimisation technologies. The approach reduces computational loads while preserving the optimisation quality of the conventional SO approach: embedded ensemble learning algorithms accurately learn the knowledge contained in construction simulations and automatically provide efficient, reliable fitness evaluations for the optimisation iterations.
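The abstract does not spell out the embedded learner, but the general surrogate idea it describes, fitting a cheap ensemble model to a handful of simulation runs and then optimising against that model instead of the simulator, can be sketched as follows. Everything here (the toy objective, the bagged polynomial ensemble, the sample sizes) is an illustrative assumption, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for a costly construction simulation (hypothetical objective).
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

# 1. Run the simulator on a small sample of candidate plans.
X = rng.uniform(0, 1, 40)
y = expensive_simulation(X)

# 2. Fit a small bagged ensemble of polynomial surrogates to the samples.
models = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    models.append(np.polyfit(X[idx], y[idx], 4))
surrogate = lambda x: np.mean([np.polyval(c, x) for c in models], axis=0)

# 3. Optimise against the cheap surrogate instead of the simulator.
cand = np.linspace(0, 1, 1001)
best = cand[np.argmin(surrogate(cand))]
```

The simulator is called only 40 times here; every fitness evaluation inside the search loop hits the surrogate, which is the source of the computational saving the paper reports.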

Findings

A large-scale project application shows that the proposed approach was able to reduce computational loads of SO by approximately 90%. Meanwhile, the proposed approach outperformed SO in terms of optimisation quality when the optimisation has limited searching ability.

Originality/value

The core contribution of this research is to provide an innovative method that improves efficiency and ensures effectiveness, simultaneously, of the well-known SO approach in construction applications. The proposed method is an alternative approach to SO that can run on standard computing platforms and support nearly real-time construction on-site decision-making.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 1
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 4 May 2021

Loris Nanni and Sheryl Brahnam

Abstract

Purpose

Automatic DNA-binding protein (DNA-BP) classification is now an essential proteomic technology. Unfortunately, many systems reported in the literature are tested on only one or two datasets/tasks. The purpose of this study is to create an optimal and universal system for DNA-BP classification, one that performs competitively across several DNA-BP classification tasks.

Design/methodology/approach

Efficient DNA-BP classifier systems require the discovery of powerful protein representations and feature extraction methods. Experiments were performed that combined and compared descriptors extracted from state-of-the-art matrix/image protein representations. Separate support vector machines (SVMs) were trained on these descriptors and evaluated. Convolutional neural networks with different parameter settings were fine-tuned on two matrix representations of proteins. Their decisions were fused with the SVMs using the weighted sum rule and evaluated to experimentally derive the most powerful general-purpose DNA-BP classifier system.
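The weighted sum rule itself is simple: each classifier emits a score per class, the scores are combined with fixed weights, and the fused argmax gives the label. A minimal sketch (the scores and weights below are made-up illustrations, not values from the paper):

```python
import numpy as np

# Hypothetical per-class scores from two classifiers over 4 samples
# (e.g. an SVM's calibrated outputs and a CNN's softmax outputs).
svm_scores = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
cnn_scores = np.array([[0.8, 0.2], [0.3, 0.7], [0.1, 0.9], [0.6, 0.4]])

def weighted_sum_rule(score_sets, weights):
    """Fuse per-class scores from several classifiers by a weighted sum."""
    fused = sum(w * s for w, s in zip(weights, score_sets))
    return fused.argmax(axis=1)          # predicted class per sample

labels = weighted_sum_rule([svm_scores, cnn_scores], weights=[0.5, 0.5])
```

Equal weights give the plain sum rule; tuning the weights on validation data is the usual way such ensembles are derived experimentally.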

Findings

The best ensemble proposed here produced comparable, if not superior, classification results on a broad and fair comparison with the literature across four different datasets representing a variety of DNA-BP classification tasks, thereby demonstrating both the power and generalizability of the proposed system.

Originality/value

Most DNA-BP methods proposed in the literature are only validated on one (rarely two) datasets/tasks. In this work, the authors report the performance of their general-purpose DNA-BP system on four datasets representing different DNA-BP classification tasks. The excellent results of the proposed best classifier system demonstrate the power of the proposed approach. These results can now be used for baseline comparisons by other researchers in the field.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 13 August 2020

Mariam AlKandari and Imtiaz Ahmad

Abstract

Solar power forecasting will have a significant impact on the future of large-scale renewable energy plants. Predicting photovoltaic power generation depends heavily on climate conditions, which fluctuate over time. In this research, we propose a hybrid model that combines machine-learning methods with the Theta statistical method for more accurate prediction of future solar power generation from renewable energy plants. The machine-learning models include long short-term memory (LSTM), gated recurrent unit (GRU), AutoEncoder LSTM (Auto-LSTM) and a newly proposed Auto-GRU. To enhance the accuracy of the proposed Machine learning and Statistical Hybrid Model (MLSHM), we employ two diversity techniques: structural diversity and data diversity. To combine the predictions of the ensemble members in the proposed MLSHM, we exploit four combining methods: simple averaging, weighted averaging using a linear approach, weighted averaging using a non-linear approach, and combination through variance using the inverse approach. The proposed MLSHM scheme was validated on two real time-series datasets, namely Shagaya in Kuwait and Cocoa in the USA. The experiments show that the proposed MLSHM, using all the combining methods, achieved higher accuracy than the predictions of the traditional individual models. Results demonstrate that a hybrid model combining machine-learning methods with a statistical method outperformed a hybrid model that combines only machine-learning models without a statistical method.
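Two of the combining methods named in the abstract, simple averaging and combination through variance using the inverse approach, can be sketched directly. The forecasts and truth values below are made-up numbers standing in for the ensemble members' outputs:

```python
import numpy as np

# Hypothetical forecasts of the same horizon from three ensemble members
# (e.g. LSTM, Auto-GRU and the Theta method), plus the true values.
preds = np.array([[10.2, 11.0,  9.6],
                  [ 9.8, 10.6,  9.9],
                  [10.9, 11.4, 10.1]])   # shape: (members, timesteps)
truth = np.array([10.0, 11.0, 10.0])

# Simple averaging: every member gets equal weight.
simple = preds.mean(axis=0)

# Combination through variance (inverse approach): weight each member by the
# inverse of its past error variance, so more stable members count for more.
err_var = ((preds - truth) ** 2).mean(axis=1)
w = (1 / err_var) / (1 / err_var).sum()
inv_var = (w[:, None] * preds).sum(axis=0)
```

In practice the error variances would be estimated on a held-out validation window rather than on the evaluation data itself.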

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 4 April 2023

Xiaojie Xu and Yun Zhang

Abstract

Purpose

Forecasts of commodity prices are of vital importance to market participants and policy makers, and those of corn are no exception, considering its strategic importance. In the present study, the authors assess the forecasting problem for the weekly wholesale price index of yellow corn in China over the January 1, 2010–January 10, 2020 period.

Design/methodology/approach

The authors employ a nonlinear auto-regressive neural network as the forecasting tool and evaluate the forecast performance of different model settings across training algorithms, delays, hidden neurons and data-splitting ratios in arriving at the final model.
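The core of a nonlinear auto-regressive setup is the delayed-input design: each target value is predicted from its own previous d lags. A toy sketch on a synthetic series, with an ordinary least-squares fit standing in for the neural network (the series, delay count and split ratio are all assumptions for illustration):

```python
import numpy as np

# Toy weekly price-like series (synthetic, not the paper's data).
series = np.sin(np.arange(120) / 5.0) + 2.0

def lag_matrix(y, delays):
    """Build NAR-style inputs: row t holds y[t-1..t-delays], target is y[t]."""
    X = np.column_stack([y[delays - k: len(y) - k] for k in range(1, delays + 1)])
    return X, y[delays:]

X, y = lag_matrix(series, delays=4)
split = int(0.8 * len(y))                       # 80/20 train/test split

# A linear AR fit stands in here for the nonlinear neural network.
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

# Relative RMSE, the error measure reported in the Findings section.
rrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / y[split:].mean()
```

Swapping the least-squares step for a small feed-forward network trained on the same lag matrix gives the nonlinear auto-regressive model the study describes.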

Findings

The final model is relatively simple and leads to accurate and stable results. Particularly, it generates relative root mean square errors of 1.05%, 1.08% and 1.03% for training, validation and testing, respectively.

Originality/value

Through the analysis, the study shows the usefulness of the neural network technique for commodity price forecasts. The results might serve as technical forecasts on a standalone basis or be combined with other fundamental forecasts for perspectives on price trends and corresponding policy analysis.

Details

EconomiA, vol. 24 no. 1
Type: Research Article
ISSN: 1517-7580

Open Access
Article
Publication date: 3 August 2020

Rajashree Dash, Rasmita Rautray and Rasmita Dash

Abstract

Over the last few decades, artificial neural networks have attracted a large number of researchers for solving diverse problem domains. Owing to distinguishing features such as generalization ability, robustness and a strong capacity to tackle nonlinear problems, they have become popular in financial time-series modeling and prediction. In this paper, a Pi-Sigma Neural Network is designed for forecasting future currency exchange rates over different prediction horizons. The unknown parameters of the network are estimated by a hybrid learning algorithm termed Shuffled Differential Evolution (SDE). The main motivation of this study is to integrate the partitioning and random shuffling scheme of the Shuffled Frog Leaping algorithm with the evolutionary steps of a Differential Evolution technique to obtain an optimal solution with an accelerated convergence rate. The efficiency of the proposed predictor model is demonstrated by predicting the exchange rate of the US dollar against the Swiss Franc (CHF) and the Japanese Yen (JPY) over the same period of time.
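A Pi-Sigma network's distinguishing structure is easy to show: several linear "sigma" units feed a single multiplicative "pi" unit, whose product passes through a squashing activation. A forward-pass sketch with made-up weights and inputs (the SDE training loop that would tune W and b is not shown):

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-Sigma forward pass: the output is the *product* of several
    linear (sigma) units, squashed by a sigmoid."""
    sums = W @ x + b          # one weighted sum per sigma unit
    prod = np.prod(sums)      # the pi (product) unit
    return 1.0 / (1.0 + np.exp(-prod))

x = np.array([0.5, -0.2])                 # hypothetical lagged exchange rates
W = np.array([[0.4, 0.1], [-0.3, 0.2]])   # weights of two sigma units
b = np.array([0.1, 0.6])
out = pi_sigma_forward(x, W, b)
```

Because only the sigma-layer weights are trainable while the product is fixed, the network keeps a small parameter count, which is what makes population-based optimisers such as SDE a practical training choice.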

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 13 November 2018

Bo Liu, Libin Shen, Huanling You, Yan Dong, Jianqiang Li and Yong Li

Abstract

Purpose

The influence of road surface temperature (RST) on vehicles is becoming more and more obvious, and accurate prediction of RST is therefore highly meaningful. At present, however, neither physical methods nor statistical learning methods achieve satisfactory RST prediction accuracy. To find an effective prediction method, this paper selects five representative algorithms to predict road surface temperature separately.

Design/methodology/approach

Multiple linear regression, least absolute shrinkage and selection operator (LASSO), random forest, gradient boosting regression tree (GBRT) and a neural network are chosen as the representative predictors.
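The kind of comparison the paper runs, a linear baseline against a boosted-tree ensemble, can be sketched compactly. The data here are synthetic (one weather-like feature driving a nonlinear temperature response), and the GBRT is a minimal from-scratch version boosting on regression stumps, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for RST data: a nonlinear response to one feature.
X = rng.uniform(-3, 3, 200)
y = np.sin(X) * 3 + 0.1 * rng.normal(size=200)

def fit_stump(x, r):
    """Best single-split regression stump for residuals r."""
    best = (np.inf, 0.0, r.mean(), r.mean())
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1], best[2], best[3]

def gbrt_fit_predict(x, y, n_trees=100, lr=0.1):
    """Gradient boosting: each stump fits the current residuals."""
    pred = np.full_like(y, y.mean())
    for _ in range(n_trees):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

linear_pred = np.polyval(np.polyfit(X, y, 1), X)
gbrt_pred = gbrt_fit_predict(X, y)
mse_linear = ((linear_pred - y) ** 2).mean()
mse_gbrt = ((gbrt_pred - y) ** 2).mean()
```

On this nonlinear toy problem the boosted stumps beat the linear fit by a wide margin, mirroring the paper's finding; a fair comparison would of course use held-out data rather than the training error shown here.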

Findings

The experimental results show that, for the temperature dataset of this experiment, the prediction performance of GBRT, an ensemble algorithm, is the best among the five algorithms compared.

Originality/value

This paper compares different kinds of machine learning algorithms, examines the road surface temperature data from different angles and identifies the most suitable prediction method.

Details

International Journal of Crowd Science, vol. 2 no. 3
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 16 August 2021

Bo Qiu and Wei Fan

Abstract

Purpose

Metropolitan areas suffer from frequent road traffic congestion, not only during peak hours but also during off-peak periods. Different machine learning methods have been used for travel time prediction; in practice, however, such methods face the problem of overfitting. Tree-based ensembles have been applied in various prediction fields and usually achieve high prediction accuracy by aggregating and averaging individual decision trees. These approaches not only deliver better prediction results but also offer a good bias-variance trade-off that helps avoid overfitting. Nevertheless, the application of tree-based ensemble algorithms in traffic prediction remains limited. This study aims to improve the accuracy and interpretability of travel time models by using random forest (RF) to analyze and model travel times on freeways.

Design/methodology/approach

Because traffic conditions often change greatly, prediction results are often unsatisfactory. To improve the accuracy of short-term travel time prediction in the freeway network, a practically feasible and computationally efficient RF prediction method for real-world freeways using probe traffic data was developed. In addition, the variables' relative importance was ranked, which provides an investigation platform for a better understanding of how different contributing factors affect travel time on freeways.

Findings

The parameters of the RF model were estimated using the training sample set. After the parameter-tuning process was completed, the proposed RF model was developed. The features' relative importance showed that the variables travel time 15 min before and time of day (TOD) contribute the most to the predicted travel time. The model's performance was also evaluated against the extreme gradient boosting method, and the results indicated that the RF always produces more accurate travel time predictions.
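Importance rankings like the one reported here can be illustrated with model-agnostic permutation importance (the paper uses RF's built-in importance; this swap, and all the data below, are assumptions for illustration). Shuffling a feature and measuring how much the error grows shows how much the model relies on it:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic travel-time-like data: strongly driven by feature 0
# (think "travel time 15 min before"), weakly by feature 1.
X = rng.normal(size=(300, 2))
y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=300)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # simple stand-in model
predict = lambda M: M @ coef
base_mse = ((predict(X) - y) ** 2).mean()

# Permutation importance: shuffle one column, see how much the error grows.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(((predict(Xp) - y) ** 2).mean() - base_mse)
```

The dominant feature produces a much larger error increase when shuffled, which is exactly the ranking signal the study uses to interpret its RF model.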

Originality/value

This research developed an RF method to predict freeway travel time using probe vehicle-based traffic data and weather data. Detailed information about the input variables and data pre-processing is presented. To measure the effectiveness of the proposed travel time prediction algorithm, mean absolute percentage errors were computed for different observation segments combined with prediction horizons ranging from 15 to 60 min.
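The mean absolute percentage error used for evaluation is a one-liner; the travel-time values below are made up:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# e.g. observed vs predicted travel times in minutes (illustrative values)
m = mape([10.0, 20.0, 40.0], [11.0, 19.0, 42.0])
```

MAPE is scale-free, which is why it suits comparisons across observation segments of different lengths, though it is undefined when an actual value is zero.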

Details

Smart and Resilient Transportation, vol. 3 no. 2
Type: Research Article
ISSN: 2632-0487

Open Access
Article
Publication date: 30 June 2021

Mohammad Abdullah

Abstract

Purpose

The financial health of a corporation is a great concern for investors and decision-makers at every level. For many years, financial solvency prediction has been a significant issue throughout academia, particularly in finance. This need leads this study to examine whether machine learning can be applied to financial solvency prediction.

Design/methodology/approach

This study analyzed 244 Dhaka Stock Exchange public-listed companies over the 2015–2019 period, and two subsets of the data were developed as training and testing datasets. For machine learning model building, samples are classified as secure, healthy or insolvent by the Altman Z-score. The R statistical software is used to build predictive models with five classifiers, and all model performances are measured with different performance metrics, such as logarithmic loss (logLoss), area under the curve (AUC), precision-recall AUC (prAUC), accuracy, kappa, sensitivity and specificity.
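The labeling step rests on the classic Altman (1968) Z-score for publicly traded firms. The coefficients below are the standard ones; the mapping of the conventional safe/grey/distress zones onto the paper's secure/healthy/insolvent labels, and the sample figures, are assumptions for illustration:

```python
def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """Classic Altman (1968) Z-score: wc=working capital, re=retained
    earnings, ebit=EBIT, mve=market value of equity, ta=total assets,
    tl=total liabilities."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

def label(z):
    # Conventional zone thresholds; the zone-to-label mapping is assumed.
    if z > 2.99:
        return "secure"      # safe zone
    if z >= 1.81:
        return "healthy"     # grey zone
    return "insolvent"       # distress zone

strong = altman_z(wc=50, re=100, ebit=60, mve=500, sales=500, ta=500, tl=200)
weak = altman_z(wc=-20, re=10, ebit=5, mve=50, sales=200, ta=300, tl=250)
```

Once every firm-year is labeled this way, the three-class targets can be fed to the five classifiers the study compares.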

Findings

This study found that the artificial neural network classifier achieves an 88% accuracy and sensitivity rate, with an AUC of 96%. However, the ensemble classifier outperforms all other models when logLoss and the other metrics are considered.

Research limitations/implications

The main results of this study can be applied by financial institutions for credit scoring, credit rating, loan classification and similar tasks. Other companies can implement machine learning models in their enterprise resource planning software to trace their financial solvency.

Practical implications

Finally, a predictive application is developed through training a model with 1,200 observations and making it available for all rational and novice investors (Abdullah, 2020).

Originality/value

To the best of the author's knowledge, no prior machine learning study of financial solvency in Bangladesh has examined a comparably large dataset with all of these models.

Details

Journal of Asian Business and Economic Studies, vol. 28 no. 4
Type: Research Article
ISSN: 2515-964X

Open Access
Article
Publication date: 20 July 2020

E.N. Osegi

Abstract

In this paper, an emerging state-of-the-art machine intelligence technique called Hierarchical Temporal Memory (HTM) is applied to the task of short-term load forecasting (STLF). An HTM Spatial Pooler (HTM-SP) stage is used to continually form sparse distributed representations (SDRs) from univariate load time-series data, a temporal aggregator transforms the SDRs into a sequential bivariate representation space and an overlap classifier makes temporal classifications from the bivariate SDRs through time. The comparative performance of HTM on several daily electrical load time series, including the EUNITE competition dataset and the Polish power system dataset from 2002 to 2004, is presented. The robustness of HTM is further validated using hourly load data from three more recent electricity markets. The results obtained from experiments on the EUNITE and Polish datasets indicate that HTM performs better than the existing techniques reported in the literature. In general, the robustness test also shows that the error distribution of the proposed HTM technique is positively skewed for most of the years considered, with kurtosis values mostly lower than a base value of 3, indicating a reasonable level of outlier rejection.
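The SDRs and overlap scoring at the heart of this pipeline are easy to illustrate: an SDR is a long binary vector with few active bits, and the overlap between two SDRs is simply the count of active bits they share. A toy sketch (the vector size and sparsity below are arbitrary, far smaller than typical HTM settings):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bits, n_active = 64, 8          # toy sizes, not HTM defaults

def random_sdr():
    """A sparse distributed representation: few active bits out of many."""
    v = np.zeros(n_bits, dtype=int)
    v[rng.choice(n_bits, n_active, replace=False)] = 1
    return v

def overlap(a, b):
    """Overlap score used by HTM-style matching: count of shared active bits."""
    return int(np.sum(a & b))

a = random_sdr()
noisy = a.copy()
noisy[np.flatnonzero(a)[:2]] = 0   # corrupt the SDR by dropping 2 active bits
```

High overlap despite corruption is what gives SDR-based classifiers their noise tolerance: a degraded input still matches its stored pattern far better than an unrelated one.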

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 27 March 2018

Qing Zhu, Yiqiong Wu, Yuze Li, Jing Han and Xiaoyang Zhou

Abstract

Purpose

Library intelligence institutions, which are a kind of traditional knowledge management organization, are at the frontline of the big data revolution, in which the use of unstructured data has become a modern knowledge management resource. The paper aims to discuss this issue.

Design/methodology/approach

This research combined theme logic structure (TLS), artificial neural network (ANN), and ensemble empirical mode decomposition (EEMD) to transform unstructured data into a signal-wave to examine the research characteristics.

Findings

Research characteristics have a vital effect on knowledge management activities and management behavior through cycles of concentration and relaxation, ultimately forming a quasi-periodic evolution. Knowledge management should actively steer the evolution of research characteristics, because the natural development cycle of six to nine years was found to be difficult to plot.

Originality/value

Periodic evaluation using TLS-ANN-EEMD gives insights into journal evolution and allows journal managers and contributors to follow the intrinsic mode functions and predict the journal research characteristics tendencies.

Details

Library Hi Tech, vol. 36 no. 3
Type: Research Article
ISSN: 0737-8831
