Search results

1 – 10 of 42
Article
Publication date: 17 February 2022

Prajakta Thakare and Ravi Sankar V.


Abstract

Purpose

Agriculture is the backbone of many national economies, contributing a substantial share of economic output worldwide. Precision agriculture is essential for evaluating crop conditions with the aim of determining the proper selection of pesticides. Conventional pest detection methods are unstable and provide limited prediction accuracy. This paper aims to propose an automatic pest detection module for the accurate detection of pests using a hybrid optimization-controlled deep learning model.

Design/methodology/approach

The paper proposes an advanced pest detection strategy based on deep learning, operating over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink, is partitioned into clusters. Each cluster comprises a cluster head (CH) and a number of nodes; the CH transfers data to the sink node of the WSN and is selected using the fractional artificial bee colony (FABC) algorithm. Routing is executed using the protruder optimization algorithm, which transfers image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collected image data forms the input database that is processed to identify the type of pest in the agricultural field. The image data is pre-processed to remove artifacts and then subjected to feature extraction, through which significant local directional pattern (LDP), local binary pattern (LBP), local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are fed to a deep convolutional neural network (CNN) to detect the type of pest. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which combines the characteristics of navigating search agents and swarming search agents.
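
One of the descriptors named above, the local binary pattern (LBP), can be illustrated with a minimal sketch. This is the generic 3x3 LBP, not the authors' implementation; the function name and neighbour ordering are illustrative choices.

```python
import numpy as np

def lbp_code(patch):
    """Compute the basic 3x3 local binary pattern code for one pixel.

    Each of the 8 neighbours is compared with the centre pixel;
    a neighbour >= centre contributes a 1-bit, read clockwise from
    the top-left corner, giving an 8-bit code in [0, 255].
    """
    centre = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if patch[r, c] >= centre:
            code |= 1 << bit
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # bits set where a neighbour >= the centre (6)
```

A full pipeline would compute this code at every pixel and histogram the codes per region to form the feature vector fed to the classifier.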

Findings

The analysis using the insect identification from habitus images database, based on performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis shows that the proposed classifier, using the FABC + protruder optimization-based data aggregation strategy, obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which is higher than that of existing methods.

Originality/value

The proposed MFGHO optimization-based deep CNN detects pests in crop fields so that proper, cost-effective pesticides can be selected and production increased. The proposed MFGHO algorithm integrates the characteristic features of navigating search agents and swarming search agents to facilitate optimal tuning of the hyperparameters of the deep CNN classifier for pest detection in crop fields.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 26 September 2023

Mohammed Ayoub Ledhem and Warda Moussaoui


Abstract

Purpose

This paper aims to apply several data mining techniques for predicting the daily precision improvement of Jakarta Islamic Index (JKII) prices based on big data of symmetric volatility in Indonesia’s Islamic stock market.

Design/methodology/approach

This research uses big data mining techniques to predict the daily precision improvement of JKII prices by applying AdaBoost, k-nearest neighbor, random forest and artificial neural networks. Big data with symmetric volatility serves as the input to the prediction model, whereas the closing prices of the JKII are used as the target outputs of the daily precision improvement. To choose the optimal prediction performance according to the criterion of the lowest prediction errors, this research uses four metrics: mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE) and R-squared.
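
The four evaluation metrics named above can be computed directly; this is a generic NumPy sketch, not the authors' code, and the toy prices are invented for illustration.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return MAE, MSE, RMSE and R-squared for a set of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, mse, rmse, r2

# Toy example: daily closing prices vs. model predictions.
mae, mse, rmse, r2 = regression_metrics([100, 102, 101, 105],
                                        [101, 101, 102, 104])
print(mae, mse, rmse, r2)
```

Selecting the "optimal" technique then amounts to ranking the candidate models by these scores on a held-out set.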

Findings

The experimental results determine that the optimal technique for predicting the daily precision improvement of the JKII prices in Indonesia’s Islamic stock market is the AdaBoost technique, which generates the optimal predicting performance with the lowest prediction errors, and provides the optimum knowledge from the big data of symmetric volatility in Indonesia’s Islamic stock market. In addition, the random forest technique is also considered another robust technique in predicting the daily precision improvement of the JKII prices as it delivers closer values to the optimal performance of the AdaBoost technique.

Practical implications

This research fills a gap in the literature, namely the absence of big data mining techniques in the prediction processes of Islamic stock markets, by delivering new operational techniques for predicting daily stock precision improvement. It also helps investors manage optimal portfolios and decrease the risk of trading in global Islamic stock markets using big data mining of symmetric volatility.

Originality/value

This research is a pioneer in using big data mining of symmetric volatility in the prediction of an Islamic stock market index.

Details

Journal of Modelling in Management, vol. 19 no. 3
Type: Research Article
ISSN: 1746-5664

Keywords

Article
Publication date: 19 January 2024

Ping Huang, Haitao Ding, Hong Chen, Jianwei Zhang and Zhenjia Sun


Abstract

Purpose

The growing availability of naturalistic driving datasets (NDDs) presents a valuable opportunity to develop various models for autonomous driving. However, while current NDDs include data on vehicles with and without intended driving behavior changes, they do not explicitly capture data on vehicles that intend to change their driving behavior but do not execute the change because of safety, efficiency or other factors. Such data are essential for autonomous driving decision-making. This study aims to extract driving data with implicit intentions to support the development of decision-making models.

Design/methodology/approach

According to Bayesian inference, drivers who have the same intended changes are likely to share similar influencing factors and states. Building on this principle, this study proposes an approach to extract data on vehicles that intended to execute specific behaviors but failed to do so. This is achieved by computing driving similarities between candidate vehicles and benchmark vehicles using standard similarity metrics that take into account the surrounding vehicles' location topology and individual vehicle motion states. In doing so, the method enables a more comprehensive analysis of driving behavior and intention.
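
The core step, scoring how closely a candidate vehicle's state matches a benchmark vehicle's state, can be sketched as follows. The cosine metric and the four-element feature vector (speed, acceleration, lead gap, adjacent gap) are illustrative assumptions, not the paper's exact metric or features.

```python
import numpy as np

def state_similarity(candidate, benchmark):
    """Cosine similarity between two vehicle state vectors.

    Each vector stacks motion states with surrounding-vehicle
    topology features, e.g. [speed, acceleration, lead gap,
    adjacent-lane gap].
    """
    a, b = np.asarray(candidate, float), np.asarray(benchmark, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The candidate resembles the lane-change benchmark far more than the
# lane-keep benchmark, so it would be flagged as an unexecuted
# lane-change intent.
candidate   = [28.0, 0.6, 12.0, 35.0]
lane_change = [27.5, 0.7, 11.0, 33.0]
lane_keep   = [22.0, 0.0, 40.0, 38.0]
print(state_similarity(candidate, lane_change) >
      state_similarity(candidate, lane_keep))
```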

Findings

The proposed method is verified on the Next Generation SIMulation (NGSIM) dataset, which confirms its ability to reveal similarities between vehicles executing similar behaviors during naturalistic decision-making processes. The approach is also validated using simulated data, achieving an accuracy of 96.3 per cent in recognizing vehicles with specific driving behavior intentions that are not executed.

Originality/value

This study provides an innovative approach to extracting driving data with implicit intentions and offers strong support for developing data-driven decision-making models for autonomous driving. With the support of this approach, the development of autonomous vehicles can capture more real driving experience from human drivers, moving towards a safer and more efficient future.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 23 September 2022

Hossein Sohrabi and Esmatullah Noorzai


Abstract

Purpose

The present study aims to develop a risk-supported case-based reasoning (RS-CBR) approach for water-related projects by incorporating various uncertainties and risks in the revision step.

Design/methodology/approach

The cases were extracted by studying 68 water-related projects. This research employs earned value management (EVM) factors to capture time and cost features; economic, natural, technical and project risks to account for uncertainties; and supervised learning models to estimate cost overrun. Time-series algorithms were also used to predict construction cost indexes (CCI) and model improvements in future forecasts. Outliers were removed during pre-processing. Next, the datasets were split into training and testing sets, and the algorithms were implemented. The accuracy of the different models was measured with the mean absolute percentage error (MAPE) and the normalized root mean square error (NRMSE) criteria.
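
The two error criteria can be sketched in a few lines. Note that NRMSE has several conventions; the range-normalised form below is one common choice, and the abstract does not state which normalisation the authors use. The toy overrun ratios are invented for illustration.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in per cent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def nrmse(y_true, y_pred):
    """Root mean square error normalised by the range of the targets."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# Toy cost-overrun ratios vs. model predictions.
actual    = [0.10, 0.25, 0.40, 0.55]
predicted = [0.12, 0.22, 0.41, 0.50]
print(mape(actual, predicted), nrmse(actual, predicted))
```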

Findings

The findings show an improvement in the accuracy of predictions using datasets that consider uncertainties, and ensemble algorithms such as Random Forest and AdaBoost had higher accuracy. Also, among the single algorithms, the support vector regressor (SVR) with the sigmoid kernel outperformed the others.

Originality/value

This research is the first attempt to develop a case-based reasoning model based on various risks and uncertainties. The developed model shows a promising overlap with machine learning models in predicting cost overruns. The model was implemented on the collected water-related projects, and the results have been reported.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 2
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 1 January 2024

Shrutika Sharma, Vishal Gupta, Deepa Mudgal and Vishal Srivastava


Abstract

Purpose

Three-dimensional (3D) printing is highly dependent on printing process parameters for achieving high mechanical strength. It is a time-consuming and expensive operation to experiment with different printing settings. The current study aims to propose a regression-based machine learning model to predict the mechanical behavior of ulna bone plates.

Design/methodology/approach

The bone plates were fabricated using the fused deposition modeling (FDM) technique, with the printing attributes varied. Machine learning models, including linear regression, AdaBoost regression, gradient boosting regression (GBR), random forest, decision trees and k-nearest neighbors, were trained to predict tensile strength and flexural strength. Model performance was assessed using root mean square error (RMSE), coefficient of determination (R2) and mean absolute error (MAE).
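
The "mean plus or minus standard deviation" scores reported below suggest repeated evaluation over data splits. A minimal sketch of k-fold RMSE aggregation follows; a trivial fold-mean predictor stands in for the paper's regressors, and the strength values are invented for illustration.

```python
import numpy as np

def kfold_rmse(y, k=5, seed=0):
    """Mean and std of RMSE for a baseline mean-predictor over k folds.

    A real study would fit a regressor (e.g. gradient boosting) on
    each training fold; here the fold-mean of y stands in for it.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        y_hat = y[train].mean()          # stand-in "model"
        scores.append(np.sqrt(np.mean((y[test] - y_hat) ** 2)))
    return np.mean(scores), np.std(scores)

y = np.array([30.1, 31.5, 29.8, 33.2, 30.9, 32.4, 28.7, 31.0,
              29.5, 32.8])              # e.g. tensile strengths, MPa
mean_rmse, std_rmse = kfold_rmse(y, k=5)
print(f"{mean_rmse:.3f} +/- {std_rmse:.3f} MPa")
```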

Findings

Traditional experimentation with various settings is both time-consuming and expensive, emphasizing the need for alternative approaches. Among the models tested, the GBR model demonstrated the best performance in predicting both tensile and flexural strength, achieving the lowest RMSE, highest R2 and lowest MAE: 1.4778 ± 0.4336 MPa, 0.9213 ± 0.0589 and 1.2555 ± 0.3799 MPa, respectively, for tensile strength, and 3.0337 ± 0.3725 MPa, 0.9269 ± 0.0293 and 2.3815 ± 0.2915 MPa, respectively, for flexural strength. The findings open up opportunities for doctors and surgeons to use GBR as a reliable tool for fabricating patient-specific bone plates without the need for extensive trial experiments.

Research limitations/implications

The current study is limited to the usage of a few models. Other machine learning-based models can be used for prediction-based study.

Originality/value

This study uses machine learning to predict the mechanical properties of FDM-based distal ulna bone plate, replacing traditional design of experiments methods with machine learning to streamline the production of orthopedic implants. It helps medical professionals, such as physicians and surgeons, make informed decisions when fabricating customized bone plates for their patients while reducing the need for time-consuming experimentation, thereby addressing a common limitation of 3D printing medical implants.

Details

Rapid Prototyping Journal, vol. 30 no. 3
Type: Research Article
ISSN: 1355-2546

Keywords

Article
Publication date: 3 April 2024

Rizwan Ali, Jin Xu, Mushahid Hussain Baig, Hafiz Saif Ur Rehman, Muhammad Waqas Aslam and Kaleem Ullah Qasim


Abstract

Purpose

This study aims to decode artificial intelligence (AI)-based tokens' complex dynamics and predictability using a comprehensive multivariate framework that integrates technical and macroeconomic indicators.

Design/methodology/approach

Using advanced machine learning techniques such as gradient boosting regression (GBR), random forest (RF) and, notably, long short-term memory (LSTM) networks, this research provides a nuanced understanding of the factors driving the performance of AI tokens. The study's comparative analysis highlights the superior predictive capabilities of LSTM models, as evidenced by their performance across various AI digital tokens such as AGIX (SingularityNET), Cortex and Numeraire (NMR).
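
Before a daily price series can be fed to a sequence model such as an LSTM, it must be cut into windowed supervised samples. A minimal NumPy sketch of that preprocessing step follows; the window length is an illustrative choice, not taken from the paper, and a multivariate setup would stack extra indicator columns along the last axis.

```python
import numpy as np

def make_windows(series, lookback=5):
    """Turn a 1-D price series into (samples, lookback, 1) inputs
    and next-step targets for a sequence model such as an LSTM."""
    series = np.asarray(series, float)
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

prices = np.linspace(1.0, 2.0, 30)   # toy daily token prices
X, y = make_windows(prices, lookback=5)
print(X.shape, y.shape)  # (25, 5, 1) (25,)
```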

Findings

Through an intricate exploration of feature importance and the impact of speculative behaviour, the research elucidates the long-term patterns and resilience of AI-based tokens against economic shifts. The SHapley Additive exPlanations (SHAP) analysis shows that technical factors and some macroeconomic factors play a dominant role in price prediction. The study also examines the potential of these models for strategic investment and hedging, underscoring their relevance in an increasingly digital economy.

Originality/value

To the authors' knowledge, research frameworks for forecasting and modelling the current leading AI tokens are absent. Owing to the lack of studies on the relationship between the AI token market and other factors, forecasting is especially demanding. This study provides a robust predictive framework to accurately identify the changing trends of AI tokens within a multivariate context and fills the gaps in existing research. Detailed predictive analytics with modern AI algorithms and careful model interpretation can elaborate on the behaviour patterns of emerging decentralised digital AI-based token prices.

Details

Journal of Economic Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0144-3585

Keywords

Article
Publication date: 26 December 2023

Farshad Peiman, Mohammad Khalilzadeh, Nasser Shahsavari-Pour and Mehdi Ravanshadnia


Abstract

Purpose

Earned value management (EVM)-based models for estimating project actual duration (AD) and cost at completion using various methods are continuously developed to improve the accuracy and actualization of predicted values. This study primarily aimed to examine natural gradient boosting (NGBoost-2020) with the classification and regression trees (CART) base model (base learner). To the best of the authors' knowledge, this concept has never been applied to the EVM AD forecasting problem. Consequently, the authors compared this method with the single K-nearest neighbor (KNN) method, the ensemble method of extreme gradient boosting (XGBoost-2016) with the CART base model, and the optimal EVM equation, the earned schedule (ES) equation with a performance factor equal to 1 (ES1). The paper also sought to determine the extent to which the World Bank's two legal factors affect countries and how the two legal causes of delay (related to institutional flaws) influence AD prediction models.
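
The ES1 benchmark mentioned above has a closed form under the standard earned-schedule convention: the estimated duration at completion is AT + (PD - ES) / PF, which with PF = 1 reduces to AT + PD - ES. The sketch below is this textbook formulation, not code from the paper, and the planned-value curve is invented for illustration.

```python
def earned_schedule(ev, pv_curve):
    """Earned schedule: the time at which the cumulative planned-value
    curve reaches the current earned value (linear interpolation).

    pv_curve[t] is the planned value at the end of period t,
    with pv_curve[0] = 0 at the project start.
    """
    for t in range(1, len(pv_curve)):
        if pv_curve[t] >= ev:
            frac = (ev - pv_curve[t - 1]) / (pv_curve[t] - pv_curve[t - 1])
            return (t - 1) + frac
    return float(len(pv_curve) - 1)

def es1_duration(at, pd, es, pf=1.0):
    """ES-based estimate of actual duration: AT + (PD - ES) / PF."""
    return at + (pd - es) / pf

pv = [0, 10, 25, 45, 70, 100]      # toy planned values; PD = 5 periods
es = earned_schedule(38, pv)        # an EV of 38 falls inside period 3
print(es1_duration(at=4, pd=5, es=es))
```

The machine learning models in the study predict the same dimensionless target, ETC(t)/PD, from EVM-based inputs rather than from this single equation.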

Design/methodology/approach

In this paper, data from 30 construction projects of various building types in Iran, Pakistan, India, Turkey, Malaysia and Nigeria (chosen for the high number of delayed projects and the detrimental effects of these delays in these countries) were used to develop three models. The target variable of the models was a dimensionless output, the ratio of estimated duration to completion (ETC(t)) to planned duration (PD). A total of 426 tracking periods were used to build the three models, with 353 samples and 23 projects in the training set and 73 samples (17% of the total) and six projects (21% of the total) in the testing set. Furthermore, 17 dimensionless input variables were used, including ten based on the main variables and performance indices of EVM and several other variables detailed in the study. The three models were subsequently implemented in Python using several GitHub-hosted code repositories.

Findings

For the testing set of the optimal model (NGBoost), the better percentage mean (better%) of the prediction error (based on projects with a lower error percentage) of NGBoost compared with the two single models, KNN and ES1, as well as the total mean absolute percentage error (MAPE) and mean lags (MeLa, indicating model stability), were 100%, 83.33%, 5.62% and 3.17%, respectively. Notably, the total MAPE and MeLa for the NGBoost model testing set, which had ten EVM-based input variables, were 6.74% and 5.20%, respectively. The ensemble artificial intelligence (AI) models exhibited a much lower MAPE than ES1, and ES1 was less stable in prediction than NGBoost. Excessive and unusual MAPE and MeLa values occurred only with the two single models; however, on some data sets, ES1 outperformed the AI models. NGBoost also outperformed the other models, especially the single models, for most developing countries and was more accurate than previously presented optimized models. In addition, sensitivity analysis was conducted on the NGBoost-predicted outputs of the 30 projects using the SHapley Additive exPlanations (SHAP) method. All variables demonstrated an effect on ETC(t)/PD. The results revealed that the most influential input variables, in order of importance, were actual time (AT) to PD, regulatory quality (RQ), earned duration (ED) to PD, schedule cost index (SCI), planned complete percentage, rule of law (RL), actual complete percentage (ACP) and the ETC(t) of the optimal ES equation to PD. A probabilistic hybrid model was selected based on the outputs predicted by the NGBoost and XGBoost models and the MAPE values of the three AI models. The 95% prediction interval of the NGBoost–XGBoost model revealed that 96.10% and 98.60% of the actual output values of the testing and training sets, respectively, lie within this interval.

Research limitations/implications

Due to the use of projects performed in different countries, it was not possible to distribute the questionnaire to the managers and stakeholders of 30 projects in six developing countries. Due to the low number of EVM-based projects in various references, it was unfeasible to utilize other types of projects. Future prospects include evaluating the accuracy and stability of NGBoost for timely and non-fluctuating projects (mostly in developed countries), considering a greater number of legal/institutional variables as input, using legal/institutional/internal/inflation inputs for complex projects with extremely high uncertainty (such as bridge and road construction) and integrating these inputs and NGBoost with new technologies (such as blockchain, radio frequency identification (RFID) systems, building information modeling (BIM) and Internet of things (IoT)).

Practical implications

The legal/institutional recommendations made to governments are strict control of prices, adequate supervision, removal of additional rules, removal of unfair regulations, clarification of the future trend of a law change, strict monitoring of property rights, simplification of the processes for obtaining permits and elimination of unnecessary changes, particularly in developing countries and at the onset of irregular projects with limited information and numerous uncertainties. Furthermore, the managers and stakeholders of this group of projects were informed at an early stage of the significance of seven construction variables (institutional/legal external risks, internal factors and inflation): using time-series (dynamic) models to predict AD, accurate calculation of progress percentage variables, the effectiveness of building type in non-residential projects, regular updating of inflation during implementation, the effectiveness of employer type in the early stage of public projects in addition to the late stage of private projects, and allocating reserve duration (buffer) in order to respond to institutional/legal risks.

Originality/value

Ensemble methods were optimized in 70% of references. To the authors' knowledge, NGBoost, from the set of ensemble methods, has not been used to estimate construction project duration and delays. NGBoost is an effective method for considering uncertainties in irregular projects, which are often implemented in developing countries. Furthermore, AD estimation models fail to incorporate RQ and RL from the World Bank's worldwide governance indicators (WGI) as risk-based inputs. In addition, the various WGI, EVM and inflation variables are not combined with substantial degrees of institutional delay risks as inputs. Consequently, owing to the existence of critical and complex risks in different countries, it is vital to consider legal and institutional factors, especially when an in-depth, accurate and reality-based method such as SHAP is used for analysis.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 18 September 2023

Jianxiang Qiu, Jialiang Xie, Dongxiao Zhang and Ruping Zhang


Abstract

Purpose

Twin support vector machine (TSVM) is an effective machine learning technique. However, the TSVM model does not consider the influence of different data samples on the optimal hyperplane, which results in its sensitivity to noise. To solve this problem, this study proposes a twin support vector machine model based on fuzzy systems (FSTSVM).

Design/methodology/approach

This study designs an effective fuzzy membership assignment strategy based on fuzzy systems. It describes the relationship between three inputs and the fuzzy membership of a sample by defining fuzzy inference rules and then outputs the fuzzy membership of the sample. Combining this strategy with TSVM yields the proposed FSTSVM. Moreover, to speed up model training, this study employs a coordinate descent strategy with shrinking by an active set. To evaluate the performance of FSTSVM, experiments were conducted on artificial data sets and UCI data sets.
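
The abstract does not give the paper's fuzzy inference rules, but the general idea of membership-based noise handling can be illustrated with a common distance-based membership from classical fuzzy SVMs (here the variant mu = 1 / (1 + d); the data points are invented):

```python
import numpy as np

def fuzzy_memberships(X, labels):
    """Assign each sample a membership in (0, 1] that shrinks with its
    distance d from its own class centre (mu = 1 / (1 + d)), so
    far-away, likely-noisy samples are down-weighted in training."""
    X = np.asarray(X, float)
    m = np.empty(len(X))
    for cls in np.unique(labels):
        mask = labels == cls
        centre = X[mask].mean(axis=0)
        d = np.linalg.norm(X[mask] - centre, axis=1)
        m[mask] = 1.0 / (1.0 + d)
    return m

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0],   # class 0; [3, 3] is noise
              [5.0, 5.0], [5.1, 4.9]])              # class 1
labels = np.array([0, 0, 0, 1, 1])
m = fuzzy_memberships(X, labels)
print(m.argmin())  # index of the suspected noisy sample
```

In a fuzzy TSVM, these memberships weight each sample's slack penalty, so the optimal hyperplanes are pulled less by suspected noise.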

Findings

The experimental results affirm the effectiveness of FSTSVM in addressing binary classification problems with noise, demonstrating its superior robustness and generalization performance compared to existing learning models. This can be attributed to the proposed fuzzy membership assignment strategy based on fuzzy systems, which effectively mitigates the adverse effects of noise.

Originality/value

This study designs a fuzzy membership assignment strategy based on fuzzy systems that effectively reduces the negative impact caused by noise and then proposes the noise-robust FSTSVM model. Moreover, the model employs a coordinate descent strategy with shrinking by active set to accelerate the training speed of the model.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 26 February 2024

Chong Wu, Xiaofang Chen and Yongjie Jiang


Abstract

Purpose

While the Chinese securities market is booming, the phenomenon of listed companies falling into financial distress is also emerging, which affects the operation and development of enterprises and also jeopardizes the interests of investors. Therefore, it is important to understand how to accurately and reasonably predict the financial distress of enterprises.

Design/methodology/approach

In the present study, ensemble feature selection (EFS) and improved stacking were used for financial distress prediction (FDP). Mutual information, analysis of variance (ANOVA), random forest (RF), genetic algorithms and recursive feature elimination (RFE) were chosen for EFS to select features. Since information may be lost when the base learners' outputs are fed directly into the meta-learner, the features with high importance were fed into the meta-learner as well. A screening layer was added to select the meta-learner with the best performance. Finally, Optuna was used for hyperparameter tuning of the learners.
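
The idea of feeding features to the meta-learner alongside the base learners' outputs can be sketched with scikit-learn's StackingClassifier, whose passthrough option forwards the input features to the final estimator. This is a simplification: it forwards all features rather than only the high-importance ones, and the dataset and learners are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for financial-distress data.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    passthrough=True,   # forward input features to the meta-learner
)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```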

Findings

An empirical study was conducted with a sample of A-share listed companies in China. The F1-score of the model constructed using the features screened by EFS reached 84.55%, representing an improvement of 4.37% compared to the original features. To verify the effectiveness of improved stacking, benchmark model comparison experiments were conducted. Compared to the original stacking model, the accuracy of the improved stacking model was improved by 0.44%, and the F1-score was improved by 0.51%. In addition, the improved stacking model had the highest area under the curve (AUC) value (0.905) among all the compared models.

Originality/value

Compared to previous models, the proposed FDP model has better performance, thus bridging the research gap of feature selection. The present study provides new ideas for stacking improvement research and a reference for subsequent research in this field.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 3 November 2023

Vimala Balakrishnan, Aainaa Nadia Mohammed Hashim, Voon Chung Lee, Voon Hee Lee and Ying Qiu Lee



Abstract

Purpose

This study aims to develop a machine learning model to detect structure fire fatalities using a dataset comprising 11,341 cases from 2011 to 2019.

Design/methodology/approach

Exploratory data analysis (EDA) was conducted prior to modelling, after which ten machine learning models were evaluated.

Findings

The main fatal structure fire risk factors were fires originating from bedrooms, living areas and the cooking/dining areas. The highest fatality rate (20.69%) was reported for fires ignited due to bedding (23.43%), despite a low fire incident rate (3.50%). Using 21 structure fire features, Random Forest (RF) yielded the best detection performance with 86% accuracy, followed by Decision Tree (DT) with bagging (accuracy = 84.7%).

Research limitations/practical implications

Limitations of the study are pertaining to data quality and grouping of categories in the data pre-processing stage, which could affect the performance of the models.

Originality/value

The study is the first of its kind to use risk factors to classify fatal structure fires, focusing in particular on structure fire fatalities. Most previous studies examined the importance of fire risk factors and their relationship to the fire risk level.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords
