Search results

1 – 10 of over 8000
Article
Publication date: 18 September 2023

Jianxiang Qiu, Jialiang Xie, Dongxiao Zhang and Ruping Zhang

Twin support vector machine (TSVM) is an effective machine learning technique. However, the TSVM model does not consider the influence of different data samples on the optimal…

Abstract

Purpose

Twin support vector machine (TSVM) is an effective machine learning technique. However, the TSVM model does not consider the influence of different data samples on the optimal hyperplane, which results in its sensitivity to noise. To solve this problem, this study proposes a twin support vector machine model based on fuzzy systems (FSTSVM).

Design/methodology/approach

This study designs an effective fuzzy membership assignment strategy based on fuzzy systems. It describes the relationship between the three inputs and the fuzzy membership of a sample by defining fuzzy inference rules, and then derives the sample's fuzzy membership. Combining this strategy with TSVM yields the proposed FSTSVM. Moreover, to speed up model training, this study employs a coordinate descent strategy with shrinking by active set. To evaluate the performance of FSTSVM, experiments are conducted on artificial data sets and UCI data sets.
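The general idea behind fuzzy membership assignment can be sketched as follows. The paper's own strategy uses fuzzy inference rules over three inputs; the centroid-distance heuristic below is a simplified, hypothetical stand-in that illustrates why down-weighting atypical samples reduces sensitivity to noise:

```python
import numpy as np

def fuzzy_membership(X, y, eps=1e-6):
    """Assign each sample a membership in (0, 1] based on its distance to
    its own class centroid: points far from the centroid (likely noise)
    receive smaller weights and thus less influence on the hyperplane."""
    m = np.empty(len(y), dtype=float)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        r = d.max() + eps                       # class "radius"
        m[idx] = np.clip(1.0 - d / r, eps, 1.0)
    return m

# Toy data: the third point is a noisy outlier within class 0.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0],
              [3.0, 3.0], [3.1, 2.9], [-2.0, -2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = fuzzy_membership(X, y)   # w[2] is the smallest weight in class 0
```

The memberships can then be used as per-sample weights in the SVM objective, so a mislabelled or noisy point contributes less to the optimal hyperplane.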

Findings

The experimental results affirm the effectiveness of FSTSVM in addressing binary classification problems with noise, demonstrating its superior robustness and generalization performance compared to existing learning models. This can be attributed to the proposed fuzzy membership assignment strategy based on fuzzy systems, which effectively mitigates the adverse effects of noise.

Originality/value

This study designs a fuzzy membership assignment strategy based on fuzzy systems that effectively reduces the negative impact caused by noise and then proposes the noise-robust FSTSVM model. Moreover, the model employs a coordinate descent strategy with shrinking by active set to accelerate the training speed of the model.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 29 April 2021

Emmanuel Adinyira, Emmanuel Akoi-Gyebi Adjei, Kofi Agyekum and Frank Desmond Kofi Fugar

Knowledge of the effect of various cash-flow factors on expected project profit is important to effectively manage productivity on construction projects. This study was conducted…

Abstract

Purpose

Knowledge of the effect of various cash-flow factors on expected project profit is important to effectively manage productivity on construction projects. This study was conducted to develop and test the sensitivity of a Machine Learning Support Vector Regression Algorithm (SVRA) to predict construction project profit in Ghana.

Design/methodology/approach

The study relied on data from 150 institutional projects executed within the past five years (2014–2018) in developing the model. Eighty percent (80%) of the data from the 150 projects was used in the hyperparameter selection and final training phases of model development, and the remaining 20% for model testing. Using MATLAB for Support Vector Regression, the parameters available for tuning were the epsilon value, the kernel scale, the box constraint and standardisation. A sensitivity index was computed to determine the degree to which the independent variables impact the dependent variable.
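The 80/20 split and the role of the epsilon parameter can be sketched generically. This is an illustrative Python sketch of SVR's epsilon-insensitive tube and a seeded split over 150 projects, not the authors' MATLAB code:

```python
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    """SVR's epsilon-insensitive loss: residuals inside the epsilon tube
    cost nothing; beyond the tube, cost grows linearly."""
    excess = np.abs(y_true - y_pred) - epsilon
    return float(np.maximum(excess, 0.0).mean())

# Seeded 80/20 split over 150 projects, mirroring the abstract's setup.
rng = np.random.default_rng(0)
idx = rng.permutation(150)
train_idx, test_idx = idx[:120], idx[120:]
```

Tuning epsilon trades off tolerance to small residuals against model sparsity; the box constraint (C in most libraries) bounds how strongly violations outside the tube are penalised.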

Findings

The developed model's predictions closely fitted the training data, explaining the variability of the response data around its mean. An average predictive accuracy of 73.66% was achieved with all the variables on the different projects in validation. The developed SVR model was most sensitive to labour and loans.

Originality/value

The developed SVRA combines variation, defective works and labour with other financial constraints, which have been the variables used in previous studies. It will aid contractors in predicting profit on completion at commencement and also provide information on the effect of changes to cash-flow factors on profit.

Details

Engineering, Construction and Architectural Management, vol. 28 no. 5
Type: Research Article
ISSN: 0969-9988

Keywords

Book part
Publication date: 25 October 2023

Md Aminul Islam and Md Abu Sufian

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The…

Abstract

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The study used advanced tools to thoroughly scrutinize key performance indicators integral to the functioning of smart cities, thereby enhancing leadership and decision-making strategies. Our work involves the implementation of various machine learning models, such as Logistic Regression, Support Vector Machine, Decision Tree, Naive Bayes, and Artificial Neural Networks (ANN), on the data. Notably, the Support Vector Machine and Bernoulli Naive Bayes models exhibit robust performance, with an accuracy rate of 70%. In particular, the study underscores the employment of an ANN model on our existing dataset, optimized using the Adam optimizer. Although the model yields an overall accuracy of 61% and a precision score of 58%, implying correct predictions for the positive class 58% of the time, a comprehensive performance assessment using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric was necessary. This evaluation results in a score of 0.475 at a threshold of 0.5, indicating room for model enhancement. These models and their performance metrics serve as a key cog in our data analytics pipeline, providing decision-makers and city leaders with actionable insights that can steer urban service management decisions. Through real-time data availability and intuitive visualization dashboards, these leaders can promptly comprehend the current state of their services, pinpoint areas requiring improvement, and make informed decisions to bolster these services. This research illuminates the potential for data analytics, machine learning, and AI to significantly upgrade urban service management in smart cities, fostering sustainable and livable communities.
Moreover, our findings contribute valuable knowledge to other cities aiming to adopt similar strategies, thus aiding the continued development of smart cities globally.

Details

Technology and Talent Strategies for Sustainable Smart Cities
Type: Book
ISBN: 978-1-83753-023-6

Keywords

Article
Publication date: 10 September 2024

Buse Un, Ercan Erdis, Serkan Aydınlı, Olcay Genc and Ozge Alboga

This study aims to develop a predictive model using machine learning techniques to forecast construction dispute outcomes, thereby minimizing economic and social losses and…

Abstract

Purpose

This study aims to develop a predictive model using machine learning techniques to forecast construction dispute outcomes, thereby minimizing economic and social losses and promoting amicable settlements between parties.

Design/methodology/approach

This study develops a novel conceptual model incorporating project characteristics, root causes, and underlying causes to predict construction dispute outcomes. Utilizing a dataset of arbitration cases in Türkiye, the model was tested using five machine learning algorithms, namely Logistic Regression, Support Vector Machines, Decision Trees, K-Nearest Neighbors, and Random Forest, in a Python environment. The performance of each algorithm was evaluated to identify the most accurate predictive model.

Findings

The analysis revealed that the Support Vector Machine algorithm achieved the highest prediction accuracy at 71.65%. Twelve significant variables were identified for the best model, namely work type, root causes, delays from a contractor, extension of time, different site conditions, poorly written contracts, unit price determination, penalties, price adjustment, acceptances, delay of schedule, and extra payment claims. The study's results surpass some existing models in the literature, highlighting the model's robustness and practical applicability in forecasting construction dispute outcomes.

Originality/value

This study is unique in its consideration of various contract, dispute, and project attributes to predict construction dispute outcomes using machine learning techniques. It uses a fact-based dataset of arbitration cases from Türkiye, providing a robust and practical predictive model applicable across different regions and project types. It advances the literature by comparing multiple machine learning algorithms to achieve the highest prediction accuracy and offering a comprehensive tool for proactive dispute management.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 24 December 2021

Neetika Jain and Sangeeta Mittal

A cost-effective way to achieve fuel economy is to reinforce positive driving behaviour. Driving behaviour can be controlled if drivers can be alerted for behaviour that results…

Abstract

Purpose

A cost-effective way to achieve fuel economy is to reinforce positive driving behaviour. Driving behaviour can be controlled if drivers can be alerted to behaviour that results in poor fuel economy. Fuel consumption must be tracked and monitored instantaneously rather than as an average over the entire trip duration. A single-step application of machine learning (ML) is not sufficient to model both the prediction of instantaneous fuel consumption and the detection of anomalous fuel economy. The study designs an ML pipeline to track and monitor instantaneous fuel economy and detect anomalies.

Design/methodology/approach

This research iteratively applies different variations of a two-step ML pipeline to a driving dataset for hatchback cars. The first step addresses the problem of accurate measurement and prediction of fuel economy using time series driving data, and the second step detects abnormal fuel economy in relation to contextual information. A long short-term memory autoencoder learns and uses the most salient features of the time series data to build a regression model. The contextual anomaly is detected by following two approaches: a kernel quantile estimator and a one-class support vector machine. The kernel quantile estimator sets a dynamic threshold for detecting anomalous behaviour; any error beyond the threshold is classified as an anomaly. The one-class support vector machine learns the training error pattern and applies the model to test data for anomaly detection. The two-step ML pipeline is further modified by replacing the long short-term memory autoencoder with a gated recurrent unit autoencoder, and the performance of both models is compared. Speed recommendations and feedback are issued to the driver based on detected anomalies to control aggressive behaviour.
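The dynamic-threshold step can be sketched as follows. A rolling empirical quantile stands in here for the paper's kernel quantile estimator, and the prediction errors are synthetic:

```python
import numpy as np

def detect_anomalies(errors, window=50, q=0.99):
    """Flag prediction errors that exceed a dynamic threshold: the
    rolling empirical q-quantile of the last `window` errors (a
    simplified stand-in for a kernel quantile estimator)."""
    flags = np.zeros(len(errors), dtype=bool)
    for i in range(window, len(errors)):
        threshold = np.quantile(errors[i - window:i], q)
        flags[i] = errors[i] > threshold
    return flags

# Synthetic absolute prediction errors with one injected anomaly.
rng = np.random.default_rng(1)
errors = np.abs(rng.normal(0.0, 1.0, 300))
errors[200] += 8.0
flags = detect_anomalies(errors)   # flags[200] is True
```

Because the threshold adapts to the recent error distribution, the detector tolerates gradual context changes (traffic, terrain) while still flagging sharp deviations.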

Findings

A composite long short-term memory autoencoder was compared with a gated recurrent unit autoencoder. Both models achieve prediction accuracy within a range of 98%–100% in the first step. Recall and accuracy metrics for anomaly detection using the kernel quantile estimator remain within 98%–100%, whereas the one-class support vector machine approach performs within the range of 99.3%–100%.

Research limitations/implications

The proposed approach does not consider socio-demographic or physiological information of drivers due to privacy concerns. However, it can be extended to correlate the driver's physiological state, such as fatigue, sleep and stress, with driving behaviour and fuel economy. The anomaly detection approach here is limited to providing feedback to the driver; it could be extended to give contextual feedback to the steering or throttle controller. In the future, a controller-based system can be combined with the anomaly detection approach to control the acceleration and braking actions of the driver.

Practical implications

The suggested approach is helpful in monitoring and reinforcing fuel-economical driving behaviour among fleet drivers as per different environmental contexts. It can also be used as a training tool for improving driving efficiency for new drivers. It keeps drivers engaged positively by issuing a relevant warning for significant contextual anomalies and avoids issuing a warning for minor operational errors.

Originality/value

This paper contributes to the existing literature by providing an ML pipeline approach to track and monitor instantaneous fuel economy rather than relying on average fuel economy values. The approach is further extended to detect contextual driving behaviour anomalies and optimise fuel economy. The main contributions of this approach are as follows: (1) a prediction model is applied to fine-grained time series driving data to predict instantaneous fuel consumption; (2) anomalous fuel economy is detected by comparing prediction error against a threshold and analysing error patterns based on contextual information.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 1 August 2024

Shikha Pandey, Yogesh Iyer Murthy and Sumit Gandhi

This study aims to assess support vector machine (SVM) models' predictive ability to estimate half-cell potential (HCP) values from input parameters by using Bayesian…

Abstract

Purpose

This study aims to assess support vector machine (SVM) models' predictive ability to estimate half-cell potential (HCP) values from input parameters by using Bayesian optimization, grid search and random search.

Design/methodology/approach

A data set with 1,134 rows and 6 columns is used for principal component analysis (PCA) to minimize dimensionality while preserving 95% of the explained variance. HCP is predicted from temperature, age, relative humidity, and X and Y lengths. Root mean square error (RMSE), R-squared, mean squared error (MSE), mean absolute error, prediction speed and training time are used to measure model effectiveness. Shapley (SHAP) analysis is also executed.
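The PCA step (keep the fewest components that explain 95% of the variance) can be sketched in plain NumPy; the data below are a synthetic stand-in for the 1,134 × 6 data set, not the study's measurements:

```python
import numpy as np

def pca_reduce(X, var_ratio=0.95):
    """Project X onto the fewest principal components whose cumulative
    explained variance reaches var_ratio (95% in the abstract)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / S.dot(S)                # variance ratio per component
    k = int(np.searchsorted(np.cumsum(explained), var_ratio)) + 1
    return Xc @ Vt[:k].T, k

# Synthetic stand-in for the 1,134 x 6 data set: two informative
# directions, three redundant columns, one near-constant noise column.
rng = np.random.default_rng(0)
base = rng.normal(size=(1134, 2))
mix = np.array([[1.0, -0.5, 0.3], [0.5, 1.0, -0.2]])
X = np.hstack([base, base @ mix, 0.01 * rng.normal(size=(1134, 1))])
Z, k = pca_reduce(X)   # k == 2: two components explain >95% here
```

Reducing dimensionality this way shrinks SVM training time, one of the efficiency trade-offs the study reports.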

Findings

The study reveals variations in predictive performance across different optimization methods, with RMSE values ranging from 18.365 to 30.205 and R-squared values spanning from 0.88 to 0.96. Additionally, differences in training times, prediction speeds and model complexities are observed, highlighting the trade-offs between model accuracy and computational efficiency.

Originality/value

This study contributes to the understanding of SVM model efficacy in HCP prediction, emphasizing the importance of optimization techniques, model complexity and dimensionality reduction methods such as PCA.

Details

Anti-Corrosion Methods and Materials, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0003-5599

Keywords

Article
Publication date: 6 January 2021

Miao Fan and Ashutosh Sharma

In order to improve the accuracy of project cost prediction, considering the limitations of existing models, the construction cost prediction model based on SVM (Standard Support…

Abstract

Purpose

In order to improve the accuracy of project cost prediction, and considering the limitations of existing models, a construction cost prediction model based on SVM (Standard Support Vector Machine) and LSSVM (Least Squares Support Vector Machine) is put forward.

Design/methodology/approach

In the era of competitive growth and Industry 4.0, cost prediction plays a key role.

Findings

At the same time, the original data are reduced in dimensionality. The processed data are imported into the SVM and LSSVM models for training and prediction, respectively; the prediction results are compared and analysed, and the more reasonable prediction model is selected.

Originality/value

The prediction result is further improved by parameter optimization. The relative error of the prediction model is within 7%, and the prediction accuracy is high and stable.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 27 September 2011

Aleksandar Kovačević, Dragan Ivanović, Branko Milosavljević, Zora Konjović and Dušan Surla

The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific…

Abstract

Purpose

The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS).

Design/methodology/approach

The system is based on machine learning and performs automatic extraction and classification of metadata into eight pre-defined categories. The extraction task is realised as a classification process. For the purpose of classification, each row of text is represented with a vector that comprises different features: formatting, position, characteristics related to the words, etc. Experiments were performed with standard classification models. Both a single classifier with all eight categories and eight individual classifiers were tested. Classifiers were evaluated using five-fold cross validation on a manually annotated corpus comprising 100 scientific papers in PDF format, collected from various conferences, journals and authors' personal web pages.
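The five-fold cross validation used to evaluate the classifiers can be sketched as a generic index-splitting illustration over the 100-paper corpus (not the authors' code):

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Yield (train, test) index arrays for five-fold cross validation:
    each sample appears in exactly one test fold."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), 5)
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        yield train, test

# 100 papers in the annotated corpus, as in the abstract.
splits = list(five_fold_indices(100))
```

Each of the eight per-category SVMs would be trained on the train indices and scored (e.g. by F-measure) on the held-out fold, with the five scores averaged.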

Findings

Based on the performances obtained on classification experiments, eight separate support vector machines (SVM) models (each of which recognises its corresponding category) were chosen. All eight models were established to have a good performance. The F‐measure was over 85 per cent for almost all of the classifiers and over 90 per cent for most of them.

Research limitations/implications

Automatically extracted metadata cannot be directly entered into CRIS UNS but requires control of the curators.

Practical implications

The proposed system for automatic metadata extraction using support vector machines model was integrated into the software system, CRIS UNS. Metadata extraction has been tested on the publications of researchers from the Department of Mathematics and Informatics of the Faculty of Sciences in Novi Sad. Analysis of extracted metadata from these publications showed that the performance of the system for the previously unseen data is in accordance with that obtained by the cross‐validation from eight separate SVM classifiers. This system will help in the process of synchronising metadata from CRIS UNS with other institutional repositories.

Originality/value

The paper documents a fully automated system for metadata extraction from scientific papers that was developed. The system is based on the SVM classifier and open source tools, and is capable of extracting eight types of metadata from scientific articles of any format that can be converted to PDF. Although developed as part of CRIS UNS, the proposed system can be integrated into other CRIS systems, as well as institutional repositories and library management systems.

Article
Publication date: 18 September 2023

Fatma Ben Hamadou, Taicir Mezghani, Ramzi Zouari and Mouna Boujelbène-Abbes

This study aims to assess the predictive performance of various factors on Bitcoin returns, used for the development of a robust forecasting support decision model using machine…

Abstract

Purpose

This study aims to assess the predictive performance of various factors on Bitcoin returns, used for the development of a robust forecasting support decision model using machine learning techniques, before and during the COVID-19 pandemic. More specifically, the authors investigate the impact of the investor's sentiment on forecasting the Bitcoin returns.

Design/methodology/approach

This method uses feature selection techniques to assess the predictive performance of the different factors on the Bitcoin returns. Subsequently, the authors developed a forecasting model for the Bitcoin returns by evaluating the accuracy of three machine learning models, namely the one-dimensional convolutional neural network (1D-CNN), the bidirectional deep learning long short-term memory (BLSTM) neural networks and the support vector machine model.

Findings

The findings shed light on the importance of the investor's sentiment in enhancing the accuracy of the return forecasts. Furthermore, the investor's sentiment, the economic policy uncertainty (EPU), gold and the financial stress index (FSI) are the top determinants before the COVID-19 outbreak. However, there was a significant decrease in the importance of financial uncertainty (FSI and EPU) during the COVID-19 pandemic, indicating that investors attached much more importance to the sentimental side than to the traditional uncertainty factors. Regarding forecasting model accuracy, the authors found that the 1D-CNN model showed the lowest prediction error before and during the COVID-19 pandemic and outperformed the other models. It therefore represents the best-performing algorithm among its tested counterparts, while the BLSTM is the least accurate model.

Practical implications

Moreover, this study helps investors and policymakers better forecast returns using a forecasting model that can serve as a decision-making support tool. The obtained results can guide investors in uncovering the potential determinants that forecast Bitcoin returns, giving more weight to sentiment than to financial uncertainty factors during the pandemic crisis.

Originality/value

To the authors’ knowledge, this is the first study to have attempted to construct a novel crypto sentiment measure and use it to develop a Bitcoin forecasting model. In fact, the development of a robust forecasting model, using machine learning techniques, offers a practical value as a decision-making support tool for investment strategies and policy formulation.

Details

EuroMed Journal of Business, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1450-2194

Keywords

Article
Publication date: 2 March 2015

Jaganathan Gokulachandran and K. Mohandas

The accurate assessment of the life of any given tool is of great significance in any manufacturing industry. The purpose of this paper is to predict the life of a cutting tool…

Abstract

Purpose

The accurate assessment of the life of any given tool is of great significance in any manufacturing industry. The purpose of this paper is to predict the life of a cutting tool, in order to help decision-making on the next scheduled tool replacement and improve productivity.

Design/methodology/approach

This paper reports the use of two soft computing techniques, namely neuro-fuzzy logic and support vector regression (SVR), for the assessment of cutting tools. In this work, experiments are conducted based on the Taguchi approach and tool life values are obtained.

Findings

The analysis is carried out using the two soft computing techniques. Tool life values are predicted using the aforesaid techniques, and the predicted values are compared.

Practical implications

The proposed approaches are relatively simple and can be implemented easily by using software like MATLAB and Weka.

Originality/value

The proposed methodology compares neuro-fuzzy logic and SVR techniques.

Details

International Journal of Quality & Reliability Management, vol. 32 no. 3
Type: Research Article
ISSN: 0265-671X

Keywords
