Search results

1 – 10 of 709
Open Access
Article
Publication date: 18 July 2022

Youakim Badr

In this research, the authors demonstrate the advantage of reinforcement learning (RL) based intrusion detection systems (IDS) to solve very complex problems (e.g. selecting input…


Abstract

Purpose

In this research, the authors demonstrate the advantage of reinforcement learning (RL) based intrusion detection systems (IDS) to solve very complex problems (e.g. selecting input features, considering scarce resources and constraints) that cannot be solved by classical machine learning. The authors include a comparative study against intrusion detection built on statistical machine learning and representational learning, using the knowledge discovery in databases (KDD) Cup99 and Installation Support Center of Expertise (ISCX) 2012 datasets.

Design/methodology/approach

The methodology applies a data analytics approach, consisting of data exploration and machine learning model training and evaluation. To build a network-based intrusion detection system, the authors apply the dueling double deep Q-networks architecture enabled with costly features, alongside k-nearest neighbors (K-NN), support vector machines (SVM) and convolutional neural networks (CNN).
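
The dueling architecture named above splits the Q-function into a state value and per-action advantages. As an illustrative sketch only (not the authors' implementation, which also involves costly-feature handling and a double-DQN target), the standard aggregation step is:

```python
def dueling_q_values(value, advantages):
    """Dueling Q-network aggregation layer:
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage makes the V/A split identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

# One state-value estimate, three candidate actions.
q = dueling_q_values(2.0, [1.0, 0.0, -1.0])  # [3.0, 2.0, 1.0]
```

In a full agent this aggregation sits on top of two network heads sharing a feature extractor; the "double" part uses the online network to pick the argmax action and the target network to evaluate it.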

Findings

Machine learning-based intrusion detection systems are trained on historical datasets, which leads to model drift and a lack of generalization, whereas RL is trained on data collected through interaction. RL is bound to learn from its interactions with a stochastic environment in the absence of a training dataset, whereas supervised learning simply learns from collected data and requires fewer computational resources.

Research limitations/implications

All machine learning models achieved high accuracy and performance. One potential reason is that both datasets are simulated rather than realistic. It is not clear whether any validation was performed to show that the data were collected from real network traffic.

Practical implications

The study provides guidelines to implement IDS with classical supervised learning, deep learning and RL.

Originality/value

The research applied the dueling double deep Q-networks architecture enabled with costly features to build network-based intrusion detection from network traffic. It presents a comparative study of reinforcement-learning-based intrusion detection against counterparts built with statistical and representational machine learning.

Article
Publication date: 30 April 2020

Nasim Eslamirad, Soheil Malekpour Kolbadinejad, Mohammadjavad Mahdavinejad and Mohammad Mehranrad

This research aims to introduce a new methodology for integration between urban design strategies and supervised machine learning (SML) method – by applying both energy…

Abstract

Purpose

This research aims to introduce a new methodology for integration between urban design strategies and the supervised machine learning (SML) method, applying both energy engineering modeling (an evaluating phase) for existing green sidewalks and statistical energy modeling (a predicting phase) for new ones, to offer algorithms that help find the optimum morphology of green sidewalks, with high outdoor thermal comfort and fewer errors in the results.

Design/methodology/approach

The study's tool is processing by SML, predicting the future based on the past; the machine learning workflow benefits from the advantages of Python. The structure of the study consists of two main parts, as most similar studies do: engineering energy modeling and statistical energy modeling. First, some of the 2,268 models are randomly selected, simulated and sensitivity-analyzed in ENVI-met. The ENVI-met output, the thermal comfort quantity predicted mean vote (PMV), together with weather items, forms the input to Python. The resulting dataset is then processed by SML to reach a reliable final predicted output.

Findings

The SML process enables the study to find the thermal comfort of the current models and of other similar sidewalks. The results are evaluated both by the PMV mathematical model and by SML error evaluation functions, and they confirm that the average error is about 1%; the method is therefore reliable enough to apply in a variety of similar fields. The findings can support sustainable architecture strategies at building and urban scales, helping to determine, monitor and control energy-based behaviors (thermal comfort, heating, cooling, lighting and ventilation) in the operational phase of existing buildings and constructions, and in the planning and design phase of future built cases, over their whole life spans.

Research limitations/implications

The limitations of the study relate to the study variables and alternatives, which have a notable impact on the findings. Furthermore, more trustworthy input data yield more accurate output, so the modeling and simulation processes are the most significant part of the research for reaching exact results in the final step.

Practical implications

The findings can be helpful in urban design strategies. By obtaining outdoor thermal comfort from the machine learning method, urban and landscape designers, policymakers and architects can estimate the effect of their designs on air quality and urban health, and can be more confident of meeting design goals for thermal comfort in the urban atmosphere.

Social implications

By 2030, cities are expected to be the living space of about three out of five people. As green infrastructure helps moderate city climate, a relationship between green spaces and inhabitants' thermal comfort follows. Although improving outdoor thermal comfort through design methods is not a new subject, applying machine learning to predict outcomes is a new way to develop more effective design strategies and to provide comfort in the urban environment. The study's social contribution therefore lies in learning from previous projects and developing more efficient strategies to make cities more comfortable and healthy places to live, with more efficient models and less expenditure of money and time.

Originality/value

The study's achievements are expected to apply not only in Tehran but also in other climate zones as a pattern for eco-city design strategies. Although similar studies exist in other disciplines, the concept of this study is a new vision in urban studies.

Details

Smart and Sustainable Built Environment, vol. 9 no. 4
Type: Research Article
ISSN: 2046-6099

Keywords

Article
Publication date: 25 October 2021

Danni Chen, JianDong Zhao, Peng Huang, Xiongna Deng and Tingting Lu

Sparrow search algorithm (SSA) is a novel global optimization method, but it easily falls into local optima, which leads to poor search accuracy and stability. The…


Abstract

Purpose

Sparrow search algorithm (SSA) is a novel global optimization method, but it easily falls into local optima, which leads to poor search accuracy and stability. The purpose of this study is to propose an improved SSA, called LOSSA, based on a Levy flight and opposition-based learning strategy. LOSSA shows better search accuracy, faster convergence speed and stronger stability.

Design/methodology/approach

To further enhance the optimization performance of the algorithm, the Levy flight operation is introduced into the producers' search process of the original SSA to strengthen its ability to jump out of local optima. The opposition-based learning strategy generates better candidate solutions for the SSA, which helps accelerate convergence. On the one hand, the performance of LOSSA is evaluated in a set of numerical experiments on classical benchmark functions; on the other hand, the hyper-parameter optimization problem of the support vector machine (SVM) is used to test LOSSA's ability to solve practical problems.
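
The two improvements have standard forms; the sketch below assumes the usual definitions (Mantegna's algorithm for the Levy step, bound reflection for opposition-based learning), since the abstract does not spell them out:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-flight step via Mantegna's algorithm: heavy-tailed jumps
    occasionally move a producer far away, helping escape local optima."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def opposite_solution(x, lb, ub):
    """Opposition-based learning: reflect a candidate inside the bounds,
    x_opp[i] = lb[i] + ub[i] - x[i]; the better of x and x_opp is kept."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

print(opposite_solution([0.2, -1.0, 3.0], lb=[-5.0] * 3, ub=[5.0] * 3))
# [-0.2, 1.0, -3.0]
```

In a full LOSSA loop, `levy_step()` would scale the producers' position update, and `opposite_solution` would be evaluated against each candidate once per generation.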

Findings

First, the effectiveness of the two improvements is verified by the Wilcoxon signed-rank test. Second, the statistical results of the numerical experiments show a significant improvement of LOSSA over the original algorithm and other nature-inspired heuristic algorithms. Finally, the feasibility and effectiveness of LOSSA in solving the hyper-parameter optimization problem of machine learning algorithms are demonstrated.

Originality/value

An improved SSA, LOSSA, is proposed in this paper. The experimental results show that its overall performance is satisfactory: compared with the original SSA and other nature-inspired heuristic algorithms, LOSSA shows better search accuracy, faster convergence speed and stronger stability. LOSSA also showed strong performance in the hyper-parameter optimization of the SVM model.

Details

Assembly Automation, vol. 41 no. 6
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 24 August 2010

Yi‐nan Guo, Mei Yang and Da‐wei Xiao

The purpose of this paper is to find a novel optimization selection method for hyperparameter of support vector classification (SVC), responsible for the classification of…

Abstract

Purpose

The purpose of this paper is to find a novel optimization selection method for hyperparameter of support vector classification (SVC), responsible for the classification of datasets from the UCI machine learning database repository.

Design/methodology/approach

A novel two‐stage optimization selection method for hyperparameters is proposed. It makes use of explicit information derived from the problem and implicit knowledge extracted from the evolution process to improve classifier performance. In the first stage, the search range of each hyperparameter is determined according to the requirements of the problem. In the second stage, optimal hyperparameters are found within that range by an adaptive chaotic cultural algorithm, which uses the implicit knowledge extracted from the evolution process to control the mutation scale of its chaotic mutation operator. This preserves population diversity while still allowing exploitation late in the evolution.
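
The abstract does not specify which chaotic map the mutation operator uses; assuming the common logistic-map choice, a chaotic mutation whose scale is controlled externally (the role of the "implicit knowledge") might look like this sketch:

```python
def logistic_map(z, r=4.0):
    """One iteration of the logistic map, a common chaos generator on (0, 1)."""
    return r * z * (1.0 - z)

def chaotic_mutation(x, lb, ub, z, scale):
    """Perturb hyperparameter x inside [lb, ub] with a chaotic offset.
    `scale` is the externally controlled mutation scale; shrinking it as
    evolution proceeds shifts the search from exploration to exploitation."""
    z = logistic_map(z)                            # advance the chaotic sequence
    offset = scale * (ub - lb) * (2.0 * z - 1.0)   # map z in (0,1) to a +/- range
    return min(ub, max(lb, x + offset)), z         # clamp to the search range

x_new, z_new = chaotic_mutation(1.0, 0.0, 10.0, z=0.3, scale=0.1)
```

The chaotic state `z` is threaded through successive calls so the perturbations follow the deterministic, non-repeating chaotic trajectory rather than independent random draws.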

Findings

The rationality of the above optimization selection method is demonstrated on binary classification problems, and the approach is confirmed by comparing its classification results with those of other methods.

Originality/value

This optimization selection method can effectively avoid premature convergence and leads to better computational stability and precision. It does not depend on the structure of the functions involved, and the SVC model built with the hyperparameters it selects generalizes better.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 27 July 2021

Xiaohuan Liu, Degan Zhang, Ting Zhang, Jie Zhang and Jiaxu Wang

To solve the path planning problem of intelligent driving vehicles, this paper designs a hybrid path planning algorithm based on optimized reinforcement learning (RL) and…

Abstract

Purpose

To solve the path planning problem of intelligent driving vehicles, this paper designs a hybrid path planning algorithm based on optimized reinforcement learning (RL) and improved particle swarm optimization (PSO).

Design/methodology/approach

First, the authors optimized the hyper-parameters of RL to make it converge quickly and learn more efficiently. Then the authors designed a pre-set operation for PSO to reduce the calculation of invalid particles. Finally, the authors proposed a correction variable that can be obtained from the cumulative reward of RL; this revises the fitness of the individual optimal particle and global optimal position of PSO to achieve an efficient path planning result. The authors also designed a selection parameter system to help to select the optimal path.
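
As a rough sketch of where an RL-derived correction could enter PSO, here is the standard velocity update plus a hypothetical fitness correction. The additive form and the `alpha` weight are assumptions for illustration, not taken from the paper:

```python
import random

def pso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """Standard PSO velocity update: inertia plus attraction toward the
    particle's personal best and the swarm's global best."""
    r1, r2 = rng.random(), rng.random()
    return [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
            for vi, xi, p, g in zip(v, x, pbest, gbest)]

def corrected_fitness(raw_fitness, cumulative_reward, alpha=0.1):
    """Hypothetical RL correction: bias a particle's fitness (lower is
    better) by the cumulative reward RL assigns to its path, so pbest and
    gbest selection favors paths RL found rewarding."""
    return raw_fitness - alpha * cumulative_reward
```

The corrected fitness would be computed before each pbest/gbest update, so the revised bests then steer `pso_velocity` as usual.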

Findings

Simulation analysis and experimental test results proved that the proposed algorithm has advantages in terms of practicability and efficiency. This research also foreshadows the research prospects of RL in path planning, which is also the authors’ next research direction.

Originality/value

The authors designed a pre-set operation to reduce the participation of invalid particles in the PSO calculation, designed a method to optimize hyper-parameters to improve the learning efficiency of RL, and then used the RL-trained PSO to plan paths. They also proposed an optimal path evaluation system.

Details

Engineering Computations, vol. 39 no. 3
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 17 October 2023

Derya Deliktaş and Dogan Aydin

Assembly lines are widely employed in manufacturing processes to produce final products in a flow efficiently. The simple assembly line balancing problem is a basic version of the…

Abstract

Purpose

Assembly lines are widely employed in manufacturing processes to produce final products efficiently in a flow. The simple assembly line balancing problem is a basic version of the general problem and still attracts the attention of researchers. Type-I simple assembly line balancing problems (SALBP-I) aim to minimise the number of workstations on an assembly line while keeping the cycle time constant.

Design/methodology/approach

This paper focuses on solving multi-objective SALBP-I problems by utilising an artificial bee colony based-hyper heuristic (ABC-HH) algorithm. The algorithm optimises the efficiency and idleness percentage of the assembly line and concurrently minimises the number of workstations. The proposed ABC-HH algorithm is improved by adding new modifications to each phase of the artificial bee colony framework. Parameter control and calibration are also achieved using the irace method. The proposed model has undergone testing on benchmark problems, and the results obtained have been compared with state-of-the-art algorithms.
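
The three objectives named above (workstation count, line efficiency, idleness percentage) follow directly from a task-to-station assignment. A minimal sketch, assuming the usual SALBP definitions:

```python
def line_metrics(task_times, assignment, cycle_time):
    """Evaluate a candidate balance for SALBP-I.
    `assignment[i]` is the workstation of task i. Returns the number of
    workstations m, line efficiency = sum(t) / (m * c), and the idleness
    percentage 1 - efficiency."""
    stations = {}
    for task, station in enumerate(assignment):
        stations[station] = stations.get(station, 0.0) + task_times[task]
    # Feasibility: no station load may exceed the fixed cycle time.
    assert all(load <= cycle_time for load in stations.values()), "cycle time violated"
    m = len(stations)
    efficiency = sum(task_times) / (m * cycle_time)
    return m, efficiency, 1.0 - efficiency

# Five tasks packed into two stations under a cycle time of 10.
m, eff, idle = line_metrics([4, 3, 3, 5, 5], [0, 0, 0, 1, 1], cycle_time=10)
```

A hyper-heuristic such as ABC-HH would search over assignments like the one above (subject also to precedence constraints, omitted here) and rank them by these metrics.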

Findings

The experimental results of the computational study on the benchmark dataset unequivocally establish the superior performance of the ABC-HH algorithm across 61 problem instances, outperforming the state-of-the-art approach.

Originality/value

This research proposes the ABC-HH algorithm with local search to solve the SALBP-I problems more efficiently.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 29 March 2022

Mushi Li, Zhao Liu, Li Huang and Ping Zhu

Compared with the low-fidelity model, the high-fidelity model has both the advantage of high accuracy, and the disadvantage of low efficiency and high cost. A series of…

Abstract

Purpose

Compared with a low-fidelity model, a high-fidelity model has the advantage of high accuracy but the disadvantages of low efficiency and high cost. A series of multi-fidelity surrogate modelling methods were developed to exploit the respective advantages of both low-fidelity and high-fidelity models. However, most multi-fidelity surrogate modelling methods are sensitive to the amount of high-fidelity data. The purpose of this paper is to propose a multi-fidelity surrogate modelling method whose accuracy depends less on the amount of high-fidelity data.

Design/methodology/approach

A multi-fidelity surrogate modelling method based on neural networks is proposed, which uses transfer learning ideas to exploit the correlation between datasets of different fidelity. A low-fidelity neural network is built from a sufficient amount of low-fidelity data and then fine-tuned with a very small amount of high-fidelity (HF) data to obtain a multi-fidelity neural network based on this correlation.
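
The transfer idea can be miniaturised: pre-fit a cheap model on plentiful low-fidelity data, then correct it with a handful of high-fidelity points. The sketch below uses a linear model instead of a neural network purely for illustration; the data and the "freeze the slope, retrain the offset" split are assumptions standing in for freezing early layers and fine-tuning the last one:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Low-fidelity data: cheap but biased model of a true response y = 2x + 1.
lf_x = [0.0, 1.0, 2.0, 3.0, 4.0]
lf_y = [2.0 * x for x in lf_x]        # the LF model misses the +1 offset
a, b = fit_linear(lf_x, lf_y)         # "pre-trained" trend: slope 2, offset 0

# Fine-tune: keep the LF slope (the transferred correlation) and correct
# the offset with a single expensive high-fidelity observation.
hf_x, hf_y = 2.0, 5.0                 # one HF sample of y = 2x + 1
b = hf_y - a * hf_x

def mf_predict(x):
    return a * x + b
```

With one HF point the corrected model recovers the true response exactly here; the paper's point is the analogous robustness of the fine-tuned network when HF samples are scarce.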

Findings

Numerical examples prove the validity of the proposed method, and the influence of neural network hyper-parameters on the prediction accuracy of the multi-fidelity model is discussed.

Originality/value

In comparison with existing methods, a case study shows that when the number of high-fidelity sample points is very small, the R-squared of the proposed model exceeds that of the existing model by more than 0.3, indicating that the proposed method can reduce the cost of complex engineering design problems.

Details

Engineering Computations, vol. 39 no. 6
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 10 August 2010

Shamsuddin Ahmed

The proposed algorithm successfully optimizes complex error functions, which are difficult to differentiate, ill conditioned or discontinuous. It is a benchmark to identify…

Abstract

Purpose

The proposed algorithm successfully optimizes complex error functions, which are difficult to differentiate, ill conditioned or discontinuous. It is a benchmark to identify initial solutions in artificial neural network (ANN) training.

Design/methodology/approach

A multi‐directional ANN training algorithm that needs no derivative information is introduced as a constrained one-dimensional problem. A directional search vector examines the ANN error function in weight parameter space, moving in all possible directions to find the minimum function value. The network weights are increased or decreased according to the shape of the error function hyper-surface so that the search vector follows descent directions, and the minimum function value is thus determined. To accelerate convergence, a momentum search is designed that avoids overshooting the local minimum.
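
A minimal sketch of a derivative-free directional search of this kind on a toy error surface; the momentum term is omitted and simple step shrinking is assumed, so this illustrates the search pattern rather than the paper's full algorithm:

```python
def coordinate_search(f, x, step=0.5, shrink=0.5, tol=1e-6, max_iter=200):
    """Minimise f using function evaluations only: probe each weight
    up and down, accept any descent move, and shrink the step when no
    direction improves (a flat neighborhood)."""
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):      # increase or decrease weight i
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:                # descent direction found
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink                 # refine near the minimum
            if step < tol:
                break
    return x, fx

# A quadratic "error surface" with minimum at (1, 0), from a poor start:
x_min, f_min = coordinate_search(lambda w: (w[0] - 1.0) ** 2 + w[1] ** 2,
                                 [4.0, -3.0])
```

Because only function values are compared, the search works unchanged on discontinuous or ill-conditioned error functions, which is the property the abstract emphasises.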

Findings

The training algorithm is insensitive to the initial starting weights, in contrast with gradient-based methods, so it can locate a relative local minimum from anywhere on the error surface; this is an important property of the training method. The algorithm is suitable for error functions that are discontinuous or ill conditioned, or whose derivative is not readily available. It improves over standard back-propagation in convergence and avoids premature termination near a pseudo local minimum.

Research limitations/implications

Classification problems are handled efficiently by this method, but complex time series in some instances slow convergence because of the complexity of the error surface. Different ANN structures could be investigated further to assess the algorithm's performance.

Practical implications

The search scheme moves along the valleys and ridges of the error function to trace the minimum's neighborhood, evaluating only the error function itself. As soon as the algorithm detects a flat region of the error surface, care is taken to avoid slow convergence.

Originality/value

The algorithm is efficient due to incorporation of three important methodologies. The first mechanism is the momentum search. The second methodology is the implementation of directional search vector in coordinate directions. The third procedure is the one‐dimensional search in constrained region to identify the self‐adaptive learning rates, to improve convergence.

Details

Kybernetes, vol. 39 no. 7
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 11 November 2021

Sandeep Kumar Hegde and Monica R. Mundada

Chronic diseases are considered as one of the serious concerns and threats to public health across the globe. Diseases such as chronic diabetes mellitus (CDM), cardio…

Abstract

Purpose

Chronic diseases are among the most serious threats to public health across the globe. Chronic diabetes mellitus (CDM), cardiovascular disease (CVD) and chronic kidney disease (CKD) are major chronic diseases responsible for millions of deaths, and each is a risk factor for the other two, so noteworthy attention is being paid to reducing their risk. A gigantic amount of medical data is generated in digital form by smart healthcare appliances in the current era. Although numerous machine learning (ML) algorithms have been proposed for the early prediction of chronic diseases, these models are neither generalized nor adaptive when imposed on new disease datasets, and they must process a huge amount of disease data iteratively until the model converges. This limitation may cause ML models to fit poorly and produce imprecise results, and a single algorithm may not yield accurate results. Nonetheless, an ensemble of classifiers built from multiple models, working on a voting principle, has been successfully applied to many classification tasks. The purpose of this paper is the early prediction of chronic diseases using a hybrid generative regression based deep intelligence network (HGRDIN) model.

Design/methodology/approach

In the proposed paper, a generative regression (GR) model is used in combination with a deep neural network (DNN) for the early prediction of chronic disease. The GR model obtains prior knowledge about the labelled data by analyzing the correlation between features and class labels, so the weight assignment of the DNN is influenced by the relationships between attributes rather than being random. The knowledge obtained in this way is passed as input to the DNN for further prediction. Since the inference about the input data instances is drawn at the DNN through the GR model, the model is named the hybrid generative regression-based deep intelligence network (HGRDIN).

Findings

The credibility of the implemented approach is rigorously validated using parameters such as accuracy, precision, recall, F-score and area under the curve (AUC). During the training phase, the proposed algorithm is regularized using the elastic net technique and tuned over hyper-parameters such as momentum and learning rate to minimize the misprediction rate. The experimental results illustrate that the proposed approach predicted chronic disease with minimal error, avoiding possible overfitting and local minima problems, and the results are compared with various traditional approaches.

Research limitations/implications

Diagnostic data are usually multi-dimensional, and ML performance degrades because of overfitting and the curse of dimensionality. The experiments achieved an average accuracy of 95%; further analysis could improve predictive accuracy by overcoming the curse of dimensionality.

Practical implications

The proposed ML model can mimic the behavior of a doctor's reasoning. Such algorithms have the capability to take over routine clinical tasks, and the accurate results they produce can free physicians from mundane care and practices so that they can focus on complex issues.

Social implications

Utilizing the proposed predictive model at the decision-making level for the early prediction of disease is a promising change for the healthcare sector, and the global burden of chronic disease could be reduced at an exceptional level through such approaches.

Originality/value

In the proposed HGRDIN model, a transfer learning approach is used: the knowledge acquired through the GR process is applied to the DNN, which identifies the possible relationships between dependent and independent feature variables by mapping the chronic data instances to their corresponding target class before passing them as input to the DNN. The experiments illustrated that the proposed approach obtained superior performance on the various validation parameters compared with existing conventional techniques.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 14 August 2017

Ming-min Liu, L.Z. Li and Jun Zhang

The purpose of this paper is to discuss a data interpolation method for curved surfaces from the point of view of dimension reduction and manifold learning.

Abstract

Purpose

The purpose of this paper is to discuss a data interpolation method for curved surfaces from the point of view of dimension reduction and manifold learning.

Design/methodology/approach

Instead of transmitting data of curved surfaces in 3D space directly, the method transmits data by unfolding the 3D curved surfaces into 2D planes using manifold learning algorithms. The similarity between surface unfolding and manifold learning is discussed, and the ability of several manifold learning algorithms to project (unfold) curved surfaces is investigated. The algorithms' efficiency and their influence on the accuracy of data transmission are investigated in three examples.
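
For a developable surface the unfolding has a closed form. The sketch below unrolls a cylinder analytically; this is the kind of flattening that algorithms such as LLE or LTSA must recover from sampled data alone, without access to the surface equation:

```python
import math

def unroll_cylinder(points, radius):
    """Unfold points lying on the cylinder x^2 + y^2 = radius^2 into a 2D
    plane: the arc length radius*theta replaces the angular position, and
    the height z is kept. Distances along the surface are preserved, so
    2D interpolation on the unrolled plane matches the 3D surface."""
    return [(radius * math.atan2(y, x), z) for x, y, z in points]

# Three points a quarter-turn apart on a unit cylinder.
flat = unroll_cylinder([(1.0, 0.0, 0.0), (0.0, 1.0, 0.5), (-1.0, 0.0, 1.0)],
                       radius=1.0)
```

Interpolating in the flat `(arc length, z)` coordinates and mapping back avoids the distortion of interpolating the 3D coordinates directly, which is the motivation the abstract gives for the manifold learning route.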

Findings

It is found that the data interpolations using manifold learning algorithms LLE, HLLE and LTSA are efficient and accurate.

Originality/value

The method can improve the accuracies of coupling data interpolation and fluid-structure interaction simulation involving curved surfaces.

Details

Multidiscipline Modeling in Materials and Structures, vol. 13 no. 2
Type: Research Article
ISSN: 1573-6105

Keywords
