Search results

1–10 of over 31,000
Article
Publication date: 1 March 2021

Hardi M. Mohammed, Zrar Kh. Abdul, Tarik A. Rashid, Abeer Alsadoon and Nebojsa Bacanin

Abstract

Purpose

This paper studies meta-heuristic algorithms, focusing on one of the most common, grey wolf optimization (GWO). The key aim is to address the limitations of the grey wolves' search-and-attack process.

Design/methodology/approach

Researchers have increasingly developed meta-heuristic algorithms and now use them extensively in business, science and engineering. In this paper, the K-means clustering algorithm is used to enhance the performance of the original GWO; the new algorithm is called K-means clustering grey wolf optimization (KMGWO).
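
As a sketch of the underlying mechanics (not the authors' implementation), the base GWO position update that KMGWO builds on can be written as follows. The K-means grouping step of KMGWO is omitted, and the sphere test function and all parameter values are illustrative assumptions:

```python
import random

def gwo_minimize(f, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Base grey wolf optimizer: each wolf moves toward the three best
    solutions found so far (alpha, beta, delta)."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]      # alpha, beta, delta
        a = 2.0 - 2.0 * t / n_iter                # decreases from 2 to 0
        for w in wolves:
            for d in range(dim):
                pulls = []
                for leader in leaders:
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    pulls.append(leader[d] - A * abs(C * leader[d] - w[d]))
                # new position: average of the pulls toward the leaders
                w[d] = min(hi, max(lo, sum(pulls) / 3))
    return min(wolves, key=f)

# minimize the sphere function in 5 dimensions
best = gwo_minimize(lambda x: sum(v * v for v in x), 5, (-10.0, 10.0))
```

KMGWO would additionally cluster the wolves with K-means and restrict parts of the update to cluster leaders; that step is not shown here.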

Findings

To evaluate its performance, KMGWO was applied to the CEC2019 benchmark test functions; the results illustrate its efficiency compared with GWO.

Originality/value

Results show that KMGWO is superior to GWO. KMGWO was also compared with cat swarm optimization (CSO), whale optimization algorithm-bat algorithm (WOA-BAT), WOA and GWO, and achieved the first rank in terms of performance. In addition, KMGWO was used to solve a classical engineering problem, where it was again superior.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284

Keywords

Article
Publication date: 4 March 2021

Ravi Tej D, Sri Kavya Ch K and Sarat K. Kotamraju

Abstract

Purpose

The purpose of this paper is to improve energy efficiency and further reduce the side lobe level of an antenna array; the algorithm proposed is the fireworks algorithm. Inspired by the emergent swarm behaviour of fireworks, a novel swarm intelligence algorithm called the fireworks algorithm (FA) is proposed for function optimization. The FA is introduced and implemented by simulating the explosion process of fireworks. In the FA, two explosion (search) processes are employed, and mechanisms for keeping the diversity of sparks are also well designed. To validate the performance of the proposed FA, comparison experiments were conducted on nine benchmark test functions among the FA, the standard PSO (SPSO) and the clonal PSO (CPSO).

Design/methodology/approach

Antenna arrays are used to improve the capacity and spectral efficiency of wireless communication systems. The latest communication systems use antenna array technology to improve spectral efficiency and fill rate, so that the energy efficiency of the communication system can be enhanced. One of the most important properties of an antenna array is its beam pattern: a directional main lobe with a low side lobe level (SLL) reduces interference and enhances the quality of communication. The classical methods for reducing the side lobe level are the differential evolution algorithm and the PSO algorithm. In this paper, inspired by the emergent swarm behaviour of fireworks, a novel swarm intelligence algorithm called the fireworks algorithm (FA) is proposed for function optimization. The FA is introduced and implemented by simulating the explosion process of fireworks. In the FA, two explosion (search) processes are employed, and mechanisms for keeping the diversity of sparks are also well designed. To validate the performance of the proposed FA, comparison experiments were conducted on nine benchmark test functions among the FA, the standard PSO (SPSO) and the clonal PSO (CPSO). It is demonstrated that the FA clearly outperforms the SPSO and the CPSO in both optimization accuracy and convergence speed. The results show that the side lobe level is reduced to −34.78 dB and the fill rate is increased to 78.53.
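
As an illustrative sketch of the FA's explosion mechanism (not the paper's antenna-specific implementation), the following assumes a simple minimization setting; the sphere objective and all parameter values are assumptions:

```python
import random

def fireworks_step(pop, f, lo, hi, total_sparks=30, a_max=5.0, rng=random):
    """One explosion of a simplified fireworks algorithm: fireworks with
    better fitness emit more sparks within a smaller amplitude."""
    fits = [f(x) for x in pop]
    worst, best = max(fits), min(fits)
    eps = 1e-12
    sum_to_worst = sum(worst - v + eps for v in fits)
    sum_to_best = sum(v - best + eps for v in fits)
    sparks = []
    for x, fit in zip(pop, fits):
        # spark count grows as fitness approaches the best value
        n = max(1, round(total_sparks * (worst - fit + eps) / sum_to_worst))
        # explosion amplitude shrinks for good fireworks (local search)
        amp = a_max * (fit - best + eps) / sum_to_best
        for _ in range(n):
            sparks.append([min(hi, max(lo, xi + rng.uniform(-amp, amp)))
                           for xi in x])
    # elitist selection over parents and sparks
    return sorted(pop + sparks, key=f)[:len(pop)]

random.seed(0)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(5)]
for _ in range(50):
    pop = fireworks_step(pop, sphere, -10.0, 10.0)
```

In the antenna application, the objective would instead score a candidate excitation vector by its simulated side lobe level.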

Findings

Simulations including a 16-element linear antenna array (LAA) are conducted to verify the optimization performance of the SLL reduction. Simulation results show that the SLLs can be effectively reduced by the FA. Moreover, compared with the benchmark algorithms, the fireworks algorithm performs better in terms of accuracy, convergence rate and stability.

Research limitations/implications

Regardless of the algorithm used, the radiation pattern is prone to noise in one way or another, so even with optimization the radiation cannot be expected to be ideal. Power dissipation and electromagnetic interference are bound to occur, but optimization algorithms reduce them to the extent possible.

Practical implications

A 16-element linear antenna array model is available in recent versions of MATLAB.

Social implications

With the latest technologies and emerging developments in the field of communication, and the exponential growth in users, the capacity of communication systems faces bottlenecks. Antenna arrays are used to improve the capacity and spectral efficiency of wireless communication systems: by improving spectral efficiency and fill rate, the energy efficiency of the communication system can be enhanced.

Originality/value

By using the FA, the fill rate is increased to 78.53 and the side lobe level is reduced to −35 dB when compared with the benchmark algorithms.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 1 March 2021

Jie Lin and Minghua Wei

Abstract

Purpose

With the rapid development and stable operation of lithium-ion batteries used in uninterruptible power supplies (UPS), the prediction of remaining useful life (RUL) for lithium-ion batteries plays an important role, and ever more researchers are paying attention to the reliability and safety of lithium-ion batteries based on RUL prediction. The purpose of this paper is to predict the life of a lithium-ion battery based on an auto-regression and particle filter method.

Design/methodology/approach

In this paper, a simple and effective RUL prediction method based on the combination of an auto-regression (AR) time-series model and a particle filter (PF) is proposed for lithium-ion batteries. The proposed method deforms the double-exponential empirical degradation model and reduces its number of parameters to improve training efficiency. By using the PF algorithm to track the decline of lithium-ion battery capacity and modify the observations of the state-space equations, the proposed PF + AR model fully considers the batteries' degradation process and achieves a more accurate prediction of RUL.
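
A minimal sketch of the particle-filter half of such a scheme, assuming a one-exponential capacity model c_k = a·exp(b·k) rather than the paper's deformed double-exponential model, and omitting the AR correction of observations; the synthetic data and all noise parameters are illustrative:

```python
import math
import random

def particle_filter_track(capacities, n_particles=500, sigma=0.01, seed=1):
    """Sketch: track the parameters (a, b) of a one-exponential capacity
    model c_k = a * exp(b * k) with a bootstrap particle filter."""
    rng = random.Random(seed)
    particles = [(1.0 + rng.gauss(0, 0.05), -0.01 + rng.gauss(0, 0.005))
                 for _ in range(n_particles)]
    for k, obs in enumerate(capacities):
        # weight each particle by how well it predicts the observation
        weights = [math.exp(-((obs - a * math.exp(b * k)) ** 2)
                            / (2 * sigma ** 2)) for a, b in particles]
        total = sum(weights)
        if total == 0.0:             # degenerate case: keep all particles
            weights, total = [1.0] * n_particles, float(n_particles)
        weights = [wgt / total for wgt in weights]
        # multinomial resampling plus a little jitter (roughening)
        particles = [(a + rng.gauss(0, 0.002), b + rng.gauss(0, 0.0002))
                     for a, b in rng.choices(particles, weights=weights,
                                             k=n_particles)]
    a_est = sum(a for a, _ in particles) / n_particles
    b_est = sum(b for _, b in particles) / n_particles
    return a_est, b_est

# synthetic capacity fade: starts at 1.0 and decays 1% per cycle
data = [math.exp(-0.01 * k) for k in range(50)]
a, b = particle_filter_track(data)
```

The RUL estimate then follows by extrapolating the fitted model forward to a failure threshold; in the paper's method the AR model additionally corrects the observations fed to the filter.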

Findings

Experiments on the CALCE dataset compare the conventional PF algorithm and the AR + PF algorithm on both the original exponential empirical degradation model and the deformed double-exponential one. Experimental results show that the proposed PF + AR method improves the prediction accuracy, decreases the error rate and reduces the uncertainty range of the RUL, and is especially suitable for the deformed double-exponential empirical degradation model.

Originality/value

When running UPS devices based on lithium-ion batteries, the proposed AR + PF combination algorithm quickly, accurately and robustly predicts the RUL of the batteries, which has strong application value for the stable operation of laboratories and other scenarios.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 18 February 2021

KS Resma, GS Sharvani and Ramasubbareddy Somula

Abstract

Purpose

The current industrial scenario is largely dependent on cloud computing paradigms. On-demand services provided by a cloud data centre are paid for per use, so it is very important to make maximum use of the allocated resources. Resource utilization is highly dependent on how resources are allocated to incoming requests: requests are allocated to the physical machines present in the data centre, and tasks need to be allocated in such a way that no physical machine is under-utilized or overloaded. To ensure this, optimal load balancing is very important.

Design/methodology/approach

The paper proposes an algorithm which makes use of fitness functions and duopoly game theory to allocate tasks to the physical machines that can handle their resource requirements. The major focus of the proposed work is to optimize load balancing in a data centre: when the allocation is optimized, no physical machine is either overloaded or under-utilized, resulting in efficient utilization of resources.
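
A toy sketch of the fitness-driven allocation idea (the duopoly game-theoretic component is not modeled here); the host fields, capacities and the balance target of 0.5 are all illustrative assumptions:

```python
def fitness(host, task_cpu, task_mem):
    """Illustrative fitness: prefer the host whose utilization after
    placement stays closest to a balanced target (0.5 here)."""
    cpu_u = (host["cpu_used"] + task_cpu) / host["cpu_cap"]
    mem_u = (host["mem_used"] + task_mem) / host["mem_cap"]
    if cpu_u > 1.0 or mem_u > 1.0:        # host cannot handle the task
        return float("inf")
    return abs(cpu_u - 0.5) + abs(mem_u - 0.5)

def allocate(hosts, task_cpu, task_mem):
    """Place the task on the feasible host with the best fitness value."""
    best = min(hosts, key=lambda h: fitness(h, task_cpu, task_mem))
    if fitness(best, task_cpu, task_mem) == float("inf"):
        return None                        # no host can take the task
    best["cpu_used"] += task_cpu
    best["mem_used"] += task_mem
    return best

hosts = [
    {"name": "pm1", "cpu_cap": 16, "cpu_used": 12,
     "mem_cap": 64, "mem_used": 40},
    {"name": "pm2", "cpu_cap": 16, "cpu_used": 2,
     "mem_cap": 64, "mem_used": 8},
]
chosen = allocate(hosts, task_cpu=4, task_mem=16)
```

Here the nearly idle pm2 wins the task, keeping both machines away from overload and under-utilization; the paper's duopoly formulation instead lets machines compete strategically for requests.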

Findings

The performance of the proposed algorithm is compared with existing load balancing algorithms such as round-robin (RR), ant colony optimization (ACO) and artificial bee colony (ABC) with respect to the selected parameters: response time, virtual machine migrations, host shutdowns and energy consumption. All four parameters gave a positive result when the algorithm was simulated.

Originality/value

The contribution of this paper is to the domain of cloud load balancing. The paper proposes a novel approach to optimize the cloud load balancing process. The results obtained show that response time, virtual machine migrations, host shutdowns and energy consumption are reduced in comparison with several existing algorithms selected for the study. The proposed algorithm, based on the duopoly function and fitness function, delivers optimized performance compared with the four algorithms analysed.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 1 September 2019

Yu Zhou

Abstract

To plan urban traffic paths using the ant colony algorithm, the composition and functional division of the mobile robot are analysed. The travelling salesman problem (TSP) is used to understand the traditional ant colony algorithm in depth, and on this basis an improvement scheme for the traditional algorithm is analysed. The results showed that the artificial potential field method and the A* algorithm improved the performance of the ant colony algorithm. At the initial stage of the path search, the blindness and randomness of the ant colony algorithm caused by insufficient pheromone concentration on each path were resolved, and local optimal paths are avoided as the algorithm iterates. Therefore, the improved ant colony algorithm is superior to the traditional ant colony algorithm.
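
As a sketch of the traditional ant colony algorithm on the TSP that the study starts from (not the improved variant with the artificial potential field or A*), with illustrative parameters:

```python
import random

def ant_tour(dist, pher, alpha=1.0, beta=2.0, rng=random):
    """One ant builds a tour: the next city is chosen with probability
    proportional to pheromone^alpha * (1/distance)^beta."""
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        i = tour[-1]
        cities = list(unvisited)
        weights = [(pher[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                   for j in cities]
        nxt = rng.choices(cities, weights=weights, k=1)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

def aco_tsp(dist, n_ants=10, n_iter=50, rho=0.5, q=1.0, seed=0):
    """Traditional ACO: ants build tours, pheromone evaporates, and
    shorter tours deposit more pheromone on their edges."""
    rng = random.Random(seed)
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = [ant_tour(dist, pher, rng=rng) for _ in range(n_ants)]
        for row in pher:
            for j in range(n):
                row[j] *= (1.0 - rho)          # evaporation
        for t in tours:
            l = tour_length(dist, t)
            if l < best_len:
                best, best_len = t, l
            for k in range(n):
                a, b = t[k], t[(k + 1) % n]
                pher[a][b] += q / l            # deposit on used edges
                pher[b][a] += q / l
    return best, best_len

# four cities on a unit square (diagonals cost 2): optimal tour length 4
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
tour, length = aco_tsp(dist)
```

The uniform initial pheromone (all 1.0) is exactly the "insufficient pheromone concentration" the abstract refers to: early ants choose nearly blindly until deposits accumulate.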

Details

Open House International, vol. 44 no. 3
Type: Research Article
ISSN: 0168-2601

Keywords

Article
Publication date: 3 February 2021

Önder Özgür and Uğur Akkoç

Abstract

Purpose

The main purpose of this study is to forecast inflation rates in the case of the Turkish economy with shrinkage methods of machine learning algorithms.

Design/methodology/approach

This paper compares the predictive ability of a set of machine learning techniques (ridge, lasso, adaptive lasso and elastic net) and a group of benchmark specifications (autoregressive integrated moving average (ARIMA) and multivariate vector autoregression (VAR) models) on an extensive dataset.
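
To show why shrinkage methods double as variable selectors, here is generic lasso coordinate descent (not the authors' pipeline); the toy data and penalty value are assumptions:

```python
def soft_threshold(z, t):
    """Lasso's shrinkage operator: pulls coefficients toward zero and
    sets small ones exactly to zero."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1.
    Coefficients of irrelevant features are driven exactly to zero."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            denom = sum(X[i][j] ** 2 for i in range(n)) or 1.0
            beta[j] = soft_threshold(rho, lam) / denom
    return beta

# y depends only on the first feature; the second is pure noise
X = [[1, 0.1], [2, -0.2], [3, 0.15], [4, -0.05]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=0.5)
```

The noise feature's coefficient lands exactly at zero while the true one survives with mild shrinkage; this is the mechanism by which lasso-type methods "choose" the relevant inflation predictors.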

Findings

Results suggest that shrinkage methods perform better for variable selection. Lasso and elastic net algorithms also outperform conventional econometric methods in the case of Turkish inflation. These algorithms select energy production variables, a construction-sector measure, the real effective exchange rate and money market indicators as the most relevant variables for inflation forecasting.

Originality/value

The Turkish economy, typical of an emerging country, has experienced a double-digit, highly volatile inflation regime starting in 2017. This study contributes to the literature by introducing machine learning techniques to forecast inflation in the Turkish economy. It also compares the relative performance of machine learning techniques and different conventional methods for predicting Turkish inflation, and provides the empirical methodology offering the best predictive performance among its counterparts.

Details

International Journal of Emerging Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-8809

Keywords

Article
Publication date: 22 January 2021

Fatemeh Daneshamooz, Parviz Fattahi and Seyed Mohammad Hassan Hosseini

Abstract

Purpose

Two-stage production systems comprising a processing shop and an assembly stage are widely used in various manufacturing industries. These two stages are usually studied independently, which may not lead to ideal results. This paper deals with a two-stage production system comprising a job shop and an assembly stage.

Design/methodology/approach

Some exact methods based on the branch and bound (B&B) approach are proposed to minimize the total completion time of products. As B&B approaches are usually time-consuming, three efficient lower bounds are developed for the problem, and variable neighborhood search is used to provide a proper upper bound for the solution in each branch. In addition, two strategies are applied to create branches and search new nodes: best-first search and depth-first search (DFS). Another feature of the proposed algorithms is that the search space is reduced by releasing the precedence constraint. In this case, the problem becomes equivalent to a parallel machine scheduling problem, and the redundant branches that do not respect the precedence constraint are removed. Therefore, the number of nodes and the computational time are significantly reduced without eliminating the optimal solution.
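
To make the relaxation concrete: once the precedence constraint is released, the subproblem is parallel-machine scheduling for total completion time, which a DFS-based B&B can solve as sketched below. This is a generic illustration with a deliberately weak (but valid) lower bound, not the authors' three bounds:

```python
def bnb_schedule(jobs, m):
    """DFS branch and bound for the relaxed parallel-machine subproblem:
    assign jobs to m machines, minimizing total completion time."""
    jobs = sorted(jobs)            # SPT order: optimal within a machine
    n = len(jobs)
    best = [float("inf")]

    def lower_bound(k, loads, cost):
        # each remaining job finishes no earlier than the lightest
        # current load plus its own processing time (weak, valid bound)
        m0 = min(loads)
        return cost + sum(m0 + p for p in jobs[k:])

    def dfs(k, loads, cost):
        if lower_bound(k, loads, cost) >= best[0]:
            return                 # prune: cannot beat the incumbent
        if k == n:
            best[0] = cost
            return
        for i in range(m):
            loads[i] += jobs[k]
            dfs(k + 1, loads, cost + loads[i])
            loads[i] -= jobs[k]

    dfs(0, [0] * m, 0)
    return best[0]

# four jobs on two machines: optimum is 13 (e.g., SPT split {1,3} / {2,4})
total = bnb_schedule([3, 1, 2, 4], m=2)
```

Tighter bounds, like those developed in the paper, prune far more of the tree; the DFS order shown here is the strategy the abstract reports as fastest in CPU time.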

Findings

Some numerical examples are used to evaluate the performance of the proposed methods. Comparison with a mixed-integer linear programming model validates their accuracy and efficiency. In addition, computational results indicate the superiority of the DFS strategy with regard to CPU time.

Originality/value

Studies of scheduling problems for two-stage production systems comprising a job shop followed by an assembly stage have traditionally presented approximate methods and metaheuristic algorithms. This is the first study to introduce exact methods based on the B&B approach.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 1 February 2021

Junying Chen, Zhanshe Guo, Fuqiang Zhou, Jiangwen Wan and Donghao Wang

Abstract

Purpose

Given the limited energy of wireless sensor networks (WSNs), energy-efficient data-gathering algorithms are required. This paper proposes a compressive data-gathering algorithm based on double sparse structure dictionary learning (DSSDL). The purpose of this paper is to reduce the energy consumption of WSNs.

Design/methodology/approach

Historical data are used to construct a sparse representation base. In the dictionary-learning stage, the sparse representation matrix is decomposed into the product of two sparse matrices. Then, in the dictionary update stage, the sparse representation matrix is orthogonalized and normalized. The resulting double sparse structure dictionary is applied to compressive data gathering in WSNs.
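
A structural sketch of the "double sparse" idea (the dictionary is a product of two sparse matrices) and of compressive gathering. The matrices below are toy assumptions, and the reconstruction step at the sink (e.g., via a pursuit algorithm) is omitted:

```python
import random

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# double sparse structure: dictionary D = base @ atoms, both sparse
base = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]]            # fixed sparse base (identity here)
atoms = [[1, 0, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 1, 1]]           # learned sparse coefficient matrix
D = matmul(base, atoms)

# a sensor signal that is 1-sparse in D: x = D @ s with s = [0, 2, 0, 0]
s = [[0], [2], [0], [0]]
x = matmul(D, s)

# compressive gathering: nodes forward only random projections of x
random.seed(0)
Phi = [[random.choice((-1, 1)) for _ in range(4)] for _ in range(2)]
y = matmul(Phi, x)               # 2 measurements instead of 4 readings
```

The energy saving comes from transmitting the short vector y rather than every reading; a dictionary with better sparse representation ability lets the sink recover x from fewer measurements.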

Findings

The dictionary obtained by the proposed algorithm has better sparse representation ability. The experimental results show that the sparse representation error can be reduced by at least 3.6% compared with other dictionaries. In addition, the better sparse representation ability allows the WSNs to use fewer measurements for the same data-gathering accuracy, which means more energy saving. According to the simulation results, the proposed algorithm can reduce energy consumption by at least 2.7% compared with other compressive data-gathering methods at the same data-gathering accuracy.

Originality/value

In this paper, the double sparse structure dictionary is introduced into the compressive data-gathering algorithm in WSNs. The experimental results indicate that the proposed algorithm has good performance on energy consumption and sparse representation.

Details

Sensor Review, vol. 41 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 11 February 2021

Krithiga R. and Ilavarasan E.

Abstract

Purpose

The purpose of this paper is to enhance the performance of spammer identification in online social networks. Hyperparameter tuning has long been used by researchers to enhance the performance of classifiers. The AdaBoost algorithm belongs to the class of ensemble classifiers and is widely applied in binary classification problems. A single algorithm may not yield accurate results, but an ensemble of classifiers built from multiple models has been successfully applied to many classification tasks. The search space of possible parametric values is vast, so enumerating all combinations is not feasible. Hence, a hybrid modified whale optimization algorithm for spam profile detection (MWOA-SPD) model is proposed to find optimal values for these parameters.

Design/methodology/approach

In this work, the hyperparameters of AdaBoost are fine-tuned for identifying spammers in social networks. The AdaBoost algorithm linearly combines several weak classifiers to produce a stronger one. The proposed MWOA-SPD model hybridizes the whale optimization algorithm and the salp swarm algorithm.
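
Since the paper tunes AdaBoost's hyperparameters, a compact sketch of what AdaBoost itself does (linearly combining weak threshold stumps) may help. The 1-D toy data are an assumption, and in the MWOA-SPD setting values such as the number of rounds would be chosen by the optimizer rather than fixed:

```python
import math

def adaboost(xs, ys, n_rounds=5):
    """Discrete AdaBoost with 1-D threshold stumps: each round fits the
    stump minimizing weighted error, then re-weights the samples."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                                  # (alpha, thr, sign)
    for _ in range(n_rounds):
        best = None
        for thr in xs:
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if sign * (1 if x > thr else -1) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # classifier weight
        ensemble.append((alpha, thr, sign))
        # increase the weights of misclassified samples
        w = [wi * math.exp(-alpha * y * sign * (1 if x > thr else -1))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]

    def predict(x):
        score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
        return 1 if score > 0 else -1
    return predict

# toy "spam score" feature: label +1 (spammer) above 0.5, -1 otherwise
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
clf = adaboost(xs, ys)
```

Real spammer detection would use many lightweight profile features rather than a single score; the ensemble structure is unchanged.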

Findings

The technique is applied to a manually constructed Twitter data set. It is compared with the existing optimization and hyperparameter tuning methods. The results indicate that the proposed method outperforms the existing techniques in terms of accuracy and computational efficiency.

Originality/value

The proposed method reduces the server load by excluding complex features and retaining only lightweight ones. It aids in identifying spammers at an earlier stage, thereby offering users a safer environment.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 11 February 2021

Meeta Sharma and Hardayal Singh Shekhawat

Abstract

Purpose

The purpose of this study is to provide novel portfolio asset prediction by means of a modified deep learning and hybrid meta-heuristic concept. In the past few years, portfolio optimization has emerged as a demanding and fascinating multi-objective problem in the area of computational finance, attracting growing attention from fund management companies, researchers and individual investors. The primary issues in portfolio selection are the choice of a subset of assets and the optimal weight of each chosen asset. The composition is chosen in such a way that the total profit or return of the portfolio is improved while the risk is reduced at the same time.

Design/methodology/approach

This paper provides novel portfolio asset prediction using a modified deep learning concept. To implement this framework, a dataset containing the portfolio details of different companies over a certain duration is selected. The proposed model involves two main phases: one predicts the future state or profit of every company, and the other selects the company giving the maximum profit in the future. In the first phase, a deep learning model called a recurrent neural network (RNN) predicts the future condition of all companies in the data set and thus creates the data library. Once the forecasting is done, the companies for the portfolio are selected using a hybrid optimization algorithm integrating the Jaya algorithm (JA) and spotted hyena optimization (SHO), termed Jaya-based spotted hyena optimization (J-SHO). The optimization model seeks the optimal solution, including which companies to select, while the optimized RNN predicts the future return when using those companies. The objective of the J-SHO-based RNN is to maximize prediction accuracy, and that of J-SHO-based portfolio asset selection is to maximize profit. Extensive experiments on benchmark datasets from real-world stock markets with diverse assets in various time periods show that the developed model outperforms other state-of-the-art strategies, proving its efficiency in portfolio optimization.
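
The J-SHO hybrid combines two population updates; the Jaya half, at least, is simple enough to sketch generically. This is textbook Jaya on a toy objective, not the authors' J-SHO or the RNN component, and all parameter values are assumptions:

```python
import random

def jaya_minimize(f, dim, bounds, pop_size=20, n_iter=200, seed=3):
    """Plain Jaya update: each candidate moves toward the best solution
    and away from the worst; Jaya has no algorithm-specific parameters."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(n_iter):
        pop.sort(key=f)
        best, worst = pop[0], pop[-1]
        for i, x in enumerate(pop):
            cand = [min(hi, max(lo,
                        xj + rng.random() * (best[j] - abs(xj))
                           - rng.random() * (worst[j] - abs(xj))))
                    for j, xj in enumerate(x)]
            if f(cand) < f(x):                 # greedy acceptance
                pop[i] = cand
    return min(pop, key=f)

best = jaya_minimize(lambda x: sum(v * v for v in x), 4, (-10.0, 10.0))
```

In the portfolio setting, a candidate vector would encode asset weights and the objective would score predicted profit; SHO's encircling behaviour is hybridized into the update in the paper's J-SHO.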

Findings

In the analysis, the profit of the proposed J-SHO when predicting 7 days ahead in the next month was 46.15% better than particle swarm optimization (PSO), 18.75% better than grey wolf optimization (GWO), 35.71% better than the whale optimization algorithm (WOA), 5.56% better than JA and 35.71% better than SHO. It can therefore be concluded that the proposed J-SHO is effective in providing intelligent portfolio asset selection and prediction compared with the conventional methods.

Originality/value

This paper presents a technique for novel portfolio asset prediction using the J-SHO algorithm. It is the first work to use J-SHO-based optimization for portfolio asset prediction with a modified deep learning concept.
