Search results
1 – 10 of 15
Sonalika Mishra, Suchismita Patel, Ramesh Chandra Prusty and Sidhartha Panda
Abstract
Purpose
This paper aims to implement a maiden methodology for load frequency control of an AC multi micro-grid (MG) by using hybrid fractional order fuzzy PID (FOFPID) controller and linear quadratic Gaussian (LQG).
Design/methodology/approach
The multi-MG system considered consists of photovoltaic, wind turbine and synchronous generator units. Different energy storage devices, i.e. a battery energy storage system and a flywheel energy storage system, are also integrated into the system. The renewable energy sources suffer from uncertainty and fluctuation about their nominal values, which results in fluctuation of the system frequency. Motivated by this difficulty in MG control, this paper proposes a hybridized FOFPID and LQG controller under random and stochastic environments. To confirm the viability of the proposed controller, its performance is compared with PID, fuzzy PID and fuzzy PID-LQG controllers. A comparative study among all implemented techniques, i.e. the proposed multi-verse optimization (MVO) algorithm, particle swarm optimization and the genetic algorithm, has been carried out to establish the superiority of the MVO algorithm. To check the robustness of the controller, a sensitivity analysis is performed.
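The fractional-order terms of the FOFPID law (an integrator of order λ and a differentiator of order μ) are typically discretized with the Grünwald–Letnikov expansion. A minimal sketch of that operator follows; the step size, orders and any gains that would multiply these terms are assumptions for illustration, not the authors' implementation:

```python
def gl_fractional_derivative(signal, alpha, h):
    """Grunwald-Letnikov approximation of the order-`alpha` derivative of a
    uniformly sampled signal with step `h`. A negative `alpha` gives the
    fractional integral, so one routine covers both FOFPID terms."""
    n = len(signal)
    # Coefficients: c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j)
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    # D^alpha f(t_k) ~= h^(-alpha) * sum_j c_j * f(t_{k-j})
    return [sum(c[j] * signal[k - j] for j in range(k + 1)) / h ** alpha
            for k in range(n)]
```

With alpha = 1 this collapses to the usual backward difference, and with alpha = 0 to the identity, which is a quick sanity check on the recursion.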
Findings
The merged concept of fractional calculus and state feedback theory is found to be efficient. The designed controller is found to be capable of rejecting the effect of disturbances present in the system.
Originality/value
From the study, the authors observed that the proposed hybrid FOFPID and LQG controller is robust; hence, there is no need to reset the controller parameters after a large change in the network parameters.
Hanuman Reddy N., Amit Lathigara, Rajanikanth Aluvalu and Uma Maheswari V.
Abstract
Purpose
Cloud computing (CC) refers to the use of virtualization technology to share computing resources through the internet. Task scheduling (TS) is used to assign computational resources to requests that have a high volume of pending processing. CC relies on load balancing to ensure that resources such as servers and virtual machines (VMs) running on real servers share the same amount of load. VMs are an important part of virtualization, where physical servers are transformed into VMs that act as physical servers during the process. A user's request or data transmission in a cloud data centre may cause a VM to become under- or overloaded with data.
Design/methodology/approach
With a large number of VMs or jobs, existing methods have a long makespan and become very difficult to manage. A new approach that balances cloud loads without increasing implementation time or resource consumption is therefore needed. In this research, equilibrium optimization is first used to cluster the VMs into underloaded and overloaded VMs. The underloaded VMs are then used to improve load balance and resource utilization in the second stage. A hybrid of the BAT and artificial bee colony (ABC) algorithms performs TS using a multi-objective-based system. The VM manager makes VM migration decisions to provide load balance among physical machines (PMs). When one PM is overburdened and another is underburdened, the decision to migrate VMs is made based on the appropriate conditions, achieving balanced load and reduced energy usage in the PMs. Manta ray foraging (MRF) is used to migrate VMs, and its decisions are based on a variety of factors.
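The first stage, splitting VMs by load before scheduling and migration, can be illustrated with a simple threshold rule. The thresholds and the plain dict of CPU-load shares are stand-ins for the equilibrium-optimization clustering the abstract describes:

```python
def classify_vms(vm_loads, low=0.3, high=0.8):
    """Split VMs into underloaded / balanced / overloaded groups by their
    CPU-load share in [0, 1]. The thresholds are illustrative assumptions;
    the paper derives the grouping with equilibrium optimization instead."""
    under = {vm for vm, load in vm_loads.items() if load < low}
    over = {vm for vm, load in vm_loads.items() if load > high}
    balanced = set(vm_loads) - under - over
    return under, balanced, over
```

A migration policy would then pick tasks or VMs from the `over` set and place them where the `under` set has spare capacity.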
Findings
The proposed approach provides the best possible scheduling for both VMs and PMs. For task completion, the improved whale optimization algorithm for cloud TS takes 42 s, the enhanced multi-verse optimizer 48 s, hybrid electro search with a genetic algorithm 50 s and adaptive benefit factor-based symbiotic organisms search 38 s, while the proposed model takes 30 s, demonstrating its better performance.
Originality/value
A user's request or data transmission in a cloud data centre may cause the VMs to become under- or overloaded with data. To identify the load on each VM, the EQ algorithm is initially used for the clustering process. The hybrid BAT–ABC algorithm is then implemented to evaluate how well the proposed method works when the system is very busy. After the TS process, VM migration occurs at the final stage, where the optimal VM is identified using the MRF algorithm. The experimental analysis is carried out using metrics such as execution time, transmission time, makespan over various iterations, resource utilization and load fairness. The load-fairness metric measures how evenly the system load is distributed and depends on how long each task takes to complete; a cloud system can achieve greater load fairness when tasks finish sooner.
Oluwafemi Ajayi and Reolyn Heymann
Abstract
Purpose
Energy management is critical to data centres (DCs), mainly because they are high energy-consuming facilities and demand for their services continues to rise with the rapidly increasing global demand for cloud services and other technological services. This projected sectoral growth is expected to translate into increased energy demand from a sector already considered a major energy consumer, unless innovative steps are taken to drive effective energy management systems. The purpose of this study is to provide insights into the expected energy demand of the DC and the impact each measured parameter has on the building's energy demand profile. This serves as a basis for the design of an effective energy management system.
Design/methodology/approach
This study proposes a novel tunicate swarm algorithm (TSA) for training an artificial neural network model used to predict the energy demand of a DC. The objective is to find the optimal weights and biases of the model while avoiding challenges commonly faced when using the backpropagation algorithm. The model implementation is based on historical energy consumption data of an anonymous DC operator in Cape Town, South Africa. The data set provided consists of variables such as ambient temperature, ambient relative humidity, chiller output temperature and computer room air conditioning air supply temperature, which serve as inputs to the neural network designed to predict the DC's hourly energy consumption for July 2020. Upon preprocessing of the data set, the total sample number for each represented variable was 464. An 80:20 splitting ratio was used to divide the data set into training and testing sets of 452 and 112 samples, respectively. A weights-based approach has also been used to analyze the relative impact of the model's input parameters on the DC's energy demand pattern.
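The training scheme the abstract describes, a metaheuristic searching over the network's weights and biases instead of running backpropagation, can be sketched as follows. The tiny network, its dimensions, and the random-perturbation update that stands in for the real TSA rule are all illustrative assumptions:

```python
import math
import random

N_IN, N_HIDDEN = 2, 3
DIM = N_HIDDEN * (N_IN + 1) + (N_HIDDEN + 1)  # all weights and biases, flat

def mlp_forward(x, params):
    """Tiny one-output MLP whose weights and biases live in a single flat
    vector, the representation a population-based trainer carries."""
    idx = 0
    hidden = []
    for _ in range(N_HIDDEN):
        s = params[idx + N_IN]                       # neuron bias
        s += sum(params[idx + i] * x[i] for i in range(N_IN))
        hidden.append(1.0 / (1.0 + math.exp(-s)))    # sigmoid activation
        idx += N_IN + 1
    out = params[idx + N_HIDDEN]                     # output bias
    out += sum(params[idx + j] * hidden[j] for j in range(N_HIDDEN))
    return out

def fitness(params, data):
    """Mean squared error over the data set, the quantity the metaheuristic
    minimises in place of a gradient-based update."""
    return sum((mlp_forward(x, params) - y) ** 2 for x, y in data) / len(data)

# A random perturbation stands in for the TSA update rule, purely to show
# the training loop's interface:
random.seed(1)
data = [([0.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]
best = [random.uniform(-1, 1) for _ in range(DIM)]
for _ in range(100):
    cand = [w + random.gauss(0, 0.2) for w in best]
    if fitness(cand, data) < fitness(best, data):
        best = cand
```

Any of the compared trainers (moth flame, whale optimization, ant lion) plugs into the same loop; only the candidate-update rule changes.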
Findings
The performance of the proposed model has been compared with those of neural network models trained using state-of-the-art algorithms such as moth flame optimization, the whale optimization algorithm and the ant lion optimizer. The analysis found that the proposed TSA outperformed the other methods in training the model in terms of mean squared error, root mean squared error, mean absolute error, mean absolute percentage error and prediction accuracy. Analyzing the relative percentage contribution of the model's input parameters based on the weights of the neural network also shows that the ambient temperature of the DC has the highest impact on the building's energy demand pattern.
Research limitations/implications
The proposed novel model can be applied to solving other complex engineering problems such as regression and classification. The methodology for optimizing the multi-layered perceptron neural network can also be further applied to other forms of neural networks for improved performance.
Practical implications
Based on the forecasted energy demand of the DC and an understanding of how the input parameters impact the building's energy demand pattern, neural networks can be deployed to optimize the cooling systems of the DC for reduced energy cost.
Originality/value
The use of TSA for optimizing the weights and biases of a neural network is novel. The application context of this study, DCs, is quite untapped in the literature, leaving many gaps for further research. The proposed prediction model can be further applied to other regression and classification tasks. Another contribution of this study is the analysis of the neural network's input parameters, which provides insight into the degree to which each parameter influences the DC's energy demand profile.
Abstract
Purpose
Metaheuristic algorithms have commonly been used as optimisation tools in various fields. However, optimisation of real-world problems has become increasingly challenging with the increase in system complexity. This situation has become a pull factor for introducing an efficient metaheuristic. This study aims to propose a novel sport-inspired algorithm based on a football playing style called tiki-taka.
Design/methodology/approach
The tiki-taka football style is characterised by short passing, player positioning and maintaining possession. This style aims to dominate the ball possession and defeat opponents using its tactical superiority. The proposed tiki-taka algorithm (TTA) simulates the short passing and player positioning behaviour for optimisation. The algorithm was tested using 19 benchmark functions and five engineering design problems. The performance of the proposed algorithm was compared with 11 other metaheuristics from sport-based, highly cited and recent algorithms.
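The short-passing behaviour, stepping from a player's position toward a nearby better-placed teammate, can be sketched in a few lines. This shows only the core positional move; the published TTA has further operators, and the benchmark function and positions below are illustrative:

```python
import random

def sphere(x):
    """Convex benchmark: f(x) = sum of squares, global minimum at the origin."""
    return sum(v * v for v in x)

def short_pass(player, better_teammate):
    """One tiki-taka-style 'short pass': move each coordinate of a player's
    position part of the way toward a nearby better-placed teammate."""
    return [p + random.random() * (b - p) for p, b in zip(player, better_teammate)]

random.seed(0)
player = [4.0, -3.0]       # current position
teammate = [1.0, 0.5]      # better-placed teammate (lower sphere value)
moved = short_pass(player, teammate)
```

On a convex objective, a step toward a strictly better teammate never worsens the player's value, which is the exploitation effect the short pass provides.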
Findings
The results showed that the TTA is extremely competitive, ranking first or second on 84% of the benchmark problems. The proposed algorithm performs best on two engineering design problems and ranks second on the three remaining problems.
Originality/value
The originality of the proposed algorithm is the short passing strategy that exploits a nearby player to move to a better position.
Amin Farzin, Mehrangiz Ghazi, Amir Farhang Sotoodeh and Mohammad Nikian
Abstract
Purpose
The purpose of this study is to provide a method for designing the shell and tube heat exchangers and examine the total annual cost of heat exchanger networks from the economic view based on the careful design of equipment.
Design/methodology/approach
Accurate evaluation of heat exchanger network performance depends on detailed models of heat exchanger design. The simulation includes nine design variables: the flow direction of each of the two fluids, the number of tubes, the number of tube passes, the length of the tubes, the arrangement of the tubes, the size and percentage of the baffle cut, the tube diameter and the tube pitch. The optimal design of the heat exchangers is based on geometrical and hydraulic modeling and uses a hybrid genetic particle swarm optimization (PSO-GA) technique. In this paper, minimization of the total annual cost of the heat exchanger networks is taken as the objective function.
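The objective the PSO-GA searches, total annual cost, combines annualised investment with operating (pumping) energy cost. A sketch of that shape follows; every coefficient and the two-term structure are assumptions for illustration, not values or formulas from the paper:

```python
def total_annual_cost(capital_cost, pumping_power_kw,
                      hours_per_year=8000.0, electricity_price=0.12,
                      annualisation_factor=0.2):
    """Illustrative total-annual-cost objective for a heat exchanger network:
    annualised investment in exchangers and pumps plus the yearly cost of
    pumping energy. All coefficients here are assumed placeholders."""
    operating_cost = pumping_power_kw * hours_per_year * electricity_price
    return annualisation_factor * capital_cost + operating_cost
```

In the full method, both arguments are themselves functions of the nine geometric design variables, so the optimiser trades off heat-transfer area against pressure drop.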
Findings
In this study, a fast and reliable method is used to simulate and optimize design parameters and to evaluate heat transfer enhancement. The PSO-GA algorithm has been used to minimize the total annual cost, which includes the investment costs of heat exchangers and pumps, operating (pumping) costs and energy costs for utilities. Three case studies of four, six and nine streams are selected to demonstrate the accuracy of the method. Reductions of 0.55%, 23.5% and 14.78% in total annual cost are obtained for these cases, respectively.
Originality/value
In the present study, a reliable method is used to simulate and optimize design parameters and to perform economic optimization of the heat exchanger networks. Given the importance of shell and tube heat exchangers in industrial applications and the complexity of their geometry, the PSO-GA methodology is adopted to obtain an optimal geometric configuration. The total annual cost is chosen as the objective function. Applying the technique to case studies demonstrates its ability to design heat exchangers accurately and to optimize the objective function of the heat exchanger networks by giving the details of the design.
Sandeep Kumar Hegde and Monica R. Mundada
Abstract
Purpose
Chronic diseases are considered one of the most serious concerns and threats to public health across the globe. Diseases such as chronic diabetes mellitus (CDM), cardiovascular disease (CVD) and chronic kidney disease (CKD) are major chronic diseases responsible for millions of deaths. Each of these diseases is considered a risk factor for the other two. Therefore, noteworthy attention is being paid to reducing the risk of these diseases. A gigantic amount of medical data is generated in digital form from smart healthcare appliances in the current era. Although numerous machine learning (ML) algorithms have been proposed for the early prediction of chronic diseases, these models are neither generalized nor adaptive when imposed on new disease datasets. Hence, these algorithms have to process a huge amount of disease data iteratively until the model converges. This limitation may make it difficult for ML models to fit the data and may produce imprecise results. A single algorithm may not yield accurate results. Nonetheless, an ensemble of classifiers built from multiple models, working on a voting principle, has been successfully applied to solve many classification tasks. The purpose of this paper is to make early predictions of chronic diseases using a hybrid generative regression-based deep intelligence network (HGRDIN) model.
Design/methodology/approach
In the proposed paper, a generative regression (GR) model is used in combination with a deep neural network (DNN) for the early prediction of chronic disease. The GR model obtains prior knowledge about the labelled data by analyzing the correlation between features and class labels. Hence, the weight assignment process of the DNN is influenced by the relationships between attributes rather than by random assignment. The knowledge obtained through these processes is passed as input to the DNN for further prediction. Since the inference about the input data instances is drawn at the DNN through the GR model, the model is named the hybrid generative regression-based deep intelligence network (HGRDIN).
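The GR step, using feature-label correlation to bias the DNN's initial weights away from a purely random assignment, might be sketched as follows. The Pearson-correlation seeding is an illustration of the idea only, not the authors' generative model:

```python
from statistics import fmean

def corr_informed_weights(X, y):
    """Seed first-layer weights from each feature's Pearson correlation with
    the label instead of drawing them purely at random. A sketch of the
    'prior knowledge from feature/label correlation' idea."""
    def pearson(col, target):
        mc, mt = fmean(col), fmean(target)
        num = sum((c - mc) * (t - mt) for c, t in zip(col, target))
        den = (sum((c - mc) ** 2 for c in col)
               * sum((t - mt) ** 2 for t in target)) ** 0.5
        return num / den if den else 0.0   # constant feature -> zero weight
    return [pearson([row[j] for row in X], y) for j in range(len(X[0]))]
```

Features strongly correlated with the label start with large-magnitude weights, so the subsequent DNN training begins from an informed rather than random point.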
Findings
The credibility of the implemented approach is rigorously validated using parameters such as accuracy, precision, recall, F-score and area under the curve (AUC) score. During the training phase, the proposed algorithm is constantly regularized using the elastic net regularization technique and is also hyper-tuned using parameters such as momentum and learning rate to minimize the misprediction rate. The experimental results illustrate that the proposed approach predicted chronic disease with minimal error by avoiding possible overfitting and local minima problems. The results obtained with the proposed approach are also compared with various traditional approaches.
Research limitations/implications
Diagnostic data are usually multi-dimensional in nature, and the performance of an ML algorithm can degrade due to overfitting and the curse of dimensionality. The results obtained through the experiment achieved an average accuracy of 95%. Hence, further analysis can be made to improve predictive accuracy by overcoming the curse-of-dimensionality issues.
Practical implications
The proposed ML model can mimic the behavior of a doctor's brain, and such algorithms have the capability to take over clinical tasks. Accurate results obtained through these innovative algorithms can free physicians from mundane care and practices so that they can focus more on complex issues.
Social implications
Utilizing the proposed predictive model at the decision-making level for the early prediction of disease is considered a promising change for the healthcare sector. The global burden of chronic disease can be reduced to an exceptional degree through these approaches.
Originality/value
In the proposed HGRDIN model, a transfer-learning approach is used: the knowledge acquired through the GR process is applied to the DNN, which identifies possible relationships between the dependent and independent feature variables by mapping the chronic-disease data instances to their corresponding target classes before they are passed as input to the DNN. Hence, the experiments illustrate that the proposed approach obtained superior performance in terms of the various validation parameters compared with existing conventional techniques.
Sajad Ahmad Rather and P. Shanthi Bala
Abstract
Purpose
In this paper, a newly proposed hybrid algorithm, namely the constriction coefficient-based particle swarm optimization and gravitational search algorithm (CPSOGSA), has been employed for training a multi-layer perceptron (MLP) to overcome its sensitivity to initialization, premature convergence and stagnation in local optima.
Design/methodology/approach
In this study, exploration of the search space is carried out by the gravitational search algorithm (GSA), and optimization of candidate solutions, i.e. exploitation, is performed by particle swarm optimization (PSO). For training the multi-layer perceptron (MLP), CPSOGSA uses a sigmoid fitness function to find the proper combination of connection weights and neural biases that minimizes the error. Secondly, a matrix encoding strategy is utilized to provide a one-to-one correspondence between the weights and biases of the MLP and the agents of CPSOGSA.
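The matrix-encoding step, giving each CPSOGSA agent a one-to-one mapping onto the MLP's weights and biases, can be sketched as a decode routine. The row-major layout here is an assumption; any fixed ordering works as long as encoding and decoding agree:

```python
def decode_agent(agent, n_in, n_hidden, n_out):
    """Map a flat search agent one-to-one onto MLP parameters: hidden-layer
    weight matrix and biases, then output-layer weight matrix and biases."""
    i = 0
    w1 = [agent[i + r * n_in: i + (r + 1) * n_in] for r in range(n_hidden)]
    i += n_hidden * n_in
    b1 = agent[i: i + n_hidden]
    i += n_hidden
    w2 = [agent[i + r * n_hidden: i + (r + 1) * n_hidden] for r in range(n_out)]
    i += n_out * n_hidden
    b2 = agent[i: i + n_out]
    i += n_out
    assert i == len(agent), "agent length must equal the MLP parameter count"
    return w1, b1, w2, b2
```

Because every agent position is a complete parameter set, evaluating an agent's fitness is just a forward pass with the decoded weights followed by the error measure.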
Findings
The experimental findings convey that CPSOGSA is a better MLP trainer as compared to other stochastic algorithms because it provides superior results in terms of resolving stagnation in local optima and convergence speed problems. Besides, it gives the best results for breast cancer, heart, sine function and sigmoid function datasets as compared to other participating algorithms. Moreover, CPSOGSA also provides very competitive results for other datasets.
Originality/value
The CPSOGSA performed effectively in overcoming the stagnation-in-local-optima problem and increasing the overall convergence speed of the MLP. Basically, CPSOGSA is a hybrid optimization algorithm with powerful global exploration capability and high local exploitation power. In the research literature, little work is available in which CPSO and GSA have been utilized for training MLPs. The only closely related paper, by Mirjalili et al. in 2012, used standard PSO and GSA for training simple FNNs; however, that work employed only three datasets and used only the MSE performance metric for evaluating the efficiency of the algorithms. In this paper, eight different standard datasets and five performance metrics have been utilized to investigate the efficiency of CPSOGSA in training MLPs. In addition, a non-parametric pair-wise statistical test, namely the Wilcoxon rank-sum test, has been carried out at a 5% significance level to statistically validate the simulation results. Besides, eight state-of-the-art meta-heuristic algorithms were employed for comparative analysis of the experimental results to further strengthen the authenticity of the experimental setup.
Ranjitha K., Sivakumar P. and Monica M.
Abstract
Purpose
This study aims to implement an improved version of the Chimp algorithm (IChimp) for load frequency control (LFC) of power system.
Design/methodology/approach
This work adopted IChimp to optimize the proportional integral derivative (PID) controller parameters used for the LFC of a two-area interconnected thermal system.
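The controller structure being tuned is a standard discrete PID law; one update step can be sketched as below. The gains are whatever the optimiser proposes, and the discrete form chosen here is an assumption rather than the paper's exact realisation:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of the discrete PID law
    u = Kp*e + Ki*integral(e) + Kd*de/dt, with the integral accumulated and
    the derivative taken by backward difference. Returns (u, new_state)."""
    integral, prev_error = state
    integral += error * dt                     # accumulate the integral term
    derivative = (error - prev_error) / dt     # backward-difference derivative
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)
```

A cost such as ITAE computed from a closed-loop frequency-deviation simulation with these gains would then serve as the fitness that IChimp (or any compared optimiser) minimises.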
Findings
The supremacy of the proposed IChimp-tuned PID controller over Chimp optimization, a direct synthesis-based PID controller, an internal model control-tuned PID controller and a recent algorithm-based PID controller was demonstrated.
Originality/value
IChimp has good convergence and better search ability. The IChimp-optimized PID controller is the proposed control method, which ensured better performance in terms of convergence behaviour, optimized controller gains and steady-state response.
Mohd Fadzil Faisae Ab. Rashid and Ariff Nijay Ramli
Abstract
Purpose
This study aims to propose a new multiobjective optimization metaheuristic based on the tiki-taka algorithm (TTA). The proposed multiobjective TTA (MOTTA) was implemented for a simple assembly line balancing type E (SALB-E), which aimed to minimize the cycle time and workstation number simultaneously.
Design/methodology/approach
TTA is a new metaheuristic inspired by the tiki-taka playing style in a football match. TTA was previously designed for single-objective optimization, but this study extends it to multiobjective optimization. MOTTA mimics the short passing and player movement of tiki-taka to control the game. The algorithm also utilizes unsuccessful ball passes and multiple key players to enhance exploration. MOTTA was tested against the popular CEC09 benchmark functions.
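For an SALB-E objective pair such as (cycle time, workstation number), Pareto optimality rests on the dominance test below; maintaining an archive of non-dominated solutions uses exactly this kind of comparison (the implementation details are assumptions, not taken from the paper):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` under minimisation:
    no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

A candidate enters the Pareto archive only if no archived solution dominates it, and any archived solutions it dominates are removed.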
Findings
The computational experiments indicated that MOTTA had better results in 82% of the cases from the CEC09 benchmark functions. In addition, MOTTA successfully found 83.3% of the Pareto optimal solution in the SALB-E optimization and showed tremendous performance in the spread and distribution indicators, which were associated with the multiple key players in the algorithm.
Originality/value
MOTTA exploits information from all players to move to a new position, so every solution candidate contributes to the algorithm's convergence.
Yongquan Zhou, Ying Ling and Qifang Luo
Abstract
Purpose
This paper aims to present an improved whale optimization algorithm (WOA) based on a Lévy flight trajectory, called the LWOA algorithm, to solve engineering optimization problems. The LWOA makes the WOA faster and more robust and significantly enhances it: the Lévy flight trajectory improves the capability of jumping out of local optima and helps smoothly balance the exploration and exploitation of the WOA.
Design/methodology/approach
In this paper, an improved WOA based on a Lévy flight trajectory, called the LWOA algorithm, is presented to solve engineering optimization problems.
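A Lévy flight trajectory is typically generated with Mantegna's algorithm, which produces heavy-tailed step lengths: mostly small steps with occasional long jumps that help escape local optima. A sketch follows; the stability index beta = 1.5 is a common choice assumed here, not necessarily the paper's setting:

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length: draw
    u ~ N(0, sigma^2) and v ~ N(0, 1), then return u / |v|^(1/beta).
    The sigma below makes the ratio follow a Levy-stable law of index beta."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In an LWOA-style update, this step length would scale the position change of a whale, replacing the purely local moves of the basic WOA.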
Findings
The LWOA has been successfully applied to five standard engineering optimization problems. The simulation results on the classical engineering design problems and a real application exhibit the superiority of the LWOA in solving challenging problems with constrained and unknown search spaces compared with the basic WOA and other available solutions.
Originality/value
An improved WOA based on a Lévy flight trajectory, called the LWOA algorithm, is proposed for the first time.