Search results
1 – 10 of 16
Sonalika Mishra, Suchismita Patel, Ramesh Chandra Prusty and Sidhartha Panda
Abstract
Purpose
This paper aims to implement a maiden methodology for load frequency control of an AC multi micro-grid (MG) by using hybrid fractional order fuzzy PID (FOFPID) controller and linear quadratic Gaussian (LQG).
Design/methodology/approach
The multi MG system considered consists of a photovoltaic unit, a wind turbine and a synchronous generator. Different energy storage devices, i.e. a battery energy storage system and a flywheel energy storage system, are also integrated into the system. The renewable energy sources suffer from uncertainty and fluctuation from their nominal values, which results in fluctuation of the system frequency. Motivated by this difficulty in MG control, this paper proposes a hybridized FOFPID and LQG controller under random and stochastic environments. To confirm the viability of the proposed controller, its performance is compared with that of PID, fuzzy PID and fuzzy PID-LQG controllers. A comparative study among all implemented techniques, i.e. the proposed multi-verse optimization (MVO) algorithm, particle swarm optimization and genetic algorithm, has been done to justify the supremacy of the MVO algorithm. To check the robustness of the controller, a sensitivity analysis is done.
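The abstract does not give implementation details of the FOFPID term. As a rough illustration only, a discrete fractional-order PID output can be sketched with the Grünwald–Letnikov approximation; the gains, fractional orders and sampling step below are illustrative assumptions, not values from the paper:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via recurrence."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def fopid_output(errors, kp, ki, kd, lam, mu, dt):
    """Fractional-order PID: u = kp*e + ki*I^lam(e) + kd*D^mu(e), where the
    fractional integral/derivative act on the sampled error history
    `errors` (oldest first, newest last)."""
    n = len(errors)
    e = errors[-1]
    wi = gl_weights(-lam, n)   # derivative of order -lam = integral of order lam
    wd = gl_weights(mu, n)
    integ = dt ** lam * sum(w * errors[-1 - k] for k, w in enumerate(wi))
    deriv = dt ** -mu * sum(w * errors[-1 - k] for k, w in enumerate(wd))
    return kp * e + ki * integ + kd * deriv
```

With lam = mu = 1 the expression collapses to the ordinary PID integral and derivative, which is a convenient sanity check.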
Findings
The merged concept of fractional calculus and state feedback theory is found to be efficient. The designed controller is found to be capable of rejecting the effect of disturbances present in the system.
Originality/value
From the study, the authors observed that the proposed hybrid FOFPID and LQG controller is robust; hence, there is no need to reset the controller parameters after a large change in network parameters.
Rafi Vempalle and Dhal Pradyumna Kumar
Abstract
Purpose
The demand for electricity supply increases day by day due to the rapid growth in the number of industries and consumer devices. The electric power supply needs to be improved by properly arranging distributed generators (DGs). The purpose of this paper is to develop a methodology for optimum placement of DGs using novel algorithms that leads to loss minimization.
Design/methodology/approach
In this paper, a novel hybrid optimization is proposed to minimize the losses and improve the voltage profile. The hybridization is done through the crow search (CS) algorithm and the black widow (BW) algorithm. Unlike in usual hybrid optimization techniques, the CS algorithm is used to find some of the tie-line switches and the DG locations, while the BW algorithm finds the remaining tie-line switches and the DG sizes.
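The abstract only names the variable split between the two optimizers. A minimal sketch of that idea, with simple Gaussian-perturbation searchers standing in for the actual CS and BW update rules, might look like:

```python
import random

def hybrid_optimize(objective, dim_a, dim_b, iters=200, seed=1):
    """Sketch of a split hybrid: optimizer A (stand-in for crow search) owns
    the first dim_a variables, optimizer B (stand-in for black widow) owns the
    remaining dim_b, and both share one objective function."""
    rng = random.Random(seed)
    xa = [rng.uniform(-1, 1) for _ in range(dim_a)]
    xb = [rng.uniform(-1, 1) for _ in range(dim_b)]
    best = objective(xa + xb)
    for _ in range(iters):
        # A proposes a move on its own block only
        cand_a = [v + rng.gauss(0, 0.1) for v in xa]
        if objective(cand_a + xb) < best:
            xa, best = cand_a, objective(cand_a + xb)
        # B proposes a move on its block, seeing A's current choice
        cand_b = [v + rng.gauss(0, 0.1) for v in xb]
        if objective(xa + cand_b) < best:
            xb, best = cand_b, objective(xa + cand_b)
    return xa + xb, best

solution, loss = hybrid_optimize(lambda v: sum(t * t for t in v), 2, 3)
```

Because both blocks are scored on the same shared objective, each optimizer's moves are evaluated in the context of the other's current choice, which is the point of the paper's parameter sharing.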
Findings
The proposed technique is tested on two large-scale radial distribution networks (RDNs), the 119-bus radial distribution system (RDS) and the 135-bus RDS, and compared with normal hybrid algorithms.
Originality/value
The main novelty of this hybridization is that it shares the parameters of the objective function. The losses of the RDN can be minimized by reconfiguration and incorporating compensating devices like DGs.
Amir Hossein Hosseinian and Vahid Baradaran
Abstract
Purpose
The purpose of this research is to study the Multi-Skill Resource-Constrained Multi-Project Scheduling Problem (MSRCMPSP), where (1) durations of activities depend on the familiarity levels of assigned workers, (2) more efficient workers demand higher per-day salaries, (3) projects have different due dates and (4) the budget of each period varies over time. The proposed model is bi-objective, and its objectives are minimization of completion times and costs of all projects, simultaneously.
Design/methodology/approach
This paper proposes a two-phase approach based on Statistical Process Control (SPC) to solve this problem. The approach develops a control chart to monitor the performance of an optimizer during the optimization process. In the first phase, a multi-objective statistical model is used to obtain the control limits of this chart. To solve this model, a Multi-Objective Greedy Randomized Adaptive Search Procedure (MOGRASP) is employed. In the second phase, the MSRCMPSP is solved via a New Version of the Multi-Objective Variable Neighborhood Search Algorithm (NV-MOVNS). In each iteration, the developed control chart monitors the performance of the NV-MOVNS to obtain proper solutions. When the control chart warns about an out-of-control state, a procedure based on Conway’s Game of Life, a cellular automaton, is used to bring the algorithm back to the in-control state.
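As a rough sketch of the SPC idea (the paper's actual chart statistics are not given in the abstract), control limits can be estimated from baseline objective values in phase 1 and then used to flag out-of-control iterations in phase 2; the three-sigma rule and the sample values below are illustrative assumptions:

```python
import statistics

def control_limits(phase1_values, k=3.0):
    """Phase 1: derive chart limits from baseline objective values."""
    mu = statistics.mean(phase1_values)
    sigma = statistics.pstdev(phase1_values)
    return mu - k * sigma, mu + k * sigma

def out_of_control(value, limits):
    """Phase 2: warn when the optimizer's current objective leaves the limits."""
    lo, hi = limits
    return value < lo or value > hi

limits = control_limits([10.0, 10.5, 9.5, 10.2, 9.8])
```

An out-of-control warning would then trigger the corrective step (in the paper, the Game-of-Life-based procedure) before the search continues.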
Findings
The proposed two-phase approach has been used in solving several standard test problems available in the literature. The results are compared with the outputs of some other methods to assess the efficiency of this approach. Comparisons imply the high efficiency of the proposed approach in solving test problems with different sizes.
Practical implications
The proposed model and approach have been used to schedule multiple projects of a construction company in Iran. The outputs show that both the model and the NV-MOVNS can be used in real-world multi-project scheduling problems.
Originality/value
Based on the numerous studies reviewed in this research, the authors discovered that there is little research on the multi-skill resource-constrained multi-project scheduling problem (MSRCMPSP) with the aforementioned characteristics. Moreover, none of the previous studies proposed an SPC-based solution approach for meta-heuristics to solve the MSRCMPSP.
Guanxiong Wang, Xiaojian Hu and Ting Wang
Abstract
Purpose
By introducing the mass customization service mode into the cloud logistics environment, this paper studies the joint optimization of service provider selection and customer order decoupling point (CODP) positioning based on the mass customization service mode, to provide customers with more diversified and personalized service content at a lower total logistics service cost.
Design/methodology/approach
This paper addresses the general process of service composition optimization based on the mass customization mode in a cloud logistics service environment and constructs a joint decision model for service provider selection and CODP positioning. In the model, the two objective functions of minimum service cost and most satisfactory delivery time are considered, and the Pareto optimal solution of the model is obtained via the NSGA-II algorithm. Then, a numerical case is used to verify the superiority of the service composition scheme based on the mass customization mode over the general scheme and to verify the significant impact of the scale effect coefficient on the optimal CODP location.
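The bi-objective setup (minimum service cost vs. most satisfactory delivery time) rests on Pareto dominance, which NSGA-II uses to rank solutions. A minimal illustration with made-up objective pairs (cost, time penalty), both minimised:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points, as NSGA-II's first front does."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (cost, delivery-time penalty) pairs for candidate schemes
front = pareto_front([(1, 9), (2, 7), (3, 8), (4, 4), (5, 5)])
```

The decision maker then picks one scheme from this front according to the preferred cost/time trade-off.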
Findings
(1) Under the cloud logistics mode, the implementation of the logistics service mode based on mass customization can not only reduce the total cost of logistics services by means of the scale effect of massive orders on the cloud platform but also make more efficient use of a large number of logistics service providers gathered on the cloud platform to provide customers with more customized and diversified service content. (2) The scale effect coefficient directly affects the total cost of logistics services and significantly affects the location of the CODP. Therefore, before implementing the mass customization logistics service mode, the most reasonable clustering of orders on the cloud logistics platform is very important for the follow-up service combination.
Originality/value
The originality of this paper includes two aspects. The first is to introduce the mass customization mode in the cloud logistics service environment for the first time and to summarize the operation process of implementing the mass customization mode in the cloud logistics environment. The second is that, in order to solve the joint decision optimization model of provider selection and CODP positioning, this paper designs a method for solving a mixed-integer nonlinear programming model using a multi-layer coding genetic algorithm.
Hanuman Reddy N., Amit Lathigara, Rajanikanth Aluvalu and Uma Maheswari V.
Abstract
Purpose
Cloud computing (CC) refers to the use of virtualization technology to share computing resources through the internet. Task scheduling (TS) is used to assign computational resources to requests that have a high volume of pending processing. CC relies on load balancing to ensure that resources like servers and virtual machines (VMs) running on real servers share the same amount of load. VMs are an important part of virtualization, where physical servers are transformed into VMs and act as physical servers during the process. It is possible that a user’s request or data transmission in a cloud data centre may cause a VM to be under- or overloaded with data.
Design/methodology/approach
With a large number of VMs or jobs, this method has a long makespan and is very difficult. A new approach to balancing cloud loads without increasing implementation time or resource consumption is therefore needed. In this research, equilibrium optimization (EO) is first used to cluster the VMs into underloaded and overloaded sets. In the second stage, the underloaded VMs are used to improve load balance and resource utilization. A hybrid of the BAT and artificial bee colony (ABC) algorithms helps with TS using a multi-objective-based system. The VM manager makes VM migration decisions to provide load balance among physical machines (PMs). When one PM is overburdened and another is underburdened, the decision to migrate VMs is made based on the appropriate conditions, achieving balanced load and reduced energy usage in the PMs. Manta ray foraging (MRF) is used to migrate VMs, and its decisions are based on a variety of factors.
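The clustering step can be illustrated with a simple utilisation threshold; the actual equilibrium-optimisation clustering criterion is not specified in the abstract, and the VM names and the 0.75 threshold are invented for illustration:

```python
def cluster_vms(loads, threshold=0.75):
    """Partition VMs into overloaded and underloaded sets by utilisation
    (a fixed-threshold stand-in for the paper's EO-based clustering)."""
    overloaded = {vm for vm, u in loads.items() if u > threshold}
    underloaded = set(loads) - overloaded
    return overloaded, underloaded

over, under = cluster_vms({"vm1": 0.92, "vm2": 0.31, "vm3": 0.80, "vm4": 0.55})
```

Downstream, the overloaded set supplies migration candidates while the underloaded set supplies targets, which is the shape of the VM manager's decision described above.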
Findings
The proposed approach provides the best possible scheduling for both VMs and PMs. For task completion time, the improved whale optimization algorithm for cloud TS takes 42 s, the enhanced multi-verse optimizer 48 s, hybrid electro search with a genetic algorithm 50 s and adaptive benefit factor-based symbiotic organisms search 38 s, while the proposed model takes 30 s, which shows the better performance of the proposed model.
Originality/value
A user’s request or data transmission in a cloud data centre may cause the VMs to be under- or overloaded with data. To identify the load on each VM, the EO algorithm is initially used for the clustering process. The hybrid BAT–ABC algorithm is then implemented to evaluate how well the proposed method works when the system is heavily loaded. After the TS process, VM migration occurs at the final stage, where the optimal VM is identified using the MRF algorithm. The experimental analysis is carried out using various metrics such as execution time, transmission time, makespan for various iterations, resource utilization and load fairness. Load fairness is computed from the system load and depends on how long each task takes to complete; a cloud system may achieve more load fairness if tasks take less time to finish.
Abstract
Purpose
Metaheuristic algorithms have been commonly used as an optimisation tool in various fields. However, optimisation of real-world problems has become increasingly challenging with the increase in system complexity. This situation has become a pull factor for introducing an efficient metaheuristic. This study aims to propose a novel sport-inspired algorithm based on a football playing style called tiki-taka.
Design/methodology/approach
The tiki-taka football style is characterised by short passing, player positioning and maintaining possession. This style aims to dominate ball possession and defeat opponents through tactical superiority. The proposed tiki-taka algorithm (TTA) simulates the short passing and player positioning behaviour for optimisation. The algorithm was tested on 19 benchmark functions and five engineering design problems. The performance of the proposed algorithm was compared with that of 11 other metaheuristics drawn from sport-based, highly cited and recent algorithms.
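A hedged sketch of the short-passing idea, in which a candidate solution moves toward a better-positioned teammate with a small random repositioning; the step size, noise level, team positions and objective below are illustrative assumptions, not the paper's actual update rule:

```python
import random

def short_pass(player, receiver, rng, step=0.5):
    """Tiki-taka-style move: a player (candidate solution) shifts toward a
    better-positioned nearby player, plus a small random repositioning."""
    return [p + step * (r - p) + rng.gauss(0, 0.01) for p, r in zip(player, receiver)]

rng = random.Random(0)
team = [[4.0, 4.0], [1.0, 1.0], [3.0, -2.0]]
fitness = lambda x: sum(v * v for v in x)     # illustrative objective (minimise)
best = min(team, key=fitness)
moved = short_pass(team[0], best, rng)
```

Pulling each player part-way toward a better-placed teammate is the exploitation mechanism the originality statement below highlights.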
Findings
The results showed that the TTA is extremely competitive, ranking first or second on 84% of the benchmark problems. The proposed algorithm performs best on two engineering design problems and ranks second on the three remaining problems.
Originality/value
The originality of the proposed algorithm is the short passing strategy that exploits a nearby player to move to a better position.
Sajad Ahmad Rather and P. Shanthi Bala
Abstract
Purpose
In this paper, a newly proposed hybrid algorithm, namely the constriction coefficient-based particle swarm optimization and gravitational search algorithm (CPSOGSA), has been employed to train the MLP, with the aim of overcoming the sensitivity to initialization, premature convergence and stagnation in local optima from which MLP training suffers.
Design/methodology/approach
In this study, the exploration of the search space is carried out by the gravitational search algorithm (GSA), and optimization of candidate solutions, i.e. exploitation, is performed by particle swarm optimization (PSO). For training the multi-layer perceptron (MLP), CPSOGSA uses a sigmoid fitness function to find the proper combination of connection weights and neural biases that minimizes the error. Secondly, a matrix encoding strategy is utilized to provide a one-to-one correspondence between the weights and biases of the MLP and the agents of CPSOGSA.
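The matrix encoding step can be illustrated by unpacking a flat agent vector into the weights and biases of a small 2-3-1 MLP; the layout below is one plausible convention, not necessarily the paper's:

```python
def decode(agent, n_in, n_hidden, n_out):
    """Unpack one flat agent vector into MLP weights and biases, giving the
    one-to-one correspondence between optimiser agents and network parameters."""
    i = 0
    w1 = [agent[i + r * n_in:i + (r + 1) * n_in] for r in range(n_hidden)]
    i += n_hidden * n_in
    b1 = agent[i:i + n_hidden]
    i += n_hidden
    w2 = [agent[i + r * n_hidden:i + (r + 1) * n_hidden] for r in range(n_out)]
    i += n_out * n_hidden
    b2 = agent[i:i + n_out]
    return w1, b1, w2, b2

dim = 2 * 3 + 3 + 3 * 1 + 1          # 13 parameters for a 2-3-1 MLP
w1, b1, w2, b2 = decode(list(range(13)), 2, 3, 1)
```

Under this scheme each agent's position in the 13-dimensional search space is exactly one candidate network, so the trainer can score agents by the network's prediction error.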
Findings
The experimental findings convey that CPSOGSA is a better MLP trainer as compared to other stochastic algorithms because it provides superior results in terms of resolving stagnation in local optima and convergence speed problems. Besides, it gives the best results for breast cancer, heart, sine function and sigmoid function datasets as compared to other participating algorithms. Moreover, CPSOGSA also provides very competitive results for other datasets.
Originality/value
The CPSOGSA performed effectively in overcoming the stagnation-in-local-optima problem and increasing the overall convergence speed of the MLP. Basically, CPSOGSA is a hybrid optimization algorithm with powerful global exploration capability and high local exploitation power. In the research literature, little work is available in which CPSO and GSA have been utilized for training MLPs. The only related research paper was given by Mirjalili et al. in 2012, who used standard PSO and GSA for training simple FNNs. However, that work employed only three datasets and used only the MSE performance metric for evaluating the efficiency of the algorithms. In this paper, eight different standard datasets and five performance metrics have been utilized to investigate the efficiency of CPSOGSA in training MLPs. In addition, a non-parametric pair-wise statistical test, namely the Wilcoxon rank-sum test, has been carried out at a 5% significance level to statistically validate the simulation results. Besides, eight state-of-the-art meta-heuristic algorithms were employed for comparative analysis of the experimental results to further strengthen the authenticity of the experimental setup.
Amin Farzin, Mehrangiz Ghazi, Amir Farhang Sotoodeh and Mohammad Nikian
Abstract
Purpose
The purpose of this study is to provide a method for designing the shell and tube heat exchangers and examine the total annual cost of heat exchanger networks from the economic view based on the careful design of equipment.
Design/methodology/approach
Accurate evaluation of heat exchanger network performance depends on detailed models of heat exchanger design. The simulation includes nine design variables: flow direction of each of the two fluids, number of tubes, number of tube passes, length of tubes, arrangement of tubes, size and percentage of baffle cut, tube diameter and tube pitch. The optimal design of the heat exchangers is based on geometrical and hydraulic modeling and uses a hybrid genetic-particle swarm optimization (PSO-GA) technique. In this paper, minimization of the total annual cost of the heat exchanger networks is considered as the objective function.
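The structure of the objective, annualised investment plus operating (pumping) and utility energy costs, can be sketched as follows; all figures, the cost breakdown and the annualisation factor are invented for illustration and are not the paper's cost model:

```python
def total_annual_cost(capital_costs, pumping_power_kw, hours, elec_price,
                      annualisation_factor):
    """Illustrative objective: annualised investment in exchangers and pumps
    plus yearly electricity cost of pumping (price in currency per kWh)."""
    investment = annualisation_factor * sum(capital_costs)
    operating = sum(pumping_power_kw) * hours * elec_price
    return investment + operating

# two exchangers, two pumps, 8000 operating hours per year
tac = total_annual_cost([12000.0, 8000.0], [3.0, 1.5], 8000, 0.07, 0.2)
```

The optimizer's nine geometric variables enter such a function indirectly, through the capital cost of the resulting exchanger and the pumping power implied by its pressure drop.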
Findings
In this study, a fast and reliable method is used to simulate, optimize design parameters and evaluate heat transfer enhancement. PSO-GA algorithms have been used to minimize the total annual cost, which includes investment costs of heat exchangers and pumps, operating costs (pumping) and energy costs for utilities. Three case studies of four, six and nine streams are selected to demonstrate the accuracy of the method. Reductions of 0.55%, 23.5% and 14.78% are obtained in total annual cost for the selected streams, respectively.
Originality/value
In the present study, a reliable method is used to simulate and optimize the design parameters and perform economic optimization of the heat exchanger networks. Taking into account the importance of shell and tube heat exchangers in industrial applications and the complexity of their geometry, the PSO-GA methodology is adopted to obtain an optimal geometric configuration. The total annual cost is chosen as the objective function. Applying this technique to the case studies demonstrates its ability to accurately design heat exchangers and optimize the objective function of the heat exchanger networks by giving the details of the design.
Oluwafemi Ajayi and Reolyn Heymann
Abstract
Purpose
Energy management is critical to data centres (DCs), mainly because they are high energy-consuming facilities and demand for their services continues to rise due to rapidly increasing global demand for cloud services and other technological services. This projected sectoral growth is expected to translate into increased energy demand from the sector, which is already considered a major energy consumer, unless innovative steps are taken to drive effective energy management systems. The purpose of this study is to provide insights into the expected energy demand of the DC and the impact each measured parameter has on the building's energy demand profile. This serves as a basis for the design of an effective energy management system.
Design/methodology/approach
This study proposes a novel tunicate swarm algorithm (TSA) for training an artificial neural network model used to predict the energy demand of a DC. The objective is to find the optimal weights and biases of the model while avoiding challenges commonly faced when using the backpropagation algorithm. The model implementation is based on historical energy consumption data of an anonymous DC operator in Cape Town, South Africa. The data set provided consists of variables such as ambient temperature, ambient relative humidity, chiller output temperature and computer room air conditioning air supply temperature, which serve as inputs to the neural network designed to predict the DC’s hourly energy consumption for July 2020. Upon preprocessing of the data set, the total sample number for each represented variable was 464. An 80:20 splitting ratio was used to divide the data set into training and testing sets, making 452 samples for the training set and 112 samples for the testing set. A weights-based approach has also been used to analyze the relative impact of the model’s input parameters on the DC’s energy demand pattern.
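The weights-based impact analysis can be illustrated with a simplified Garson-style measure over the input-to-hidden weight matrix; the abstract does not state the exact formula used, and the weights below are hypothetical:

```python
def relative_importance(w_hidden):
    """Weights-based input importance: each input's share of the absolute
    input-to-hidden weight mass (a simplified Garson-style measure)."""
    n_in = len(w_hidden[0])
    totals = [sum(abs(row[j]) for row in w_hidden) for j in range(n_in)]
    s = sum(totals)
    return [t / s for t in totals]

# hypothetical 2-hidden-neuron layer over 3 inputs
imp = relative_importance([[0.8, -0.1, 0.1], [0.6, 0.2, -0.2]])
```

Inputs that carry a larger share of the weight mass are read as having more influence on the prediction, which is how the study attributes the DC's energy demand pattern to ambient temperature.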
Findings
The performance of the proposed model has been compared with those of neural network models trained using state-of-the-art algorithms such as moth flame optimization, the whale optimization algorithm and the ant lion optimizer. From the analysis, it was found that the proposed TSA outperformed the other methods in training the model based on mean squared error, root mean squared error, mean absolute error, mean absolute percentage error and prediction accuracy. Analyzing the relative percentage contribution of the model's input parameters based on the weights of the neural network also shows that the ambient temperature of the DC has the highest impact on the building’s energy demand pattern.
Research limitations/implications
The proposed novel model can be applied to solving other complex engineering problems such as regression and classification. The methodology for optimizing the multi-layered perceptron neural network can also be further applied to other forms of neural networks for improved performance.
Practical implications
Based on the forecasted energy demand of the DC and an understanding of how the input parameters impact the building's energy demand pattern, neural networks can be deployed to optimize the cooling systems of the DC for reduced energy cost.
Originality/value
The use of TSA for optimizing the weights and biases of a neural network is novel. The application context of this study, DCs, is quite untapped in the literature, leaving many gaps for further research. The proposed prediction model can be further applied to other regression and classification tasks. Another contribution of this study is the analysis of the neural network's input parameters, which provides insight into the degree to which each parameter influences the DC’s energy demand profile.
Sandeep Kumar Hegde and Monica R. Mundada
Abstract
Purpose
Chronic diseases are considered one of the most serious concerns and threats to public health across the globe. Diseases such as chronic diabetes mellitus (CDM), cardiovascular disease (CVD) and chronic kidney disease (CKD) are major chronic diseases responsible for millions of deaths. Each of these diseases is considered a risk factor for the other two. Therefore, noteworthy attention is being paid to reducing the risk of these diseases. A gigantic amount of medical data is generated in digital form from smart healthcare appliances in the current era. Although numerous machine learning (ML) algorithms have been proposed for the early prediction of chronic diseases, these algorithmic models are neither generalized nor adaptive when imposed on new disease datasets. Hence, these algorithms have to process a huge amount of disease data iteratively until the model converges. This limitation may make it difficult for ML models to fit and may produce imprecise results. A single algorithm may not yield accurate results. Nonetheless, an ensemble of classifiers built from multiple models that works on a voting principle has been successfully applied to solve many classification tasks. The purpose of this paper is the early prediction of chronic diseases using the hybrid generative regression-based deep intelligence network (HGRDIN) model.
Design/methodology/approach
In the proposed approach, a generative regression (GR) model is used in combination with a deep neural network (DNN) for the early prediction of chronic disease. The GR model obtains prior knowledge about the labelled data by analyzing the correlation between features and class labels. Hence, the weight assignment process of the DNN is influenced by the relationships between attributes rather than random assignment. The knowledge obtained through these processes is passed as input to the DNN for further prediction. Since the inference about the input data instances is drawn at the DNN through the GR model, the model is named the hybrid generative regression-based deep intelligence network (HGRDIN).
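The correlation-informed weight assignment can be illustrated by initialising first-layer weights from each feature's Pearson correlation with the label instead of at random. This is a sketch of the general idea only; the paper's actual GR model is not specified in the abstract, and the toy data are invented:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corr_init_weights(features, labels):
    """One initial first-layer weight per feature column, taken from its
    correlation with the class label (correlation-informed initialisation)."""
    return [pearson(col, labels) for col in features]

feats = [[1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]]
w0 = corr_init_weights(feats, [1.0, 2.0, 3.0, 4.0])
```

Starting the DNN from label-informed weights rather than random ones is one concrete way the GR prior could steer subsequent training.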
Findings
The credibility of the implemented approach is rigorously validated using various parameters such as accuracy, precision, recall, F-score and area under the curve (AUC) score. During the training phase, the proposed algorithm is constantly regularized using the elastic net regularization technique and hyper-tuned using parameters such as momentum and learning rate to minimize the misprediction rate. The experimental results illustrate that the proposed approach predicted chronic disease with minimal error by avoiding possible overfitting and local minima problems. The results obtained with the proposed approach are also compared with various traditional approaches.
Research limitations/implications
Usually, diagnostic data are multi-dimensional in nature, and the performance of an ML algorithm degrades due to data overfitting and curse-of-dimensionality issues. The results obtained through the experiments achieved an average accuracy of 95%. Hence, further analysis can be made to improve the predictive accuracy by overcoming the curse-of-dimensionality issues.
Practical implications
The proposed ML model can mimic the behavior of a doctor's brain. These algorithms have the capability to take over some clinical tasks. The accurate results obtained through the innovative algorithms can free physicians from mundane care and practices so that they can focus more on complex issues.
Social implications
Utilizing the proposed predictive model at the decision-making level for the early prediction of disease is considered a promising change for the healthcare sector. The global burden of chronic disease can be reduced to an exceptional degree through these approaches.
Originality/value
In the proposed HGRDIN model, a transfer learning approach is used: the knowledge acquired through the GR process is applied to the DNN, which identifies the possible relationships between the dependent and independent feature variables by mapping the chronic data instances to their corresponding target classes before they are passed as input to the DNN. Hence, the results of the experiments illustrate that the proposed approach obtained superior performance in terms of the various validation parameters compared with the existing conventional techniques.