Performance analysis of selected metaheuristic optimization algorithms applied in the solution of an unconstrained task

Łukasz Knypiński (Institute of Electrical Engineering and Electronics, Poznan University of Technology, Poznan, Poland)

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering

ISSN: 0332-1649

Article publication date: 19 November 2021

Issue publication date: 26 August 2022


Abstract

Purpose

The purpose of this paper is to carry out an efficiency analysis of selected metaheuristic algorithms (MAs) based on the investigation of analytical test functions and of the optimization process for a permanent magnet motor.

Design/methodology/approach

A comparative performance analysis was conducted for selected MAs: the genetic algorithm (GA), the particle swarm optimization algorithm (PSO), the bat algorithm (BA), the cuckoo search algorithm (CS) and the only best individual algorithm (OBI). All of the optimization algorithms were developed as computer scripts. Next, all optimization procedures were applied to search for the optimal structure of a line-start permanent magnet synchronous motor using a multi-objective objective function.

Findings

The research results show that the best statistical efficiency (mean objective function value and standard deviation [SD]) is obtained for the PSO and CS algorithms, while the best single-run results are obtained for PSO and GA. The type of optimization algorithm should be selected taking into account the duration of a single optimization process. In the case of time-consuming processes, algorithms with a low SD should be used.

Originality/value

The newly proposed simple nondeterministic algorithm can also be applied to simple optimization calculations. On the basis of the presented simulation results, it is possible to assess the quality of the compared MAs.


Citation

Knypiński, Ł. (2022), "Performance analysis of selected metaheuristic optimization algorithms applied in the solution of an unconstrained task", COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, Vol. 41 No. 5, pp. 1271-1284. https://doi.org/10.1108/COMPEL-07-2021-0254

Publisher: Emerald Publishing Limited

Copyright © 2021, Łukasz Knypiński.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Unconstrained optimization algorithms can be divided into deterministic and nondeterministic methods (Azhir et al., 2019). Deterministic methods comprise gradient-free and gradient methods.

In the case of deterministic algorithms, the global minimum is sought along successive search directions. For gradient-free deterministic methods, the search directions are determined according to rules adopted before the optimization process begins (Larson et al., 2019). Two types of methods can be listed in this group: simple search methods (the Rosenbrock and Hooke-Jeeves methods) and methods with minimization along a search vector (the Powell and Gauss-Seidel methods). In the group of gradient methods, on the other hand, the search direction is created on the basis of the gradient of the objective function at the current point; gradient descent and the conjugate gradient method belong to this group. All deterministic methods use only a single point to search for the extreme point.

The rapid development of computer hardware and the increasing computing capabilities of computers now allow design calculations for electromagnetic devices to be performed using finite element analysis models (Arnoux et al., 2015; Barański et al., 2019; Driesen et al., 2001; Demenko and Stachowiak, 2020; Ibrahim et al., 2019; Wardach et al., 2018). Simplified lumped models are often replaced by more accurate finite element method (FEM) models.

FEM models are more complicated than lumped-parameter models. The mathematical model of an electromagnetic device is described by systems of equations representing multi-physics phenomena. In such models, the objective function very often consists of several partial criteria (Knypiński, 2021). The partial criteria are established by the functional parameters of the device, such as force, electromagnetic torque, mass and efficiency (Mutluer, 2021). The functional parameters are determined on the basis of the electromagnetic field distribution (Nowak et al., 2015). The accuracy of FEM models depends on many parameters, such as mesh density, time-step length and the assigned allowable computational error. Therefore, it is very difficult to determine the gradient of the objective function in FEM models, and gradient methods are not applied to solve optimization problems described by such models.

On the other hand, a systematic development of nondeterministic algorithms has been observed over the past 40 years. One important step in the expansion of such algorithms was the development of the genetic algorithm (GA) approach (Holland, 1985). Another important milestone was the development of a mathematical model of herd behavior (Reynolds, 1987). The following years saw a more rapid development of nondeterministic algorithms (nowadays called heuristic algorithms): the ant colony optimization (ACO) and particle swarm optimization (PSO) methods were developed (Colorni et al., 1992; Kennedy and Eberhart, 1995). More intensive development of heuristic algorithms started around 2005, when many new algorithms were designed: the artificial bee colony (ABC), the cuckoo search (CS), the bat algorithm (BA), the brain storm optimization algorithm (BSO), the gray wolf optimization algorithm (GWO) and the whale optimization algorithm (WOA) (Basturk and Karaboga, 2006; Yang and Deb, 2009; Shi, 2011; Mirjalili and Lewis, 2016). The years in which the selected algorithms were developed are given in Table 1. The latest algorithms are often referred to as nature-inspired optimization algorithms (Xin-She, 2014).

Metaheuristic algorithms (MAs) (this term is used for selected heuristic algorithms) are very well suited to solving optimization problems whose mathematical models are described by FEM models (Mutluer et al., 2020). The main advantage of these optimization algorithms is the use of a group of individuals (particles, bats, cuckoos, wolves, etc.) to search for a global extremum. These individuals can compete or cooperate with each other. In the case of MAs, it is also easy to attach a constraint function to the optimization algorithm. The penalty function is often used to solve optimization problems with constraints (Knypiński, 2021; Mutluer et al., 2020).
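
The penalty approach mentioned above can be sketched in a few lines. This is a generic illustration (not the author's implementation) for a minimization problem with inequality constraints g_i(x) ≤ 0; the helper name and the weight value are hypothetical.

```python
# Generic quadratic-penalty wrapper: an unconstrained metaheuristic can then
# minimize the returned function instead of the original constrained problem.
def penalized_objective(objective, constraints, weight=1.0e3):
    """Return a function computing f(x) + weight * sum(max(0, g_i(x))^2)."""
    def wrapped(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + weight * penalty
    return wrapped

# Example: minimize x^2 subject to x >= 1, expressed as g(x) = 1 - x <= 0
f = penalized_objective(lambda x: x ** 2, [lambda x: 1.0 - x])
print(f(0.5), f(1.5))   # the infeasible point x = 0.5 is heavily penalized
```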

This research was inspired by an attempt to identify the best algorithm among several MAs investigated.

This paper presents an analysis of the performance of selected MAs. Section 2 briefly characterizes the investigated algorithms. The performance analysis for analytical benchmark functions is discussed in Section 3. Section 4 presents the application of the studied algorithms to solve a technical optimization problem. The conclusions are presented in Section 5.

2. Selected metaheuristic optimization algorithms

2.1 Particle swarm optimization

The PSO algorithm was developed by Kennedy and Eberhart (1995). The approach was inspired by the collective behavior of bird flocks and fish shoals. During the optimization process, all particles cooperate with each other to find a global extreme point. Each particle j is a single solution to the problem under analysis and is described by a position vector xj and a velocity vector vj. All particles form a swarm. Each particle has information about the position of the leader xB (the best adapted particle in the swarm), its direction of motion in the previous iteration vj and the position of its local best xL. The velocity of the j-th particle in the k-th time step is calculated as follows (Devarapalli et al., 2021; Knypiński and Nowak, 2013):

(1) $v_k^j = w_1 v_{k-1}^j + \alpha r_1 \left( x_L^j - x_{k-1}^j \right) + \beta r_2 \left( x_B - x_{k-1}^j \right)$

where $w_1$ is the inertia factor; $v_{k-1}^j$ and $x_{k-1}^j$ are the velocity and position vectors of the j-th particle at the (k-1)-th time step, respectively; $\alpha$, $\beta$ are learning coefficients; and $r_1$, $r_2$ are random numbers from (0, 1).

The particle position vector is determined as follows:

(2) $x_k^j = x_{k-1}^j + v_k^j$

The block diagram of the PSO algorithm is shown in Figure 1.
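
A minimal sketch of the PSO update of equations (1) and (2), written in Python (one of the environments used for the procedures in Section 3). It is not the author's script: the clipping of positions to the admissible box and the random number generator are implementation assumptions.

```python
import numpy as np

# Minimal PSO sketch following equations (1)-(2); default parameter values
# repeat the settings reported in Section 3.
def pso(f, bounds, n_particles=50, k_max=40, w1=0.2, alpha=0.35, beta=0.45):
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng()
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    x_local = x.copy()                                # personal bests x_L
    f_local = np.array([f(p) for p in x])
    x_best = x_local[f_local.argmin()].copy()         # swarm leader x_B
    for _ in range(k_max):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w1 * v + alpha * r1 * (x_local - x) + beta * r2 * (x_best - x)  # eq. (1)
        x = np.clip(x + v, lo, hi)                                          # eq. (2)
        f_new = np.array([f(p) for p in x])
        improved = f_new < f_local
        x_local[improved] = x[improved]
        f_local[improved] = f_new[improved]
        x_best = x_local[f_local.argmin()].copy()
    return x_best, f(x_best)
```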

2.2 Genetic algorithm

In genetic algorithms, the optimization process is implemented through a natural selection mechanism. Individuals with different degrees of adaptation (objective function values) to environmental conditions compete for survival. The optimization process is carried out on a population of individuals called a generation. In subsequent iterations of the optimization algorithm, i.e. generations, three genetic operations responsible for improving the average adaptation of the population are carried out: reproduction (selection), crossover and mutation. To improve the convergence of the GA, a simple elitism procedure is very often applied (Knypiński and Nowak, 2013).
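
As an illustration of the operators listed above, a compact binary-coded GA sketch is given below. It is not the author's implementation: the chromosome length and crossover probability are illustrative assumptions, and the improved crossover procedure of (Knypiński, 2021) is not reproduced here.

```python
import numpy as np

# Compact binary-coded GA sketch: roulette-wheel selection, one cut-point
# crossover, bit mutation and simple elitism, for a minimization problem.
def decode(bits, lo, hi):
    """Map a bit string to a real value in [lo, hi]."""
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

def ga(f, bounds, n_bits=16, pop=50, k_max=40, pc=0.8, pm=0.006):
    rng = np.random.default_rng()
    n_var = len(bounds)
    chrom = rng.integers(0, 2, (pop, n_var * n_bits))
    def phenotype(c):
        return [decode(c[i * n_bits:(i + 1) * n_bits], *bounds[i]) for i in range(n_var)]
    for _ in range(k_max):
        cost = np.array([f(phenotype(c)) for c in chrom])
        elite = chrom[cost.argmin()].copy()                 # elitism: keep the best chromosome
        fitness = 1.0 / (1.0 + cost - cost.min())           # minimization -> positive fitness
        prob = fitness / fitness.sum()
        parents = chrom[rng.choice(pop, size=pop, p=prob)]  # roulette-wheel reproduction
        children = parents.copy()
        for i in range(0, pop - 1, 2):                      # one cut-point crossover
            if rng.random() < pc:
                cut = rng.integers(1, chrom.shape[1])
                children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
        mutate = rng.random(children.shape) < pm            # bit mutation
        children[mutate] ^= 1
        children[0] = elite                                 # restore the best individual
        chrom = children
    cost = np.array([f(phenotype(c)) for c in chrom])
    return phenotype(chrom[cost.argmin()]), cost.min()
```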

2.3 Bat algorithm

The mathematical model of the BA was developed on the basis of observations of the echolocation behavior of small bat species. The optimization procedure was proposed in 2010 (Yang, 2010). Each bat represents a single solution to the problem under analysis, and all the bats form a colony. Each bat is described by a velocity vector (vj), a position vector (xj), a variable frequency (Fj), a pulse emission rate (rj) and a loudness (Aj). The search for the global extremum takes place by randomly searching the permissible area. The colony leader is the bat with the best objective function value in the k-th iteration.

The position vector for the j-th bat in the k-th iteration is calculated as follows:

(3) $x_k^j = x_{k-1}^j + \left[ v_{k-1}^j + F_j \left( x_{k-1}^j - x_B \right) \right]$

(4) $F_j = F_{\min} + \beta \left( F_{\max} - F_{\min} \right)$

where $x_B$ is the position vector of the best bat in the colony, $\beta$ is a randomly selected number from the range (0, 1), and $F_{\min}$ and $F_{\max}$ are the minimum and maximum frequencies.

To improve the quality of the optimization process a trial shift of a random bat is performed in each iteration of the algorithm. The trial position x* near the selected bat is determined as follows:

(5) $(x^*)_k^j = x_k^j + \alpha A_{av}$

where $\alpha$ is a random number from the range (0, 1) and $A_{av}$ is the average loudness of the bat colony at the k-th time step.

If the trial point has a better value of the objective function than xkj, then xkj is replaced by the new x* (Knypiński, 2017). The loudness Aj and pulse emission rate rj of the j-th bat are determined as follows:

(6) $A_{k+1}^j = \zeta A_k^j, \qquad r_{k+1}^j = r_0 \left[ 1 - \exp(-\gamma k) \right]$

where $\gamma$ and $\zeta$ are constant factors of the BA and $r_0$ is the initial value of the pulse emission rate.

The block diagram of the BA algorithm is shown in Figure 2 (Knypiński, 2017).
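
A simplified sketch of the bat-algorithm loop of equations (3) to (6). It is not the author's procedure: a single shared loudness and emission rate are used for the whole colony (a simplification of the per-bat quantities above), and the nonzero default r0 is an illustrative assumption.

```python
import numpy as np

# Simplified bat-algorithm sketch following equations (3)-(6).
def bat_algorithm(f, bounds, n_bats=50, k_max=40, f_min=0.0, f_max=1.0,
                  a0=1.0, r0=0.5, zeta=0.75, gamma=0.5):
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng()
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_bats, dim))
    v = np.zeros_like(x)
    cost = np.array([f(p) for p in x])
    best = cost.argmin()
    x_best = x[best].copy()
    loudness, rate = a0, 0.0
    for k in range(1, k_max + 1):
        beta = rng.random((n_bats, 1))
        freq = f_min + beta * (f_max - f_min)              # eq. (4)
        v = v + freq * (x - x_best)
        x = np.clip(x + v, lo, hi)                         # eq. (3)
        if rng.random() > rate:                            # occasional local trial shift
            j = rng.integers(n_bats)
            trial = np.clip(x[j] + rng.random(dim) * loudness, lo, hi)   # eq. (5)
            if f(trial) < f(x[j]):                         # accept a better trial point
                x[j] = trial
        cost = np.array([f(p) for p in x])
        best = cost.argmin()
        x_best = x[best].copy()
        loudness *= zeta                                   # eq. (6), shared loudness
        rate = r0 * (1.0 - np.exp(-gamma * k))             # eq. (6), shared emission rate
    return x_best, cost[best]
```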

2.4 Cuckoo search

The optimization process is carried out on a population of individuals. In the mathematical model of the algorithm, cuckoo nests and eggs are introduced (Li et al., 2020). Nests are potential locations where other cuckoos may lay eggs. Eggs, on the other hand, are acceptable solutions to the problem under analysis. All cuckoos lay their eggs in known locations – nests.

In the natural environment, the cuckoo is a brood parasite and lays its eggs in the nests of other bird species. If the host birds discover a foreign egg, they may remove it from their nest or abandon the nest and build a new one in a new location. In the CS algorithm, such phenomena are taken into account by creating a certain number of new nests in new locations; this is implemented by replacing nests with probability pa.

The candidate for the new positions of each j-th nest in the k + 1 iteration is determined by the following equation:

(7) $x_{k+1}^j = x_k^j + \alpha \, \kappa(\lambda)$

where $\alpha > 0$ is the step size scaling factor and $\kappa(\lambda)$ is the Levy flight coefficient.

The random Levy flight coefficient κ(λ) is drawn from the Levy distribution and can be calculated as follows:

(8) $\kappa(\lambda) = k^{-\lambda}, \qquad 1 < \lambda \le 3$
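
A minimal cuckoo-search sketch following equations (7) and (8). It is not the author's procedure: the Mantegna-style generator for the Levy step, the step-size value and the merging of cuckoos and nests into a single array are illustrative assumptions.

```python
import math
import numpy as np

def levy_step(size, lam=1.5, rng=None):
    """Draw Levy-distributed steps with exponent lam (1 < lam <= 3), Mantegna's method."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2) /
             (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_search(f, bounds, n_nests=50, k_max=40, alpha=0.01, pa=0.25):
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng()
    nests = rng.uniform(lo, hi, (n_nests, len(lo)))
    cost = np.array([f(n) for n in nests])
    for _ in range(k_max):
        # candidate positions generated by Levy flights, eq. (7)
        new = np.clip(nests + alpha * levy_step(nests.shape, rng=rng), lo, hi)
        new_cost = np.array([f(n) for n in new])
        better = new_cost < cost
        nests[better], cost[better] = new[better], new_cost[better]
        # abandon a fraction pa of the worst nests and rebuild them at random locations
        n_drop = max(1, int(pa * n_nests))
        worst = np.argsort(cost)[-n_drop:]
        nests[worst] = rng.uniform(lo, hi, (n_drop, len(lo)))
        cost[worst] = np.array([f(n) for n in nests[worst]])
    best = cost.argmin()
    return nests[best], cost[best]
```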

2.5 Only best individual algorithm

The only best individual (OBI) method was developed by the author, during a lecture given to students in the last semester of their master's degree, to present the importance of the group leader in a heuristic optimization process. It is a very simple optimization method that can be compared with other metaheuristic methods. The optimization process is carried out on a group of individuals, and the position of the leader (the best adapted individual) must be known in each iteration. The position of the j-th individual at the k-th time step is determined as follows:

(9) $x_k^j = x_{k-1}^j + r_1 a_k \left( x_{k-1}^j - x_B \right)$

where $r_1$ is a random number from the range (0, 1) and $a_k$ is a coefficient that depends on the iteration number.

At the beginning of the optimization process, larger values of the coefficient are adopted, so the algorithm behaves more like a global search. During the process, the value of this coefficient is reduced. The value of the coefficient is determined according to the following formula (Knypiński et al., 2020):

(10) $a_k = a_1 + a_2 e^{-k}$

where $a_1$, $a_2$ are constant factors.

The part of the algorithm discussed above causes all individuals to move toward the leader. If the analyzed objective function has several extremes, it is very easy to get stuck at a local extreme point. To improve the convergence of the algorithm, several new individuals are generated. The number of new individuals in the OBI algorithm is defined as NN. The location point of each of the NN individuals is calculated as follows:

(11) $x_k^j = x_r + r_2 b \left( x_r - x_B \right)$

where $x_r$ is the randomly generated location of an individual, $r_2$ is a random number from the range (0, 1) and $b$ is a constant factor.

The values of the objective functions are determined for all individuals at the end of each iteration. Then, the NN number of the worst individuals is eliminated.

The block diagram of the OBI algorithm is shown in Figure 3.
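
A sketch of the OBI iteration of equations (9) to (11). Because the sign in the extracted equation (9) is ambiguous, the step below follows the textual description and moves individuals toward the leader; this reading, and the bound clipping, are assumptions. The parameter values repeat the settings reported in Section 3.

```python
import numpy as np

# OBI sketch: leader-directed moves (eqs 9-10) plus NN fresh individuals (eq. 11),
# followed by elimination of the NN worst individuals.
def obi(f, bounds, n_ind=50, k_max=40, a1=0.3, a2=2.0, b=0.5, nn=5):
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng()
    x = rng.uniform(lo, hi, (n_ind, len(lo)))
    cost = np.array([f(p) for p in x])
    for k in range(1, k_max + 1):
        x_best = x[cost.argmin()].copy()                      # group leader
        a_k = a1 + a2 * np.exp(-k)                            # eq. (10)
        r1 = rng.random((n_ind, 1))
        x = np.clip(x + r1 * a_k * (x_best - x), lo, hi)      # eq. (9), "toward the leader" reading
        # eq. (11): inject NN fresh individuals built around random locations
        x_rand = rng.uniform(lo, hi, (nn, len(lo)))
        r2 = rng.random((nn, 1))
        fresh = np.clip(x_rand + r2 * b * (x_rand - x_best), lo, hi)
        x = np.vstack([x, fresh])
        cost = np.array([f(p) for p in x])
        keep = np.argsort(cost)[:n_ind]                       # eliminate the NN worst individuals
        x, cost = x[keep], cost[keep]
    best = cost.argmin()
    return x[best], cost[best]
```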

3. Convergence analysis of selected metaheuristic algorithms using analytical functions

To carry out the performance analysis, all developed optimization procedures were used to determine the minima of two analytical functions. Two test functions were used to analyze the performance of the algorithms, namely, the Rosenbrock function and the Himmelblau function.

The Rosenbrock function is defined by the following equation:

(12) $f_1(x_1, x_2) = (1 - x_1)^2 + 100 \left( x_2 - x_1^2 \right)^2$

where $x_1 \in (-2.0,\ 2.0)$ and $x_2 \in (-1.5,\ 3.0)$.

The Rosenbrock function has one global minimum, equal to 0, at the point x1 = 1, x2 = 1. Determining the global minimum inside the long, flat Rosenbrock valley is very difficult.

The Himmelblau function is a multi-modal test function with four identical minima and one local maximum. The four minima are equal to 0 and have the following coordinates: (3, 2), (−2.80511, 3.13131), (−3.7793, −3.28318) and (3.58442, −1.84812). The maximum, at the point x1 = −0.2708, x2 = −0.9230, is equal to 181.617. The Himmelblau function is described by the following equation:

(13) $f_2(x_1, x_2) = \left( x_1^2 + x_2 - 11 \right)^2 + \left( x_1 + x_2^2 - 7 \right)^2$

where $x_1 \in (-6.0,\ 6.0)$ and $x_2 \in (-6.0,\ 6.0)$.
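
For reference, the two test functions of equations (12) and (13) and their search ranges can be written directly as:

```python
# Benchmark functions from equations (12) and (13).
def rosenbrock(x):        # eq. (12), global minimum f = 0 at (1, 1)
    x1, x2 = x
    return (1 - x1) ** 2 + 100 * (x2 - x1 ** 2) ** 2

def himmelblau(x):        # eq. (13), four minima equal to 0
    x1, x2 = x
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

rosenbrock_bounds = [(-2.0, 2.0), (-1.5, 3.0)]
himmelblau_bounds = [(-6.0, 6.0), (-6.0, 6.0)]
```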

All the studied optimization procedures were developed by the author in the Borland Delphi 7.0 and Python environments. The coefficients of each optimization algorithm were selected on the basis of many trial calculations carried out in previous research works (Knypiński and Nowak, 2013; Knypiński et al., 2017; Knypiński, 2017; Knypiński et al., 2021). The optimization procedure for each method (PSO, GA, BA, CS and OBI) was repeated 20 times. In each case under analysis, the number of individuals was N = 50 and a maximum number of iterations kmax = 40 was adopted as the stop criterion. The analysis was deliberately performed for a small number of individuals. The following values of the PSO coefficients were used: w1 = 0.2, α = 0.35 and β = 0.45; they were chosen on the basis of many computer simulations to ensure good convergence of the optimization procedure. Next, calculations were performed for the GA procedure with a mutation probability pm = 0.006. The procedure consists of three genetic operators: reproduction, crossover and mutation (Knypiński and Nowak, 2013). Additionally, a simple elitism procedure was used to preserve the best individual during the genetic operations, especially mutation. Roulette-wheel reproduction and one-cut-point chromosome crossover were applied, together with an improved crossover procedure (Knypiński, 2021). The parameters of the BA procedure were assumed according to a previous experiment (Knypiński, 2017): frequency range Fmin = 0, Fmax = 1.0, initial pulse emission rate r0 = 0, initial loudness A0 = 1, ζ = 0.75 and γ = 0.5. For the CS procedure, the following parameters were defined: number of cuckoos N = 50, number of nests n = 62 and probability pa = 0.25. For the OBI procedure, the following parameters were assumed: a1 = 0.3, a2 = 2.0, b = 0.5 and NN = 5; their values depend on the defined ranges of the design variables and were selected on the basis of a number of simulation calculations.
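
The statistical protocol behind Tables 2 and 3 (20 independent runs per algorithm, collecting the best, worst and mean final objective values, the population variance and the standard deviation) can be sketched as follows. The optimizer interface and the use of population rather than sample statistics are assumptions; the `pso`, `rosenbrock` and `rosenbrock_bounds` names refer to the earlier sketches.

```python
import statistics

# Run an optimizer several times and summarize the final objective values.
def evaluate(optimizer, f, bounds, runs=20):
    results = [optimizer(f, bounds)[1] for _ in range(runs)]
    return {
        "best": min(results),
        "worst": max(results),
        "mean": statistics.mean(results),
        "PV": statistics.pvariance(results),   # population variance
        "SD": statistics.pstdev(results),      # standard deviation
    }

# Example: print(evaluate(pso, rosenbrock, rosenbrock_bounds))
```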

3.1 Calculation for f1 function

Calculations were performed for the f1(x1, x2) function. The optimization script was run 20 times for each optimization procedure. The results obtained are presented in Table 2. The columns list the objective function value of the best and the worst solution, the mean objective function value, the population variance (PV) of all obtained solutions and the standard deviation (SD).

Figure 4 presents a plot of the average loudness (Aav) and the average pulse emission rate (rav) of the BA for selected iterations. The convergence process of all the studied algorithms (the best optimization run) is presented in Figure 5.

It can be observed that all the optimization procedures (PSO, GA, BA, CS and OBI) correctly determined a point near the global minimum. The investigated function is quite difficult, because a large area of the Rosenbrock valley is strongly flattened. The optimization processes required the following numbers of objective function calls: PSO = 2,000, GA = 4,108, BA = 2,000, CS = 1,778 and OBI = 2,201. The best result was obtained for the PSO and OBI algorithms, whereas the lowest quality of all the statistical data was obtained for the BA. The best SD was also obtained for the PSO algorithm.

3.2 Calculations for f2 function

Next, the calculations were performed for the f2(x1, x2) function. The results are shown in Table 3.

In the case of the multi-extremum function, the best results were obtained for the GA and CS algorithms, whereas the best SD was achieved by the CS procedure. It can be noted that the GA has a relatively high standard deviation and mean for both analyzed functions; such results are due to the fact that the number of individuals in the population is too small. It can also be concluded that both functions are difficult for the BA: bats moving randomly through the search space can find points near different local minima in successive time steps.

4. Optimization of a line-start permanent magnet synchronous motor

To analyze the efficiency of the selected algorithms, the optimization of a line-start permanent magnet synchronous motor (LSPMSM) was performed. The optimization procedures under analysis were coupled with a mathematical model of the LSPMSM developed in the Maxwell software.

The main objective of the optimization process was to design a new rotor structure for the stator of an induction motor. In new designs of electric motors, especially PM motors, the steady-state parameters are very important (Kim et al., 2009). In the case of the LSPMSM, transient parameters, such as starting torque and synchronization capability should also be considered. By considering only steady-state parameters in the objective function, degradation of transient parameters can be observed (Baek et al., 2011). The transient parameters can be regarded as constraints for the optimal design. They can also be taken into account in a multi-objective function, as additional components.

The LSPMSM was described by the following four design variables: r is the distance between poles, l is the length of the permanent magnet, g is the thickness of the permanent magnet and N is the number of turns in the stator slot. The structure of the motor is presented in Figure 6. The motor may be used to drive crane equipment in the future, so it should be characterized by a high starting torque. The shape of the rotor bars was selected to ensure a significant value of starting torque.

The multi-objective function (Arsyad et al., 2021) consists of two components, representing the steady-state and the transient parameters. The objective function has the following form:

(14) $f(l, g, r, N) = \lambda \left[ \dfrac{\eta(l, g, r, N)}{\eta_0} \cdot \dfrac{\cos\varphi(l, g, r, N)}{\cos\varphi_0} \right] + \omega \left[ \dfrac{T_S(l, g, r, N)}{T_{S0}} \cdot \dfrac{T_{80}(l, g, r, N)}{T_{80,0}} \right]$

where $\eta(l, g, r, N)$, $\cos\varphi(l, g, r, N)$, $T_S(l, g, r, N)$ and $T_{80}(l, g, r, N)$ are the efficiency, power factor, starting torque and electromagnetic torque at a speed equal to 80% of the synchronous speed, respectively; $\eta_0$, $\cos\varphi_0$, $T_{S0}$ and $T_{80,0}$ are the mean values of these parameters for the initial population; and $\lambda$, $\omega$ are the weighting factors.
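
A hedged sketch of how the objective (14) can be evaluated. The FEM calls are replaced by hypothetical placeholder stubs (the real values come from the Maxwell models), the product-of-ratios form follows the reconstruction of equation (14) above, and the weights repeat the values λ = 0.63 and ω = 0.37 reported below.

```python
# Sketch of the normalized, weighted objective of eq. (14); to be maximized.
LAMBDA, OMEGA = 0.63, 0.37   # weighting factors reported in Section 4

def fem_steady_state(design):
    """Placeholder for the steady-state FEM model: returns (eta, cos_phi)."""
    return 0.93, 0.98

def fem_transient(design):
    """Placeholder for the transient FEM model: returns (T_s, T_80)."""
    return 23.0, 81.0

def objective(design, ref):
    """design = (l, g, r, N); ref holds the initial-population means eta0, cosphi0, Ts0, T80_0."""
    eta, cos_phi = fem_steady_state(design)
    t_s, t_80 = fem_transient(design)
    steady = (eta / ref["eta0"]) * (cos_phi / ref["cosphi0"])
    transient = (t_s / ref["Ts0"]) * (t_80 / ref["T80_0"])
    return LAMBDA * steady + OMEGA * transient   # eq. (14)
```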

The calculation was performed for the four-pole LSPMSM with power PN = 3 kW. The motor was supplied from a three-phase grid with a mains voltage equal to UN = 400 V. The design parameters of the motor are as follows: stator outer diameter DS = 154 mm, stator inner diameter Di = 95 mm and rotor outer diameter do = 94 mm. The number of stator slots is equal to Ns = 36 while the number of slots in the rotor is Nr = 28.

Optimization calculations were performed for the PSO, GA, CS and OBI algorithms. Because it achieved the worst results in the analysis of the analytical functions, the BA was not applied. The optimization process for each algorithm was carried out 10 times with independent starting populations. The calculations were performed for 38 individuals per population, and the maximum number of iterations was 30. The values of the objective function weighting parameters (λ and ω) were selected on the basis of many trial calculations performed in Excel; during the optimization process, the coefficients were set to λ = 0.63 and ω = 0.37.

The mathematical model consisted of two independent sub-models describing the steady state and the transient state during the start-up of the motor. To obtain all the parameters used in the objective function for one individual/particle/cuckoo, both models must be evaluated. The optimization calculations were carried out on a computer equipped with an AMD Ryzen 5 processor and 16 GB of RAM. The total calculation time for one call of the objective function is about 3 min. The results of the computer calculations are presented in Table 4.

Based on the obtained results, it can be concluded that the efficiency and power factor values are similar for all the analyzed algorithms. Two possible solutions for the permanent magnet length are obtained, i.e. l ≈ 54 mm (PSO, CS) or l ≈ 49 mm (GA, OBI). Similar values of the r parameter are also obtained by the same pairs of optimization algorithms. The best values of the transient-state parameters are obtained using the CS algorithm, while the best steady-state parameters are obtained using the PSO algorithm.

5. Conclusions

The efficiency of selected metaheuristic optimization algorithms was compared. In the first stage, calculations were made for two analytical functions. The number of individuals used in the studied algorithms was deliberately kept small: because finite element models are frequently used for the optimal design of electromagnetic devices, the total calculation time is very long. Researchers very often look for a compromise between the calculation time and the accuracy (quality) of the optimal result. The calculation time can be reduced by applying models with a lower mesh density or by using a smaller number of individuals and a smaller number of iterations.

Depending on the type of function analyzed (single-extremum or multi-extremum), the best results were obtained for the PSO and GA methods, respectively. The best mean value was obtained for PSO and CS, and the best standard deviations were also obtained for PSO and CS. The BA had the worst statistical results among the analyzed algorithms.

In the case of the LSPMSM optimization problem, the results obtained were similar, although it can be observed that two preferred vectors of design variables were obtained by the optimization algorithms. After a thorough analysis of the results, it can be concluded that many of the available algorithms can lead to similar solutions for a real technical object, and trying several of them significantly extends the total calculation time. With this in mind, an appropriate optimization procedure should be selected before starting any design calculations.

Figures

Figure 1. Block diagram of the PSO algorithm

Figure 2. Block diagram of the BA

Figure 3. Block diagram of the OBI algorithm

Figure 4. Plot of Aav and rav for selected iterations of the BA

Figure 5. Change in the value of the f1 function for the best individuals for the investigated algorithms

Figure 6. LSPMSM construction with marked design variables

Table 1. Years of algorithm establishment

Algorithm Acronym Year
Genetic algorithm GA 1985
Ant colony optimization ACO 1992
Particle swarm optimization PSO 1995
Artificial bee colony ABC 2005
Cuckoo search CS 2009
Bat algorithm BA 2010
Brain storm optimization BSO 2011
Gray wolf optimization GWO 2014
Whale optimization algorithm WOA 2016

Table 2. Statistical data for different optimization algorithms for the Rosenbrock function

Algorithm Best Worst Mean PV SD
PSO 1.29E-05 9.13E-03 1.77E-03 0.098E-03 3.28E-03
GA 4.50E-04 73.6E-03 27.1E-03 0.51E-03 23.9E-03
BA 6.76E-04 80.3E-03 22.5E-03 0.78E-03 29.4E-03
CS 3.96E-04 36.8E-03 6.95E-03 0.11E-03 10.6E-03
OBI 3.11E-05 52.0E-03 5.82E-03 0.21E-03 15.4E-03

Table 3. Statistical data for different optimization algorithms for the Himmelblau function

Algorithm Best Worst Mean PV SD
PSO 4.419E-3 0.363256 0.081733 0.726E-3 0.092668
GA 1.128E-3 0.271591 0.077230 0.623E-3 0.089039
BA 9.475E-3 0.277210 0.093658 0.986E-3 0.093954
CS 3.532E-3 0.064641 0.033158 0.518E-3 0.023714
OBI 3.657E-3 0.323527 0.080921 1.141E-3 0.114747

Table 4. Results of simulation calculations for different optimization algorithms

Algorithm r [mm] g [mm] l [mm] N η [%] cosφ TS [Nm] T80 [Nm]
PSO 9.39 2.64 54.12 41 93.04 0.998 22.35 80.54
GA 11.51 2.91 49.29 41 93.53 0.981 23.45 82.62
CS 9.86 2.68 54.26 41 93.19 0.985 24.46 83.06
OBI 11.49 2.91 49.39 41 93.39 0.987 22.19 80.48

References

Arnoux, P.H., Caillard, P. and Gillon, F. (2015), “Modeling finite-element constraint to run an electrical machine design optimization using machine learning”, IEEE Transactions on Magnetics, Vol. 51 No. 3, doi: 10.1109/TMAG.2014.2364031.

Arsyad, H., Suyuti, A., Said, S. and Akil, Y. (2021), “Multi-objective dynamic economic dispatch using fruit fly optimization method”, Archives of Electrical Engineering, Vol. 70 No. 2, pp. 351-366.

Azhir, E., Jafari Navimipour, N., Hosseinzadeh, M., Sharifi, A. and Darwesh, A. (2019), “Deterministic and non-deterministic query optimization techniques in the cloud computing”, Concurrency and Computation, Practice and Experience, Vol. 31 No. 7, pp. 1-17, doi: 10.1002/cpe.5240.

Baek, B., Kim, B. and Kwo, B. (2011), “Practical optimum design based on magnetic balance and copper loss minimization for a single-phase line start PM motor”, IEEE Transactions on Magnetics, Vol. 47 No. 10, pp. 3008-3011.

Barański, M., Szeląg, W. and Łyskawiński, W. (2019), “An analysis of a start-up process in LSPMSMs with aluminum and copper rotor bars considering the coupling of electromagnetic and thermal phenomena”, Archives of Electrical Engineering, Vol. 68 No. 4, pp. 933-946.

Basturk, B. and Karaboga, D. (2006), “An artificial bee colony (ABC) algorithm for numeric function optimization”, IEEE Swarm Intelligence Symposium 2006, Indianapolis, IN.

Colorni, A., Dorigo, M., Maniezzo, V., Varela, F. and Bourgine, P. (1992), “Distributed optimization by ant colonies”, Computer Science, pp. 134-142.

Demenko, A. and Stachowiak, D. (2020), “Finite element and experimental analysis of an axisymmetric electromechanical converter with a magnetostrictive rod”, Energies, Vol. 13 No. 5, pp. 1-12, doi: 10.3390/en13051230.

Devarapalli, R., Bhattacharyya, B., Sinha, N.K. and Dey, B. (2021), “Amended GWO approach based multi-machine power system stability enhancement”, ISA Transactions, Vol. 109, pp. 152-174, doi: 10.1016/j.isatra.2020.09.016.

Devarapalli, R., Sinha, N., Venkateswara Rao, B., Knypinski, Ł., Jaya Naga Lakshmi, N. and Márquez, F.P.G. (2021), “Allocation of real power generation based on computing over all generation cost: an approach of salp swarm algorithm”, Archives of Electrical Engineering, Vol. 70 No. 2, pp. 337-349.

Driesen, J., Belmans, R.M. and Hameyer, K. (2001), “Finite-element modeling of thermal contact resistances and insulation layers in electrical machines”, IEEE Transactions on Industry Applications, Vol. 37 No. 1, pp. 15-20, doi: 10.1109/28.903121.

Holland, J. (1985), “Genetic algorithms and the optimal allocations of trials”, Journal of Computing, Vol. 2 No. 2, pp. 8-105.

Ibrahim, I., Mohammadi, M., Ghorbanian, V. and Lowther, D. (2019), “Effect of pulsewidth modulation on electromagnetic noise of interior permanent magnet synchronous motor drives”, IEEE Transactions on Magnetics, Vol. 55 No. 10, pp. 1-5.

Kennedy, J. and Eberhart, R. (1995), “Particle swarm optimization”, Proceedings of ICNN'95 – International Conference on Neural Networks, pp. 1-10, doi: 10.1109/ICNN.1995.488968.

Kim, W., Kim, K., Kim, S., Kang, D., Go, S., Lee, H., Chun, Y. and Lee, J. (2009), “A study on the optimal rotor design of LSPM considering the starting torque and efficiency”, IEEE Transactions on Magnetics, Vol. 45 No. 3, pp. 1808-1811.

Knypiński, Ł. (2021), "Constrained optimization of line-start PM motor based on the gray wolf optimizer", Eksploatacja i Niezawodnosc – Maintenance and Reliability, Vol. 23 No. 1, pp. 1-10.

Knypiński, Ł. (2017), “Optimal design of the rotor geometry of line-start permanent magnet synchronous motor using the bat algorithm”, Open Physics, Vol. 15 No. 1, pp. 965-970.

Knypiński, Ł., Jędryczka, C. and Demenko, A. (2017), “Influence of the shape of squirrel cage bars on the dimensions of permanent magnets in an optimized line-start permanent magnet synchronous motor”, Compel - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 36 No. 1, pp. 298-308.

Knypiński, Ł. and Nowak, L. (2013), “Optimization of the permanent magnet brushless DC motor employing finite element method”, Compel - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 32 No. 4, pp. 1189-1202.

Knypiński, Ł., Kuroczycki, S. and Marquez, F.P.G. (2021), “Minimization of torque ripple in the brushless DC motor using constrained cuckoo search algorithm”, Electronics, Vol. 10 No. 18, pp. 2299-1-2299-20.

Knypiński, Ł., Pawełkoszek, K. and Le Menach, Y. (2020), “Optimization of low-power line-start PM motor using gray wolf metaheuristic algorithm”, Energies, Vol. 13 No. 5, doi: 10.3390/en13051186.

Larson, J., Menickelly, M. and Wild, S. (2019), “Derivative-free optimization methods”, Acta Numerica, Vol. 28, pp. 287-404, doi: 10.1017/S0962492919000060.

Li, L., Xiao, D., Lie, H., Zhang, T. and Tian, T. (2020), “Using cuckoo search algorithm with Q-L learning genetic operation to solve the problem of logistic distribution center location”, Mathematics, Vol. 8 No. 2, doi: 10.3390/math8020149.

Mirjalili, S. and Lewis, A. (2016), "The whale optimization algorithm", Advances in Engineering Software, Vol. 95, pp. 51-67, doi: 10.1016/j.advengsoft.2016.01.008.

Mutluer, M. (2021), “Analysis and design optimization of permanent magnet motor with external rotor for direct driven mixer”, Journal of Electrical Engineering and Technology, Vol. 16 No. 3, pp. 1527-1538, doi: 10.1007/s42835-021-00706-8.

Mutluer, M., Sahman, A. and Cunkas, M. (2020), “Heuristic optimization based on penalty approach for surface permanent magnet synchronous machines”, Arabian Journal for Science and Engineering, Vol. 45 No. 8, pp. 6751-6767, doi: 10.1007/s13369-020-04689-y.

Nowak, L., Knypiński, Ł., Jędryczka, C. and Kowalski, K. (2015), “Decomposition of the compromise objective function in the permanent magnet synchronous motor optimization”, COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 34 No. 2, pp. 496-504, doi: 10.1108/COMPEL-07-2014-0173.

Reynolds, C. (1987), “Flocks, herds, and schools: a distributed behavioral model”, ACM Siggraph Computer Graphics, Vol. 21 No. 4, pp. 25-34.

Shi, Y. (2011), “Brain storm optimization algorithm”, Advances in Swarm Intelligence, pp. 303-309, doi: 10.1007/978-3-642-21515-5_36.

Wardach, M., Palka, R., Paplicki, P. and Bonislawski, M. (2018), “Novel hybrid excited machine with flux barriers in rotor structure”, Compel - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 37 No. 4, pp. 1489-1499.

Xin-She, Y. (2014), Nature-Inspired Optimization Algorithms, Elsevier.

Yang, X.S. (2010), “A new metaheuristic bat-inspired algorithm”, Studies in Computational Intelligence, Vol. 284, pp. 65-74, doi: 10.1007/978-3-642-12538-6_6.

Yang, X.S. and Deb, S. (2009), “Cuckoo search via lévy flights”, World Congress on Nature and Biologically Inspired Computing, pp. 210-214.

Further reading

Yan, W., Chen, H., Liu, X., Ma, X., Lv, Z., Wang, X., Palka, R., Chen, L. and Wang, K. (2018), “Design and multi-objective optimization of switched reluctance machine with iron loss”, IET Electric Power Applications, Vol. 13 No. 4, pp. 435-444, doi: 10.1049/iet-epa.2018.5699.

Acknowledgements

This research was funded by the Poznan University of Technology, grant number [0212/SBAD/0543].

Corresponding author

Łukasz Knypiński can be contacted at: lukasz.knypinski@put.poznan.pl

About the author

Łukasz Knypiński received his MS degree and PhD in Electrical Engineering from Poznan University of Technology, Poland, in 2007 and 2016, respectively. Currently, he works at Poznan University of Technology as an Assistant Professor in the Mechatronics and Electrical Machines Division, Institute of Electrical Engineering and Electronics. His research interests are electrical machines, in particular permanent magnet brushless DC machines, permanent magnet synchronous motors and nondeterministic optimization strategies. He has published over 70 conference and journal papers on electrical machines, electromagnetics and optimization. Since 2017, he has been the Secretary of the Editorial Board of Archives of Electrical Engineering.
