Reference point based evolutionary multi-objective optimization with dynamic resampling for production systems improvement

Amos H.C. Ng (University of Skövde, Skövde, Sweden)
Florian Siegmund (University of Skövde, Skövde, Sweden)
Kalyanmoy Deb (Michigan State University, East Lansing, Michigan, USA)

Journal of Systems and Information Technology

ISSN: 1328-7265

Publication date: 12 November 2018



Stochastic simulation is a popular tool among practitioners and researchers alike for quantitative analysis of systems. Recent advancement in research on formulating production systems improvement problems into multi-objective optimizations has provided the possibility to predict the optimal trade-offs between improvement costs and system performance, before making the final decision for implementation. However, the fact that stochastic simulations rely on running a large number of replications to cope with the randomness and obtain some accurate statistical estimates of the system outputs, has posed a serious issue for using this kind of multi-objective optimization in practice, especially with complex models. Therefore, the purpose of this study is to investigate the performance enhancements of a reference point based evolutionary multi-objective optimization algorithm in practical production systems improvement problems, when combined with various dynamic re-sampling mechanisms.


Many algorithms consider the preferences of decision makers to converge to optimal trade-off solutions faster. There also exist advanced dynamic resampling procedures to avoid wasting a multitude of simulation replications on non-optimal solutions. However, very few attempts have been made to study the advantages of combining these two approaches to further enhance the performance of computationally expensive optimizations for complex production systems. Therefore, this paper proposes some combinations of preference-based guided search with dynamic resampling mechanisms into an evolutionary multi-objective optimization algorithm to lower both the computational cost in re-sampling and the total number of simulation evaluations.


This paper shows the performance enhancements of the reference-point based algorithm, R-NSGA-II, when augmented with three different dynamic resampling mechanisms with increasing degrees of statistical sophistication, namely, time-based, distance-rank and optimal computing budget allocation, when applied to two real-world production system improvement studies. The results have shown that the more stochasticity the simulation models exhibit, the more the statistically advanced dynamic resampling mechanisms could significantly enhance the performance of the optimization process.


Contributions of this paper include combining decision makers’ preferences with dynamic resampling procedures; performance evaluations on two real-world production system improvement studies; and illustrating that statistically advanced dynamic resampling mechanisms are needed for noisy models.



Ng, A., Siegmund, F. and Deb, K. (2018), "Reference point based evolutionary multi-objective optimization with dynamic resampling for production systems improvement", Journal of Systems and Information Technology, Vol. 20 No. 4, pp. 489-512.





Copyright © 2018, Emerald Publishing Limited

1. Introduction

Real-world optimization problems very often involve multiple objectives that have to be considered simultaneously. In terms of production systems improvement, there are almost always at least two objectives to be considered – the targeted condition (Rother, 2009) and the cost of the improvement. As in many real-world multi-objective problems (MOPs), it is obvious that these two objectives are in conflict with each other, so that an improvement in one objective can only be obtained at the expense of degradation of the other objective. MOPs can be readily solved by a priori approaches which transform the problems into single-objective ones by aggregating the objective functions into a single, weighted-sum function. However, this is usually not applicable if the decision maker (DM) does not explicitly know how to weigh the various objectives before any optimal alternatives are known. At the same time, it is not easy to understand the effects of the weights, in terms of correlation and nonlinear outcomes, meaning that a small change in weights can alter the solution dramatically. In contrast to any a priori approaches in which the DM has to explicitly determine their preference regarding the objectives before the optimization process, a posteriori approaches aim to find the entire set of best “trade-off”, or so-called Pareto-optimal solutions, so that the DM can decide which solution to implement after the optimization has been completed. The goal of an a posteriori optimization approach is therefore to find a converged set of solutions that also feature wide diversity, spreading as much as possible in the objective space to form an optimal trade-off curve/surface, the so-called efficient frontier (EF). This approach is desirable, as it allows the DM to obtain a complete picture about the problem under study, e.g. the relationship between the decision variables and the objectives (Bandaru et al., 2017) and provides the DM with many alternatives to choose from.
However, it becomes a critical issue if the optimization process involves computationally expensive function evaluations, e.g. stochastic simulation runs on large-scale, complex models.

Regarding production systems analysis, stochastic simulation is not only a popular tool for the evaluations of long, complex and real-world production systems, but probably the only feasible choice, especially when the processing times and downtimes follow non-exponential or non-normal distributions (Negahban and Smith, 2014). Stochastic simulation is the only available choice for researchers and practitioners in industry alike if more complex flows and other types of variability (e.g. setups) are included in the study of unbalanced production lines. As claimed by Tempelmeier (2003), “If quantitative performance evaluation is carried out at all (in the industry), then in almost any case simulation is the only tool used.” Wang and Chatwin (2005) summarize the three major weaknesses of analytical/mathematical methodologies, compared to computer simulation: analytical evaluation is impractical when it encounters stochastic elements, such as many random and non-linear operations that exist in virtually any manufacturing system; due to randomness in a dynamic system which changes with time, the mathematical modeling of a complex dynamic system requires many simplifications which may cause the models to become invalid; analytical methods are not sufficient for optimization because mathematical models can only be built with simplifying assumptions that may affect the accuracy of the optimization results. Additionally, they also point out that in many cases, one must resort to simulation even though the system under study is analytically tractable because some performance measures of the system have values that can only be computed by running simulations. Having listed all these advantages offered by simulation, it is still an important issue that stochastic simulations rely on running a large number of replications to cope with the randomness and obtain some accurate statistical estimates at a specified level of confidence (Chen and Lee, 2010). 
While the recent advance in computing technology has greatly ameliorated this issue, Kleijnen (2015), however, correctly points out that modelers also demand greater modeling complexity when the technology advances. Therefore, it is not difficult to understand that the computational cost grows very quickly, with regard to handling a stochastic MOP in which the aim is to both generate a wide-spread Pareto-optimal solution set and, at the same time, ensure the statistical soundness of the optimal solutions. For instance, a simulation model that takes a CPU 3 min to execute a replication run will cost 2,500 h, if 10 replications are required and the optimization takes 5,000 evaluations, which for any DM would simply be a prohibitively long waiting period. As a matter of fact, it is not surprising that in a real industrial decision-making situation that involves a short lead time, either the simulation accuracy or the optimality of the solutions, or even both, have to be compromised for the DM if such a stochastic MOP approach has to be used. To address this issue, many techniques that aim to obtain more accurate results with a reduced total number of replications and/or evaluations have been proposed in the literature. In terms of reducing the number of replications, most of these techniques involve the idea that replications should be dynamically allocated to solutions that are believed to be near-optimal. This kind of dynamic allocation of replications is also commonly called dynamic resampling (DR), as the mechanism attempts to identify which solutions should be given more samples during the optimization process. Optimal computing budget allocation (OCBA) (Chen et al., 2008) is the best example of these advanced statistical techniques that can estimate the optimal number of replications for each solution. These can be combined with various evolutionary algorithms to optimize the accuracy of the optimal solutions.
For example, a confidence-based DR integrated into the well-known NSGA-II algorithm to improve the confidence of the Pareto ranking was proposed in Syberfeldt et al. (2010).
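To see the scale of the problem, the replication arithmetic can be multiplied out in a few lines (the per-replication time, replication count and evaluation count are the figures quoted above; the script itself is only illustrative):

```python
# Back-of-the-envelope cost of static (fixed) resampling:
# 3 min per replication, 10 replications per solution, 5,000 evaluations.
minutes_per_replication = 3
replications_per_solution = 10
evaluations = 5_000

total_minutes = minutes_per_replication * replications_per_solution * evaluations
total_hours = total_minutes / 60
print(total_hours)  # 2500.0 CPU hours if every solution gets the full budget
```

Any replication spent on a solution that turns out to be far from the EF contributes nothing to the final decision, which is exactly the waste DR mechanisms try to avoid.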

On the other hand, in terms of reducing the total number of evaluations for faster convergence toward the EF, a guided search that considers the preference(s) of the DM during the optimization seems to be a promising approach proposed by EMO researchers (Branke, 2008). While it may be impractical for a DM to completely specify the trade-offs between the different objectives as suggested in a priori approaches, it is also reasonable to assume that a DM has at least some rough idea about what he/she would prefer. In the literature related to multi-criteria decision-making (MCDM), there is an abundance of methods that a DM can use to interactively specify his/her preference(s) regarding the objectives. With such interactive methods, procedures of some iterative algorithms can be repeated so that a DM can progressively provide the preference information required to find the most preferred solution. Miettinen et al. (2008) identify three ways in which a DM can provide preference information to the interactive methods: trade-off information; classification-based and reference point (RP) approaches. In general, trade-off information concerns determining, to a certain quantity, how much improving a certain objective will deteriorate another one. Such a trade-off relationship can be measured objectively when the move from one feasible solution to another is considered, a so-called objective trade-off. It can also be acquired subjectively when the DM determines how much of the value of one objective can be sacrificed to desirably improve another one by a specified quantity. Accordingly, trade-off-based interactive methods follow two different schemes for finding a Pareto-optimal solution: either determine the objective trade-offs at each iteration and present them to the DM, or ask the DM to provide the subjective trade-off information.
The latter, subjective methods usually assume the DM has an implicit value function, v, so that the aim of the iterative algorithms is to use the progressively acquired trade-off information to approximate and maximize v.

With classification methods, the DM indicates his/her preferences by classifying which objective function should be improved and which one could be impaired or relaxed from its current values (Miettinen et al., 2008). For example, the satisficing trade-off method STOM, proposed by Nakayama (1995), asks the DM to classify the objective functions into three classes: I^<, whose values should be improved; I^>, whose values can be relaxed; and I^=, whose current values are accepted. The DM has to specify the desirable aspiration levels for the functions in I^< and the upper bounds for the functions in I^>.

In comparison, while classification methods assume that some objective function must be allowed to become worse, a reference point to be selected in the RP approach can be chosen more freely. The RP approach requires the DM to specify RP(s) in terms of aspiration and reservation levels for all objective functions. Pareto-optimal solutions can then be generated through maximizing some achievement scalarizing function (ASF) which is a non-linear aggregation of the objective functions. As an interactive approach, the DM is supposed to learn about the decision situation and to explore different interesting parts of the Pareto-optimal set through freely modifying the RPs. This learning aspect through interactively varying the preferences is more flexible, compared to the two other methods. On a 2D/3D plane, the RP approach would also allow the DM to easily verify whether the obtained results correspond to his/her preferences visually (Mkaouer et al., 2013). At the same time, the RP approach makes it possible to find Pareto-optimal solutions that correspond to multiple preference conditions simultaneously. Additionally, RP-based methods are applicable to MOPs with a large number of objectives (>3), a large number of variables, with linear or non-linear constraints (Chikumbo et al., 2012), partly because the preference relation gives a finer order of vectors of objective space than only the Pareto dominance relation (López-Jaimes and Coello Coello, 2014). With such motivations, several algorithms with the capability to focus the search toward the preference areas in the objective space, specified by RPs, have been proposed.

As reviewed in this introduction, many algorithms that consider the preferences of the DM are targeted at converging faster toward Pareto-optimal solutions. There are also significant efforts in proposing various DR procedures to avoid wasting an abundance of simulation replications on non-optimal solutions. However, to the best of our knowledge, very few attempts have been made to illustrate the advantages of combining these two approaches to further enhance the performance of the algorithm, especially if the problems under study involve computationally expensive evaluations, such as production systems simulation. Therefore, this paper aims to propose a combination of a preference-based guided search with DR into an evolutionary multi-objective optimization (EMO) algorithm to lower both the computational cost in re-sampling and simulation evaluations. Specifically, based on the above discussions about the advantages and flexibility offered by the interactive RP approach, we carry out an experimental, comparative study of performance enhancements of R-NSGA-II augmented with three DR mechanisms when applied to real-world production systems improvement. These DR mechanisms are: time-based, distance-rank and OCBA (the order follows the increasing degree of statistical sophistication). The results will also be compared with the optimizations generated with the original R-NSGA-II and NSGA-II.

In the remainder of this paper, we first introduce how a production systems improvement problem can be formulated into a MOP in Section 2, including an outline of how a RP-based algorithm can enhance the accuracy and efficiency when a low-budget computational cost (i.e. a limited number of re-samples) is sought or allowed. Then, in Section 3, we describe the three DR mechanisms selected for examination in this paper. Section 4 introduces R-NSGA-II and how it can be augmented with the selected DR mechanisms. Experimental results when applied to the DR-augmented R-NSGA-II in two real-world production system simulation models are presented in Section 5, followed by conclusions and future work in Section 6.

2. Production systems improvement as multi-objective problems

Consider a general MOP that consists of m objective functions, fi(x), i = 1,…, m, each of which can be minimized or maximized, with x as a decision (design) vector comprising n decision variables, xi, within their respective lower bounds (xi^L) and upper bounds (xi^U). Mathematically:

(1) Maximize/Minimize F(x) = {f1(x), …, fm(x)}
Subject to x = (x1, x2, …, xn)^T, xi^L ≤ xi ≤ xi^U, i = 1, …, n.

For a continuous MOP, we call Φ = ∏_{i=1}^{n} [xi^L, xi^U] ⊆ ℝ^n the design space. ℤm is called the objective space with the mapping F: Φ → ℤm that evaluates f1(x),…, fm(x), for x ∈ Φ. For a production systems design or improvement problem, xi can be either continuous or discrete. While processing times and availability are continuous by nature, they can be restricted to be discrete in most practical situations. Similarly, layout geometry and conveyor lengths can be formulated as either continuous or discrete.

In many MOPs, where the objectives fi(x) are in conflict with each other, finding a single best optimal design is impossible because improving one objective would deteriorate the other (Deb, 2004). This gives rise to the concept of the EF that denotes the best trade-off in ℤm with respect to fi(x). The definition of dominance is essential for an optimization to find and compute the x that constitute the EF: considering only Min{f1(x),…, fm(x)}, a design vector x1 is said to dominate another solution vector x2, denoted as x1 ≼ x2, iff fi(x1) ≤ fi(x2) ∀i ∈ {1,…, m}, but ∃j ∈ {1,…, m} s.t. fj(x1) < fj(x2). A decision vector x* ∈ Φ is Pareto-optimal if ∄x that dominates x*. In other words, the Pareto-optimal set, PS, consists of the x* that are non-dominated to each other in ℤm. Equivalently, an objective vector z* ∈ ℤm is Pareto-optimal if the decision vector corresponding to it is Pareto-optimal (Miettinen, 1999). With the above formulation, it is not difficult to understand that an optimization-based design approach will involve a process for finding the PS that forms the EF so that a DM can make a final selection on which z* to choose. In the literature, this is referred to as the post-optimality decision-making process that involves some higher-level information, very commonly including some preferences of the DM which were not formulated into the objectives in the MOP.
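The dominance relation defined above translates directly into code. A minimal sketch (all objectives treated as minimized, so a maximized objective such as throughput would be negated; the sample points are hypothetical):

```python
from typing import Sequence

def dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    """True iff objective vector f1 dominates f2 (all objectives minimized):
    f1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def non_dominated(front: list) -> list:
    """Filter a list of objective vectors down to its non-dominated subset."""
    return [p for p in front
            if not any(dominates(q, p) for q in front if q is not p)]

# Hypothetical (changes, -throughput) pairs, both to be minimized.
points = [(0, -70.0), (2, -80.0), (2, -75.0), (5, -80.0)]
print(non_dominated(points))  # [(0, -70.0), (2, -80.0)]
```

Here (2, −75.0) is dominated by (2, −80.0) (same number of changes, lower throughput) and (5, −80.0) is dominated because it needs more changes for the same throughput.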

Production systems improvement problems can be formulated readily as MOPs, as shown earlier in Pehrsson et al. (2011, 2013) and more recently in Pehrsson et al. (2016). The basic idea proposed was based on an observation that many decision-making situations in production system improvement projects can be effectively formulated into MOPs. The primary objective of the MOP is usually related to a key performance index (KPI), such as system throughput or manufacturing lead-time; the second objective of the MOP can be formulated as a function of investments, or steps of changes, or so-called improvement actions, needed to improve the KPI of the system (Pehrsson et al., 2016). For example, if the production system throughput (τ) is the primary KPI as the improvement target, so that f1(x) = maximize(τ), then we can define the total number of changes, i.e. improvement actions, as minimize(f2(x)) in the MOP. Consider three types of discrete, two-level decision variables, {αi, βi, γi}, i = 1,…, n, where n is the number of workstations, so that each variable can either retain its original value or be set to some improved value, representing the possible improvements for availability, processing times and repair times, respectively. For a workstation i, such that, e.g. αi = {0.9, 0.95}, βi = {65 s., 60 s} and γi = {20 min., 10 min.}, the second objective in the MOP can then be formulated mathematically as a summation function:

(2) f2(x) = min( Σ_{i=1}^{n} α̂i + Σ_{i=1}^{n} β̂i + Σ_{i=1}^{n} γ̂i )

where α̂i, β̂i, γ̂i ∈ {0, 1} are binary variables which represent whether the correspondent decision variables, αi, βi and γi, retain their original values or are changed to some improved values. For instance, with the above example, if αi is not changed (i.e. 0.9), then α̂i = 0. Otherwise α̂i = 1 if αi is improved to 0.95. Similarly, β̂i = 1 if βi is cut to 60 s and γ̂i = 1 if γi is reduced to 10 min. It is very important to note that improved values for βi and γi mean their values are reduced.
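The change-counting objective in equation (2) is straightforward to evaluate. A minimal sketch, assuming a hypothetical dictionary representation of the per-workstation parameters (the names `baseline`, `design` and `count_changes` are illustrative, not from the paper):

```python
def count_changes(baseline, design):
    """f2(x) of equation (2): the number of improvement actions, i.e. how
    many of the 3n variables (alpha = availability, beta = processing time,
    gamma = repair time) deviate from their original baseline values."""
    changes = 0
    for station, original in baseline.items():
        for var in ("alpha", "beta", "gamma"):
            if design[station][var] != original[var]:
                changes += 1  # the binary indicator is 1 iff the value changed
    return changes

# One workstation, matching the example values in the text.
baseline = {1: {"alpha": 0.90, "beta": 65, "gamma": 20}}
design   = {1: {"alpha": 0.95, "beta": 65, "gamma": 10}}
print(count_changes(baseline, design))  # 2 (alpha improved, gamma reduced)
```

The unchanged baseline trivially gives f2 = 0, matching the current condition τ0 in Figure 1.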

With this formulation, the second objective function is a discrete function that varies in the range [0, 3n] because the maximum number of changes can only be 3n. Figure 1 illustrates this bi-objective problem graphically, showing how the Pareto-optimal solutions in the objective space can be used to support the decision-making in regarding the selection of the optimal (minimal) changes required to improve the system to achieve the target throughput.

In Figure 1, the original throughput, τ0, represents the current condition when no improvement has been made, i.e. f2(x) = 0. If the target of the KPI is τt, then in the objective space, z*t denotes the optimal number of changes (ct) required to improve the production system from τ0 to τt. As illustrated, z*t dominates z1 as they have the same f2(x) value. However, a higher τ can be achieved with z*t. On the other hand, in this MOP, τmax can also be a very interesting value for the DM because it denotes the maximum τ that can be obtained for the production line under study when all possible changes that can be made in this MOP are considered. Note that τmax can be further increased if the processing times of the workstations, βi, can be further reduced to some values not considered in the MOP. However, as reducing physical processing times can only be done to a certain extent and αi is always <1 (max. availability 100 %) and γi ≥ 0, there is always a maximum practical τ in a real production system. Any solutions beyond z*max, e.g. z2, which give a τ ≤ τmax with an f2(x) value > cmax, are dominated by z*max because they require more, unnecessary changes and would certainly be considered as undesirable by the DM.

In a real-world decision-making process, finding z*t is a crucial process especially if the DM already has a clear target KPI value in mind, despite the fact that how this preference information in the objective space can be provided for the optimization process is an interesting issue that has induced research into different methods. On the other hand, although the DM may have the target τt clearly in mind, finding a single z*t, which could be done with a single-objective optimization process, can be undesirable because the DM may want to explore more possible solutions around (ct, τt), for several reasons. First, it may be preferable to select a slightly higher value than τt to add a safety factor to τt, to compensate for some possible evaluation error of the model (simulation) evaluation (see also the discussion below on the estimation of fi(x)) or to address what is called preference correlation. This is a situation where a DM would prefer to have one performance measure higher or lower than intended in order to compensate for another performance measure that is being higher or lower than normal (Rosen et al., 2007). Such a preference correlation situation can occur easily if the DM has explored the existence of some optimal trade-off solutions around τt that exhibit “knee” points, as they are called in the MOP literature (Branke et al., 2004; Deb and Gupta, 2011), shown as z*k in Figure 2. As illustrated in Figure 2a, although τt is the target, the cost (required changes, ct − ck) of advancing from z*k to z*t is high, so the small gain of τt − τk may not justify it; the DM might therefore relax τt and select z*k as the chosen solution for implementation. On the other hand, for the “knee” case in Figure 2b, the cost of increasing τt to τk, represented as ck − ct, is so small that the DM would be very comfortable selecting z*k.
In both cases, the DM has gained better knowledge about the behavior of the system under study and the decision can be made with greater confidence which could not be easily obtained without finding more trade-off solutions. In this way, the DM adjusts τt according to some broader information gained in a multi-objective optimization process, which cannot be easily acquired from single-objective optimization, and is referred to as “knowledge-driven optimization” (Bandaru et al., 2017).

With a simulation-based optimization approach, all function evaluations are done through the simulation model, giving only an estimation of fi(x). In other words, F(x) can only be estimated by F̂(x) through the performance values obtained from simulation replication runs. Stochastic simulation will result in a different F̂(x) in each replication, so the estimated objective function values used for the optimization are actually F̂(x) = { (Σ_{j=1}^{r} f1,j(x))/r, …, (Σ_{j=1}^{r} fm,j(x))/r }, where r is the number of replications. Therefore, MOPs are in general more computationally demanding than single-objective optimization, requiring many more function evaluations (Simon, 2013). The fact that stochastic simulation requires multiple replications to reduce the uncertainty of the objective function evaluations has further increased the computational burden of finding x*. Several important topics for future MOP research are given in Simon (2013), including automatic, on-line adaptation of tuning parameters; hybrid optimization and local search strategies; algorithms that can provide good performance with few function evaluations; efficient algorithms for many (>3) objectives; and the incorporation of user preferences into the algorithms. The DM is not always interested in the whole EF, which has led to the idea that optimization efficiency can be improved by incorporating the preference(s), so that the algorithm search will be focused on the area(s) of the EF of interest. This concept of acquiring the preferences from the DM to guide the search for trade-off solutions is a central theme in MCDM research.
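The replication-averaged estimate F̂(x) above can be sketched in a few lines; the `simulate` callback and the Gaussian noise model below are hypothetical stand-ins for a stochastic production system simulation:

```python
import random

def estimate_objectives(simulate, x, r):
    """Estimate F(x) by averaging r stochastic replications: each call to
    `simulate` returns one noisy objective vector for the design x."""
    samples = [simulate(x) for _ in range(r)]
    m = len(samples[0])
    return [sum(s[i] for s in samples) / r for i in range(m)]

# Toy stochastic "model": true objectives (10, 5) plus unit Gaussian noise.
random.seed(0)
noisy = lambda x: (10 + random.gauss(0, 1), 5 + random.gauss(0, 1))
print(estimate_objectives(noisy, None, r=100))  # approaches (10, 5) as r grows
```

The standard error of each component shrinks as 1/√r, which is precisely why accurate estimates are expensive and why DR tries to spend r only where it matters.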

3. Guided search using reference points

Using a reference point (RP) to indicate a DM’s aspiration levels in different objectives is perhaps the most important way of providing preference information in an interactive MCDM procedure (Branke, 2008). As suggested by Wierzbicki (1980, 1982, 1986), the RP approach can be used to find a weakly, є-properly or Pareto-optimal solution closest to a supplied RP of aspiration level, reflecting desirable values for the objective function, based on optimizing an achievement scalarizing function (ASF). Given a RP, z¯, for MOP of minimizing (f1(x), f2(x), …, fm(x)) as in equation (1), the following single-objective optimization is solved:

(3) Minimize max_{i=1,…,m} [ wi (fi(x) − z̄i) ]

Here, wi is the i-th component of a chosen weight vector used for scalarizing the objectives, and each component of the reference point represents the aspiration level that the decision maker requires for the corresponding objective. For a chosen RP, the closest Pareto-optimal solution, in the sense of the weighted achievement function, is the target solution found by a RP procedure. If the DM is interested in biasing some objectives more than others, a suitable weight vector can be used with the RP selected.
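Equation (3) is cheap to evaluate once the objective values are known. A minimal sketch, with a hypothetical two-objective reference point and unit weights (all objectives minimized):

```python
def asf(f, ref, w):
    """Achievement scalarizing function of equation (3):
    max over i of w_i * (f_i(x) - zbar_i), all objectives minimized."""
    return max(wi * (fi - zi) for fi, zi, wi in zip(f, ref, w))

# Solutions closer to the reference point (in the weighted Chebyshev sense)
# receive smaller ASF values, so minimizing the ASF steers toward the RP.
ref, w = (2.0, 2.0), (1.0, 1.0)
print(asf((3.0, 2.5), ref, w))   # 1.0
print(asf((2.5, 2.25), ref, w))  # 0.5 -> preferred over the first point
```

Note that the value can be negative when a solution dominates the RP, which is what lets the same formula handle both feasible and infeasible reference points.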

When using a RP search procedure, the RP can be given interactively, preferably with the approximated nadir and ideal objective vectors presented to the DM to illustrate the ranges of ℤm. The interactive procedure starts with the DM specifying a RP, z¯. By solving the ASF with a given z¯ and wi, a z* will be located on EF as shown in Figure 3. If the DM is not satisfied with the Pareto-optimal solutions generated, a new reference point can be suggested and the above procedure is repeated. In this way, reference points can be created, adapted or deleted interactively during the optimization run. By repeating the procedure with different reference points, the DM tries to evaluate the region of Pareto-optimality, instead of one particular Pareto-optimal point.

To make a RP procedure more interactive and useful in practice, Wierzbicki (1980) also suggested the generation of m more new RPs by using the following equation:

(4) z¯j=z¯+dej

where d = ‖z¯ − z*‖ and ej is the j-th unit vector for j = 1,…, m. By using this calculation, z¯j represents the newly generated RP along the j-th coordinate direction vector. The new m RPs can be used to find m more z* as shown in Figure 3. In this way, the new m RPs will form new ASFs so that m additional z*, instead of only one single Pareto-optimal solution, can be found. As d is proportional to the distance between z¯ and z*, an interesting effect with this RP approach is that a RP which is further away from the EF (e.g. z¯1 in Figure 3) results in larger spaced auxiliary reference points and thus can yield solutions with wider spread (s1), compared to z¯2 which is closer to EF and finds less-spaced auxiliary reference points (hence solutions with less spread, s2).
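The generation of auxiliary reference points in equation (4) can be sketched directly; the function name and the sample points below are illustrative:

```python
import math

def auxiliary_reference_points(zbar, zstar):
    """Equation (4): generate m extra RPs, one per coordinate direction,
    each offset from zbar by d = ||zbar - zstar|| along a unit vector e_j."""
    d = math.dist(zbar, zstar)          # Euclidean distance to the projection
    m = len(zbar)
    return [tuple(z + (d if i == j else 0.0) for i, z in enumerate(zbar))
            for j in range(m)]

zbar, zstar = (1.0, 1.0), (4.0, 5.0)    # here d = 5
print(auxiliary_reference_points(zbar, zstar))
# [(6.0, 1.0), (1.0, 6.0)] -- a farther RP yields wider-spaced auxiliaries
```

Because the spacing is proportional to d, the same code reproduces the spread effect described in the text: a distant RP spreads its auxiliary points (and hence the obtained solutions) more widely than a nearby one.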

Depending on the ASF selected, the solutions found by a RP-based method can be weakly or є-properly Pareto-optimal. For example, to avoid the generation of weakly Pareto-optimal solutions, a sufficiently small positive scalar, ρ, can be incorporated into equation (3), giving the augmented form in equation (5):

(5) Minimize max_{i=1,…,m} [ wi (fi(x) − z̄i) ] + ρ Σ_{i=1}^{m} (fi(x) − z̄i)

On the other hand, one common approach to avoid using the weights, wi, as additional preference information, is to calculate wi by normalizing each objective function (Miettinen, 1999; López-Jaimes and Coello Coello, 2014):

(6) wj = 1 / (zj^nad − zj^**)

Here z** ∈ ℤm is called a utopian, infeasible objective vector whose components are formed by zj^** = zj^idl − εj, using the corresponding j-th component of the ideal point (zj^idl) and a relatively small (but computationally significant) scalar εj for all j = 1,…, m. The basic idea is to avoid the case where the nadir point is equal to or very close to the ideal point.
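The normalization in equation (6) can be sketched as follows, assuming the nadir and ideal vectors are already known or approximated (the sample values are hypothetical):

```python
def normalized_weights(nadir, ideal, eps=1e-6):
    """Equation (6): w_j = 1 / (z_j^nad - z_j^**), where the utopian
    component z_j^** = z_j^idl - eps guards against nadir ~= ideal."""
    return [1.0 / (n - (i - eps)) for n, i in zip(nadir, ideal)]

# Objectives on very different scales end up with comparable influence
# in the ASF: the first objective spans [0, 100], the second [0, 1].
print(normalized_weights(nadir=[100.0, 1.0], ideal=[0.0, 0.0]))
# approximately [0.01, 1.0]
```

With these weights, a deviation of 10 units in the first objective counts the same as a deviation of 0.1 in the second, which is the point of the normalization.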

It is interesting to note that without using the absolute value of (fi(x) − z̄i) in the ASF function [equation (3)], the RP approach can be used for both a feasible RP and an infeasible RP (infeasible in every component i). A feasible RP means that ∃ z ∈ ℤm s.t. z ≼ z̄, i.e. there is at least one solution that dominates z̄. In other words, an infeasible point is a z̄ that cannot be achieved by any solution in the feasible objective space. If a reference point is feasible and is not a Pareto-optimal solution, the DM may then be interested in knowing about solutions which are Pareto-optimal and close to the reference point. On the other hand, if z̄ is an infeasible RP, the DM would be interested in finding Pareto-optimal solutions which are close to it. As suggested in the MCDM literature (Miettinen, 1999), a DM should never be pessimistic when choosing a RP to indicate his/her preference. Therefore, starting with some infeasible RP is recommended. Figure 3 also illustrates that an infeasible RP which is further away from the EF, such as z̄1, is desirable in the early interactive process, to explore a wider region of the EF. In general, as mentioned previously, how a DM can interactively vary the preferences during the optimization process to learn the behavior of the system under study through exploring the objective space is an important motivation for the use of a RP approach in the guided search, compared to other MCDM methods.

As described in Goulart and Campelo (2016), any Pareto-based algorithm can be adapted to incorporate the RP approach. The earliest use of RP to guide an EMO algorithm was proposed by Fonseca and Fleming (1993, 1998). In the current paper, we decided to investigate the combined effects of the DR and RP approaches on an existing, efficient and well-known EMO algorithm, namely, NSGA-II. Therefore, the RP extension of NSGA-II, i.e. R-NSGA-II (Deb et al., 2006), is further extended with three different DR mechanisms for running the experiments and then comparing the results with the original R-NSGA-II (no DR) and NSGA-II (with or without DR but no RP). In the next section, we introduce the three DR mechanisms before presenting the details of the DR-augmented R-NSGA-II in Section 5.

4. Dynamic resampling

The simplest resampling mechanism is to distribute the total replication budget equally among the evaluated solutions. This is a popular method among simulation-based optimization practitioners because it requires minimal implementation effort. If r denotes the number of replication runs for a single solution, x, and B is the total replication budget, then for such a static resampling mechanism, N = B/r solutions can be evaluated. While increasing r improves the accuracy of the estimate F^(x), it reduces N; at the same time, the replications assigned to evaluate non-optimal solutions are wasted. In comparison to such a static resampling mechanism, the basic idea of DR is to allocate ri for each solution i on a dynamic basis. An intuitive DR procedure is to add replications until the standard error drops below a user-defined threshold, et. The sampling budget required to reach this threshold can be calculated as:

(7) bs > (σ(fi(x))/et)^2

However, as the sample mean and variance change when new samples are added, this one-shot sampling allocation might not be optimal. The number of fitness samples drawn might be too small to reach the error threshold, in case the sample standard deviation turns out to be larger than the initial estimate. On the other hand, a one-shot strategy might add too many samples, if the initial estimate is too large. Therefore, DR is often done sequentially; through this sequential approach, the number of required samples can be determined more accurately. DR techniques can thus be classified into either one-shot sampling strategies or sequential sampling strategies. Other sequential sampling-based algorithms can be found in Branke and Schmidt (2004) and Bartz-Beielstein et al. (2007).
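The sequential approach described above can be sketched as a simple loop that re-checks the standard error after every added replication until it falls below et or a per-solution cap is reached. This is a minimal illustrative sketch, not the paper's exact procedure; the function name and the `simulate` callable are hypothetical:

```python
import random
import statistics

def sequential_resample(simulate, e_t, b_min=2, b_max=30):
    """Add replications one at a time until the standard error of the
    mean output drops below the threshold e_t [cf. equation (7)], or
    the per-solution cap b_max is reached. `simulate` is a hypothetical
    callable returning one noisy simulation output."""
    samples = [simulate() for _ in range(b_min)]
    while len(samples) < b_max:
        std_err = statistics.stdev(samples) / len(samples) ** 0.5
        if std_err < e_t:
            break  # estimate is accurate enough; stop resampling
        samples.append(simulate())  # re-estimate after each new sample
    return statistics.mean(samples), len(samples)

# Example: a noisy throughput evaluation with standard deviation 1.0
random.seed(0)
mean_tp, n_used = sequential_resample(lambda: random.gauss(50.0, 1.0), e_t=0.4)
```

In contrast, a one-shot strategy would compute equation (7) once from an initial variance estimate and draw all samples in a single batch.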

In the following sub-sections, we introduce the three different DR mechanisms based on various combinations of optimization time, distance to the RP and rank in the non-domination sorting of the solutions. In the following, we denote:

  • B = maximum overall number of simulation runs, i.e. the total budget; Bt = current overall number of simulation runs; and BF = final samples reserved for replicating the final set of solutions.

  • The number of samples allocated to a solution x in a DR iteration is bs. The minimum and maximum numbers of samples for an individual solution are bmin and bmax. In other words, for static resampling, each solution has a constant sampling budget, bs = bmin = bmax.

  • Acceleration parameter for the increase of the sampling budget: a > 0 – increasing a decreases the acceleration of the sampling budget, decreasing a increases the acceleration.

  • The calculated normalized sampling need, s, is discretized in the following way which guarantees that bmax is already assigned for s < 1:

(8) bs = min{bmax, ⌊s·(bmax − bmin + 1)⌋ + bmin}
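The discretization in equation (8) can be written directly as a one-line function. A minimal sketch; the function name is hypothetical:

```python
import math

def discretize_need(s, b_min=1, b_max=10):
    """Discretize a normalized sampling need s in [0, 1] into a number
    of replications per equation (8). The +1 inside the floor ensures
    that b_max is already assigned for some s < 1."""
    return min(b_max, math.floor(s * (b_max - b_min + 1)) + b_min)
```

For example, with bmin = 1 and bmax = 10, a need of s = 0.95 already yields the full budget of 10 replications.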

4.1 Time-based resampling

Time-based dynamic resampling, or TDR (Siegmund et al., 2013), is a generally applicable DR mechanism which can be used with any optimization algorithm. It assumes that the accuracy of the objective values should be increased gradually as the optimization converges, so that the maximum number of replications is used only toward the end of the optimization run; this hypothesis was supported in Aizawa and Wah (1993, 1994) for single-objective EAs. The strategy is to save sampling budget at the beginning of the optimization, when the solutions are still far from the Pareto front, and to support the algorithm with a higher budget when the solutions approach it. The normalized time-based resampling need, sT, is calculated as:

(9) sT = (Bt/B)^a

Time-based resampling is a DR technique that does not consider the variance of the solutions. It makes only one sampling decision for each solution according to the elapsed time, i.e. a one-shot allocation strategy. However, if a solution survives into the next generation of an evolutionary algorithm, the decision is made again. The total number of samples available for optimization is limited by the final samples, BF, added to the last population. For time-based resampling, this means the elapsed time needs to be adjusted so that 100 per cent of the time has already been reached when BBF samples have been used. The modified allocation function is shown in equation (10). This function is used in all the other resampling algorithms that apply the elapsed optimization time as one of their resampling criteria.

(10) sT = min{1, (Bt/(B − BF))^a}
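Equation (10) is a rescaling of the elapsed time so that the need saturates at 1 once B − BF samples have been consumed. A minimal sketch, with a hypothetical function name:

```python
def time_based_need(B_t, B, B_F, a=1.0):
    """Normalized time-based sampling need per equation (10): elapsed
    simulation runs B_t over the budget B minus the final samples B_F
    reserved for the last population, shaped by the acceleration
    exponent a and clamped at 1."""
    return min(1.0, (B_t / (B - B_F)) ** a)
```

For instance, with B = 5000 and BF = 500, the need reaches 1.0 already at Bt = 4500, leaving the reserved BF samples for the final population.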

4.2 Distance rank-based dynamic resampling (DR2)

An earlier attempt to combine multiple resampling criteria, merging the rank-based dynamic resampling and the distance-based dynamic resampling (DDR) strategies into so-called distance-rank dynamic resampling (DR2), was reported in Siegmund et al. (2015). It uses four different factors to determine the resampling allocation for individual solutions: Pareto-rank, time, reference point distance and progress. Although the elapsed optimization time can be combined with any other resampling criterion with little effort, as seen with rank-time-based dynamic resampling, the Pareto-rank and the reference point distance are two truly different resampling criteria. Therefore, the term hybrid is emphasized for the DR2 algorithm.

The allocation function for rank-based DR is given in equation (11). Rs stands for the Pareto-rank of solution s and Rmax is the maximum Pareto-rank of the population. Equation (12) shows the distance-based allocation of the DDR sampling algorithm. The reference point distance of solution s is denoted as ds and δ¯ is the current minimum reference point distance achieved by any solution in the population. The mechanisms regarding how we considered the elapsed optimization time and the population progress toward the reference point in combination with the reference point distance in equation (12), are described in Siegmund et al. (2013).

(11) bsR = 1 − ((Rs − 1)/(Rmax − 1))^a
(12) bsDDR = min{1, ((1 − ds)/(1 − δ¯))^a}
(13) bsDR2 = min{bmDDR, bsR}

Equation (13) describes the sampling allocation of DR2. Like rank-time-based resampling, the minimum of the normalized sampling needs of distance-based resampling and rank-based resampling is used to create a combined sampling allocation. However, the distance-based sampling need is not calculated individually for each solution. Instead, DR2 identifies the solution sm closest to the RP, dm = mink{dk}, and its normalized sampling need bmDDR is used in the formula as a fixed value for all solutions in the current generation. This way of combining rank and distance information was chosen for DR2 because it showed the best optimization results in different scenarios.
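The combination in equation (13) can be sketched as follows. This is a simplified illustration with a hypothetical function name: the distance-based need of the closest solution, bmDDR, is taken as a precomputed input (its full calculation also involves elapsed time and progress, as described in Siegmund et al. (2013)):

```python
def dr2_needs(ranks, ddr_m, a=1.0):
    """Normalized DR2 sampling needs [equation (13)]: each solution's
    rank-based need [equation (11)] capped by ddr_m, the distance-based
    need [equation (12)] of the single solution closest to the
    reference point, applied as one fixed value for the whole
    generation. `ranks` holds Pareto ranks (1 = non-dominated)."""
    r_max = max(ranks)
    def rank_need(r):  # equation (11)
        return 1.0 if r_max == 1 else 1.0 - ((r - 1) / (r_max - 1)) ** a
    return [min(ddr_m, rank_need(r)) for r in ranks]
```

Non-dominated solutions thus receive at most the budget justified by the population's proximity to the RP, while dominated solutions are capped by their Pareto rank.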

4.3 Distance-based OCBA-m dynamic resampling (D-OCBA-m)

With the distance information toward the preferred area available, a resampling algorithm can allocate samples to enable an optimal comparison between solutions regarding distance. Comparing solutions in a stochastic simulation scenario requires information about their objective variance; with it, a statistically significant decision can be made as to which of two solutions is closer to the reference point. For this purpose, we proposed the use of a Ranking and Selection algorithm called OCBA-m (Chen et al., 2008) to determine the solution subset of size m which contains the closest solutions to the RP with a desired accuracy. The Optimal Computing Budget Allocation is an R&S algorithm which allocates samples to the considered designs according to a scalar objective value. While the original OCBA procedure by Chen et al. (2000) allocates samples to identify the best design according to a single scalar objective value, the OCBA-m algorithm described in Chen et al. (2008) extends the original algorithm to identify the best m designs. As, in our case, the scalar objective value for OCBA-m is the distance to the RP, with its variance estimated from all objective value samples calculated so far, the variant is called distance-based OCBA-m dynamic resampling, or D-OCBA-m, in the following.

The sampling allocation problem that is to be solved by D-OCBA-m is stated in equation (14) (Siegmund et al., 2016). The corresponding notation is given in Table I. It is solved asymptotically, which means D-OCBA-m re-calculates the allocation iteratively, after having received new objective value evaluation data.

(14) max over N1, …, N|S| of (1/|S|) [ Σ sk∈Sm P{d¯k ≤ c} + Σ sk∉Sm P{d¯k > c} ]

s.t. Σ k=1…|S| Nk ≤ Ntotal,  n0 ≤ Nk ≤ bmax,  k = 1, …, |S|

where c = (σ¯km+1·d¯km + σ¯km·d¯km+1)/(σ¯km + σ¯km+1)

Consequently, in the list of solutions sorted by reference point distance, there are m − 1 solutions that are closer to z¯ than solution m is. The index km+1 belongs to the next best solution in the ordered list; this solution and all other worse solutions are not selected into the next population. Note also that n0 bounds Nk from below; it is related to bmin, but if bmin = 1, n0 should be larger so that the standard deviation can be estimated.

Based on the Karush–Kuhn–Tucker condition, the allocation is optimal if the following holds for all solutions i and j, ij (Chen et al., 2008):

(15) Ni/Nj = (σi/δ¯i)^2 / (σj/δ¯j)^2, where δ¯i = d¯i − c

This allocation can be achieved by selecting one solution arbitrarily (let it be solution s) and putting all others in relation to it, as in the equation below:

(16) βi = Ni/Ns, i = 1, …, |S|

The allocation for solution i, with respect to the problem constraint Σ k=1…|S| Nk ≤ Ntotal, can now be calculated approximately as in equation (17). This allocation will most probably not be optimal, as σi and δ¯i can only be estimated, not known exactly. The allocation is initially based on the n0 initial samples of each solution; after the sampling according to the allocation has been performed, a repeated allocation with the updated estimates will generally yield different Ni, which is why the procedure is iterated. Ntotal needs to be chosen as Ntotal > |S|·n0 to allow the minimum required allocation n0 for all solutions. Because βi is the ratio of Ni to Ns, Ni can be calculated as follows:

(17) Ni = (βi/Σj βj)·Ntotal = ((Ni/Ns)/Σj (Nj/Ns))·Ntotal = (Ni/Σj Nj)·Ntotal

Equation (17) can be used because Ni should be assigned its correct share of Ntotal. For more details of the iterative D-OCBA-m procedure, readers are referred to Siegmund et al. (2016).
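A single allocation pass following equations (15)-(17) can be sketched as below. This is an illustrative simplification with a hypothetical function name: it omits the iteration, the re-estimation of c, and the n0/bmax bounds of the full D-OCBA-m procedure, and assumes no mean distance coincides exactly with c:

```python
def ocba_m_pass(d_means, d_stds, c, n_total):
    """One allocation pass per equations (15)-(17): sample counts
    proportional to (sigma_i / delta_i)^2, with delta_i = d_mean_i - c,
    scaled so that they sum to n_total [equation (17)]."""
    # equation (15): pairwise ratios are induced by these weights
    weights = [(s / (m - c)) ** 2 for m, s in zip(d_means, d_stds)]
    total = sum(weights)  # plays the role of sum_j beta_j in eq. (16)
    return [w / total * n_total for w in weights]

# Three solutions: the two whose mean distances lie near the selection
# threshold c receive most of the budget
alloc = ocba_m_pass([1.0, 2.0, 3.0], [0.5, 0.5, 0.5], c=1.5, n_total=100)
```

Solutions whose mean distance is far from c are easy to classify as in or out of Sm and therefore need few samples, which is exactly what the squared ratio expresses.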

5. DR-Augmented R-NSGA-II

R-NSGA-II is based on the non-dominated sorting genetic algorithm II, NSGA-II (Deb et al., 2002), a widely used and representative EMO algorithm. NSGA-II sorts the solutions in the parent and offspring populations into different non-dominated fronts. The selection comprises all solutions in the best fronts that fit completely into the next population: if the population size is P and the number of solutions in those best fronts is Q, this requires Q ≤ P. In many cases, however, Q > P, so that not all fronts can be accommodated; then a subset of solutions with the largest distances to their neighbors is selected from the last accepted front. This selection mechanism, called crowding distance, guarantees that the resulting population is diverse. After selection has been completed, offspring solutions are generated by tournament parental selection, crossover and mutation, following the same fitness hierarchy; the offspring are evaluated and the selection step is performed again. The R-NSGA-II algorithm replaces the crowding distance operator by the distance to user-specified RP(s): solutions that are closer to a RP receive a higher selection priority. As shown previously, the DM defines the RPs in areas of interest to find more alternative solutions in some limited regions of the objective space. However, in order not to lose all diversity, a clustering mechanism with a minimum objective vector distance ε is used: solutions that are too close to each other (d < ε) are considered with a lower priority, as illustrated in Figure 4. The same fitness hierarchy (dominance first, then reference point distance and ε-clustering) is used for parental selection.
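The reference-point distance selection with ε-clustering can be sketched for a single non-dominated front as follows. A simplified illustration, not the full R-NSGA-II operator (no dominance sorting, a single RP); the function name is hypothetical:

```python
import math

def rp_select_front(front, ref_point, eps, k):
    """Select k solutions from one non-dominated front in the spirit of
    R-NSGA-II: smaller reference-point distance means higher priority,
    but a solution within eps of an already-kept one is demoted to
    preserve some diversity. `front` holds objective vectors."""
    ordered = sorted(front, key=lambda f: math.dist(f, ref_point))
    preferred, demoted = [], []
    for f in ordered:
        if all(math.dist(f, g) >= eps for g in preferred):
            preferred.append(f)
        else:
            demoted.append(f)  # inside an eps-cluster: lower priority
    return (preferred + demoted)[:k]

front = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0), (2.0, 2.0)]
chosen = rp_select_front(front, ref_point=(0.0, 0.0), eps=0.5, k=3)
```

Here (0.05, 0.0) is the second-closest solution to the RP, but it falls inside the ε-cluster of (0.0, 0.0) and is therefore passed over in favor of more distant, diverse solutions.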

The RPs can be created, adapted or deleted interactively during the optimization run, taking effect each time a new generation is started. In case all RPs have been removed, R-NSGA-II will continue the optimization as an instance of the regular NSGA-II algorithm. As R-NSGA-II uses non-domination sorting, it tends to prioritize population diversity before convergence to the RPs. Therefore, extensions have been proposed which limit the influence of the Pareto-dominance and allow the algorithm to focus faster on the RPs (Siegmund et al., 2012).

The R-NSGA-II algorithm is described in pseudocode in Algorithm 1; the steps that are augmented with a DR mechanism are highlighted. As R-NSGA-II is a population-based EA, the DR algorithm should aim at supporting the correct selection of the best solutions for the next generation.

Compared to its predecessor, NSGA-II, less has been reported on the application of R-NSGA-II to real-world problems. One application that gained some publicity was reported in Chikumbo et al. (2012) and concerned a land use management problem in New Zealand comprising 14 objectives and 3,150 integer variables. Other reported applications of RP-based EMO, in particular R-NSGA-II, include logistics network design (Rajabalipour et al., 2014), reversible logic circuit design (Wang and Wang, 2014) and workflow scheduling in computing grids (Garg, 2011; Garg and Singh, 2012; Navaz and Ansari, 2012) or cloud computing (Verma and Kaushal, 2015). In manufacturing, there are very few reported applications of R-NSGA-II, except, for instance, tool sequence and parameter optimization of rough milling (Churchill et al., 2013a, 2013b). To the best of our knowledge, the application studies presented in the next section are among the first to test R-NSGA-II on complex production system models.

6. Industrial-scale applications and results

This section addresses the experimental setup and results of using the DR-augmented R-NSGA-II on two real-world industrial application studies, both found in the automotive industry. The optimizations have the same aim – to effectively identify the exact areas of optimal improvements required to reach the desired target condition within a limited simulation budget, B = 5000.

The first model represents a complex assembly line comprising 71 workstations (i.e. n = 71), 72 buffers and pallets carrying two variants through assembly and disassembly processes. The improvements considered are a 20 per cent reduction of each processing time αi, an increase of each availability βi from its current value to 0.98 and a 50 per cent reduction of each repair time γi, i = 1, …, n. The second complex production system is an automotive engine machining line that includes multiple parallel sections, gantry robots, machining centers and assembly stations conducting multiple operations. Apart from some maintenance issues that affect the availabilities of the machines, multiple variants have to be processed on the line, and variations in the weekly volume contribute to the higher variability of the system compared to the previously mentioned assembly line. The crucial issue for the company was the poor production capacity of this line, far below the capacity required to maintain production, so a lean improvement project was in order. However, given the size, complexity and variability of the line, it was believed to be extremely difficult to locate where and what to improve, let alone to predict the effect of the improvements, if only traditional Lean tools, e.g. value stream mapping, were used. For these two cases, it was obvious that not a single but multiple improvements had to be made to achieve the targeted throughput levels. The first model is described in more detail in Bernedixen et al. (2015), which discusses the importance of avoiding tedious and error-prone manual optimization setup through software support. The engine machining line under study is the same one presented in Ng et al. (2014); while that article focused on the relationship between bottleneck improvements using NSGA-II, the current paper mainly focuses on the results of applying the DR-augmented R-NSGA-II.
Various versions of the simulation models have been built, but the latest complete models for the optimization studies were developed using FACTS Analyzer (Ng et al., 2007, 2008). Figure 5 illustrates the FACTS Analyzer model developed for the assembly line using standard modeling objects provided by the software, including multiple Operation, Buffer, Assembly, Disassembly and CONstant Work-In-Process (CONWIP) Control objects. As is typical for discrete-event simulation models built for improvement studies, the complexity lies not in the product flow but in the scale of the model: many attributes can be subjected to change. Using trial-and-error simulation experiments to evaluate all these possible changes is not only too time-consuming but also wastes the potential offered by a valid model.

The five algorithm configurations in the experiments were as follows:

  1. NSGA-II with static resampling and 10 samples (Static10).

  2. R-NSGA-II - Static10.

  3. R-NSGA-II - Time-based Dynamic Resampling 1-10.

  4. R-NSGA-II - Distance Rank-based Dynamic Resampling (DR2) 1-10.

  5. R-NSGA-II - Distance-based OCBA-m Dynamic Resampling (D-OCBA-m) 1-10.

Two different experiment setups were run:

  1. An optimistic reference point close to the ideal point, z¯1.

  2. z¯2, a reference point above the maximum achievable throughput, τmax.

The reference points for the assembly line were z¯1 = (0, 100) and z¯2 = (100, 110) and for the engine machining line model were z¯1 = (0, 100) and z¯2 = (400, 100).

As stated, both optimization studies were allowed to run with B = 5000. The results from five replicated optimizations are illustrated using median attainment surfaces (Knowles, 2005). The results from the two experiment setups for the two industrial application cases with the five algorithm configurations are shown in Figures 6-10, where the minImp-maxTP plots represent the two objectives used in all the optimizations, namely, Minimize(Improvement Actions) as defined in equation (2) and Maximize(Throughput).

For the assembly model with two different RPs (Figures 6 and 7), distinguishing the performance among the three DR mechanisms seems to be difficult. Unexpectedly, the difference between R-NSGA-II Static10 and NSGA-II Static10 is not significant. However, what can be concluded here is that, in general, DR-based algorithms perform significantly better than Static Resampling (Static10). For the case with RP z¯2, D-OCBA-m performs best, and there is no clear advantage with using Time-based DR or DR2 (Figure 7). R-NSGA-II Static10, however, is clearly better than NSGA-II for the solutions close to z¯2.

On the other hand, the result variances from using different DR mechanisms are more pronounced for the engine machining line. In Figure 8, the attainment surface results for the optimistic RP (z¯1, closer to the ideal point) show that the best performance is achieved by R-NSGA-II with DR2 as the DR mechanism, followed by D-OCBA-m and time-based DR. The higher throughput performance obtained with R-NSGA-II DR2 can be statistically verified using the confidence-based dominance post-optimality analysis, first introduced in Ng et al. (2008) and later used for a systematic comparison of production control mechanisms in the objective space using NSGA-II (Ng et al., 2012). The basic idea of confidence-based dominance is to consider the statistical significance when determining whether an objective function of a solution, i.e. a design vector x1, is better than that of another solution vector x2, before a dominance relationship (e.g. x1 ≼ x2 iff fi(x1) ≤ fi(x2) ∀i ∈ {1, …, m} and ∃j ∈ {1, …, m} s.t. fj(x1) < fj(x2)) can be concluded, as described earlier in Section 2. In comparison to the minCT-maxTP analysis, i.e. Minimize(Cycle Time) against Maximize(Throughput), considered in Ng et al. (2012), the analysis for the minImp-maxTP plots in the current paper is simpler because only the objective function value of system throughput is subject to uncertainty from the stochastic simulation. In other words, the Welch difference test used in Ng et al. (2012) and other simulation literature is applied only to f2(x), as below:

(18) (f¯2(x1) − f¯2(x2)) ± t(f^, 1−α/2)·√(σ1²/n1 + σ2²/n2)

where f¯2(x1) and f¯2(x2) represent the mean values of the objective function f2(x) (i.e. maxTP) of the two solutions, x1 and x2, under the statistical difference test; σ1 and σ2 are their standard deviations, calculated from the n1 and n2 replications, respectively, that were set by the algorithms when generating the solutions. The student-t distribution value for a (1 − α)% confidence is taken with a degree of freedom f^, computed and rounded down using the following equation:

(19) f^ = (σ1²/n1 + σ2²/n2)² / [(σ1²/n1)²/(n1 − 1) + (σ2²/n2)²/(n2 − 1)]

Provided that the simulation outputs conform to normality, the Welch difference test is extremely useful for comparing two solutions, x1 and x2, generated with different numbers of replications (i.e. n1 ≠ n2) because of the application of DR during the optimization process. By making a complete pairwise comparison of all the Pareto-optimal solutions generated by the different algorithms in multiple optimization runs, the outstanding performance of DR2 in finding solutions with higher throughput in the region closer to z¯1 can be statistically verified by visualizing the significantly non-dominated solutions, as shown in Figure 9. In this figure, only the solutions that are statistically better in their minImp-maxTP values, among all the generated solutions, after the Welch difference test, are visualized. Error bars showing the ±95% standard errors of the significantly non-dominated solutions obtained from the confidence-based dominance post-optimality analysis are also plotted with these solutions to represent the variability of the simulation outputs.
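Equations (18) and (19) can be implemented directly. A minimal sketch with a hypothetical function name: because the standard library has no t-distribution quantile, the t value is supplied by the caller (looked up externally for the returned degrees of freedom):

```python
import math

def welch_ci(m1, s1, n1, m2, s2, n2, t_quantile):
    """Welch confidence interval for the difference of two throughput
    means, per equations (18) and (19). The solutions may have
    different replication counts n1 != n2 due to DR. `t_quantile` is
    t_{f, 1-alpha/2} for the returned (rounded-down) degrees of
    freedom f. Returns (lower, upper, f)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    f = math.floor((v1 + v2) ** 2 /
                   (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1)))  # eq. (19)
    half_width = t_quantile * math.sqrt(v1 + v2)               # eq. (18)
    diff = m1 - m2
    return diff - half_width, diff + half_width, f

# x1 is significantly better than x2 if the interval lies above zero;
# 2.131 is the t quantile for 15 degrees of freedom at 95% confidence
lo, hi, f = welch_ci(105.0, 2.0, 10, 100.0, 3.0, 10, t_quantile=2.131)
```

If the resulting interval contains zero, no significant throughput difference between the two solutions can be concluded.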

For the second scenario with z¯2 as the RP, similar results have been achieved, as shown in Figure 10. However, in this case, the performance differences between the DR mechanisms on R-NSGA-II are insignificant and therefore we can assume they perform equally well. The significantly better results of DR algorithms, especially DR2 on the engine machining line are due to the higher stochasticity of the model (and the real line) caused by the higher variant-dependent processing times at the workstations and lower machine availabilities. In general, R-NSGA-II outperforms NSGA-II, while the DR algorithms on R-NSGA-II outperform static resampling.

7. Conclusions and future work

This paper has discussed and illustrated the advantages of combining the RP-based approach and DR mechanisms to enhance the performance of EMO algorithms for production systems improvement optimizations that involve stochastic, computationally expensive simulation evaluations. Specifically, experimental comparative evaluations of R-NSGA-II augmented with three DR mechanisms of increasing statistical sophistication, namely time-based, distance-rank and OCBA-based, were conducted on an assembly line and an engine machining line. The conflicting optimization objectives were the throughput of the lines and the improvements necessary to enhance it. The best results in the attainment surface plots were achieved by R-NSGA-II with DR2 as the DR mechanism, followed by D-OCBA-m and time-based DR. As a general conclusion, DR-based algorithms perform significantly better than algorithms with static resampling. In addition, the more stochasticity the simulation models exhibit, the more a statistically advanced DR can enhance the performance of the optimization process. As a result, decision makers can find better trade-off solutions with lower computational budgets. Future work involves extensive experiments with the DR-based algorithms described in this paper on both hypothetical and real-world models that exhibit various levels of stochasticity.


Figure 1. Objective space and EF by formulating a production systems improvement problem into a MOP

Figure 2. Possible "knee" points around τt in the objective space of the MOP for production systems improvement

Figure 3. Illustration of the terminology and various properties of the RP approach

Figure 4. Illustration of ε-clustering in R-NSGA-II: s1 and s2 are excluded as their distances to other solutions < ε

Figure 5. FACTS Analyzer model developed for the assembly line optimization

Figure 6. Median attainment surface results from the assembly model with z¯1 = (0, 100)

Figure 7. Median attainment surface results from the assembly model with z¯2 = (100, 110)

Figure 8. Median attainment surface results from the engine machining model with z¯1 = (0, 100)

Figure 9. Confidence-based post-optimality analysis results from the engine machining model with z¯1 = (0, 100)

Figure 10. Median attainment surface results from the engine machining model with z¯2 = (400, 100)

Table I. Notation for D-OCBA-m

d¯k – mean of the distance of the objective vector of solution sk to z¯, dk = d(sk, z¯)
σk – standard deviation of the stochastic variable dk
Nk – number of samples for solution k
σ¯k – standard deviation of the mean d¯k, i.e. σ¯k = σk/√Nk
n0 – minimum number of samples for all solutions
Ntotal – total sample budget allocated for this iteration
S – the set of all considered solutions (e.g. parents and offspring in an EMO)
Sm – the subset of S containing the best m solutions (closest to z¯)


Aizawa, A.N. and Wah, B.W. (1993), “Dynamic control of genetic algorithms in a noisy environment”, in Proceedings of the Fifth International Conference on Genetic Algorithms, pp. 48-55.

Aizawa, A.N. and Wah, B.W. (1994), “Scheduling of genetic algorithms in a noisy environment”, Evolutionary Computation, Vol. 2 No. 2, pp. 97-122.

Bandaru, S., Ng, A.H.C. and Deb, K. (2017), “Data mining methods for knowledge discovery in Multi-Objective optimization: Part a – survey”, Expert Systems with Applications, Vol. 70, pp. 139-159.

Bartz-Beielstein, T., Blum, D. and Branke, J. (2007), “Particle swarm optimization and sequential sampling in noisy environments”, Metaheuristics – Progress in Complex Systems Optimization, Operations Research/Computer Science Interfaces Series, Springer, Boston, MA. Vol. 39, pp. 261-273.

Bernedixen, J., Pehrsson, L., Ng, A.H.C. and Antonsson, T. (2015), “Simulation-based multi-objective bottleneck improvement: towards an automated toolset for industry”, in Winter Simulation Conference (WSC), IEEE, pp. 2183-2194.

Branke, J. and Schmidt, C. (2004), “Sequential sampling in noisy environments”, in Proceedings of the 8th International Conference on Parallel Problem Solving from Nature, 2004, Birmingham, Lecture Notes in Computer Science, Vol. 3242, pp. 202-211.

Branke, J., Deb, K., Dierolf, H. and Osswald, M. (2004), “Finding knees in multi-objective optimization”, in International Conference on Parallel Problem Solving from Nature, Springer Berlin Heidelberg.

Branke, J. (2008), “Consideration of partial user preferences in evolutionary multiobjective optimization”, in Multiobjective Optimization: Interactive and Evolutionary Approaches, Branke, J., Deb, K., Miettinen, K., Slowinski, R. (Eds.), Springer-Verlag, pp. 157-178.

Chen, C.H., Lin, J., Yücesan, E. and Chick, S.E. (2000), “Simulation budget allocation for further enhancing the efficiency of ordinal optimization”, Discrete Event Dynamic Systems, Vol. 10 No. 3, pp. 251-270.

Chen, C.H., He, D., Fu, M. and Lee, L.H. (2008), “Efficient simulation budget allocation for selecting an optimal subset”, INFORMS Journal on Computing, Vol. 20 No. 4, pp. 579-595.

Chen, C.H. and Lee, L.H. (2010), Stochastic Simulation Optimization – An Optimal Computing Budget Allocation, World Scientific Publishing Company.

Chikumbo, O., Goodman, E. and Deb, K. (2012), “Approximating a multi-dimensional pareto front for a land use management problem: a modified MOEA with an epigenetic silencing metaphor”, IEEE Congress on Evolutionary Computation, CEC 2012, pp. 10-15.

Churchill, A.W., Husbands, P. and Philippides, A. (2013a), “Multi-objective tool sequence and parameter optimization for rough milling applications”, IEEE Congress on Evolutionary Computation, CEC 2013, pp. 1475-1482.

Churchill, A.W., Husbands, P. and Philippides, A. (2013b), “Tool sequence optimisation using preferential multi-objective search”, in Proceeding of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation Conference Companion – GECCO, Amsterdam, The Netherlands, 2013, pp. 181-182.

Deb, K., Agarwal, S., Pratap, A. and Meyarivan, T. (2002), “A fast and elitist multiobjective genetic algorithm: NSGA-II”, IEEE Transactions on Evolutionary Computation, Vol. 6 No. 2, pp. 182-197.

Deb, K. (2004), Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley and Sons Ltd.

Deb, K., Sundar, J., Reddy, U.B. and Chaudhuri, S. (2006), “Reference point based multiobjective optimization using evolutionary algorithms”, International Journal of Computational Intelligence Research, Vol. 2 No. 3, pp. 273-286.

Deb, K. and Gupta, S. (2011), “Understanding knee points in bicriteria problems and their implications as preferred solution principles”, Engineering Optimization, Vol. 43 No. 11, pp. 1175-1204.

Fonseca, C.M. and Fleming, P.J. (1993), “Genetic algorithms for multiobjective optimization: formulation, discussion, and generalization”, International Conference on Genetic Algorithms, pp. 416-423.

Fonseca, C.M. and Fleming, P.J. (1998), “Multiobjective optimization and multiple constraint handling with evolutionary algorithms - Part I: a unified formulation”, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 28 No. 1, pp. 26-37.

Garg, R. (2011), “Multi-Objective optimization to workflow grid scheduling using reference point based evolutionary algorithm”, International Journal of Computer Applications, Vol. 22 No. 6, pp. 44-49.

Garg, R. and Singh, A.K. (2012), “Reference point based multi-objective optimization to workflow grid scheduling”, International Journal of Applied Evolutionary Computation, Vol. 3 No. 1, pp. 80-99.

Goulart, F. and Campelo, F. (2016), “Preference-guided evolutionary algorithms for many-objective optimization”, Information Sciences, Vol. 329, pp. 236-255.

Kleijnen, J.P.C. (2015), Design and Analysis of Simulation Experiments, 2nd ed., International Series in Operations Research and Management Science, Springer, Switzerland.

Knowles, J. (2005), “A summary-attainment-surface plotting method for visualizing the performance of stochastic multiobjective optimizers”, in Proceedings of the 5th International Conference on Intelligent Systems Design and Applications (ISDA), 8-10 September 2005, Wroclaw, Poland, IEEE Computer Society, pp. 552-557.

López-Jaimes, A. and Coello Coello, C.A. (2014), “Including preferences into a multiobjective evolutionary algorithm to deal with many-objective engineering optimization problems”, Information Sciences, Vol. 277, pp. 1-20.

Miettinen, K. (1999), Nonlinear Multiobjective Optimization, Kluwer Academic Publishers.

Miettinen, K., Ruiz, F. and Wierzbicki, A.P. (2008), “Introduction to multiobjective optimization: Interactive approaches”, in Multiobjective Optimization: Interactive and Evolutionary Approaches, Branke, J., Deb, K., Miettinen, K., Slowinski, R. (Eds.), Springer-Verlag, pp. 27-58.

Mkaouer, M.W., Kessentini, M., Bechikh, S. and Tauritz, D.R. (2013), “Preference-based multi-objective software modelling”, in Proceedings of 1st International Workshop on Combining Modelling and Search-Based Software Engineering, CMSBSE 2013, pp. 61-66.

Nakayama, H. (1995), “Aspiration level approach to interactive multi-objective programming and its applications”, in Advances in Multicriteria Analysis, Pardalos, P.M., Siskos, Y., Zopounidis, C. (Eds.), Kluwer Academic Publishers, Dordrecht, pp. 147-174.

Navaz, S. and Ansari, U. (2012), “An evolutionary algorithm in grid scheduling by multiobjective optimization using variants of NSGA”, International Journal of Scientific and Research Publications, Vol. 2 No. 9, pp. 1-5.

Negahban, A. and Smith, J.S. (2014), “Simulation for manufacturing system design and operation: Literature review and analysis”, Journal of Manufacturing Systems, Vol. 33 No. 2, pp. 241-261.

Ng, A.H.C., Bernedixen, J. and Pehrsson, L. (2014), “What does multi-objective optimization have to do with bottleneck improvement of production systems?”, in Proceedings of the Swedish Production Symposium, SPS’14, Swedish Production Academy, Gothenburg.

Ng, A.H.C., Bernedixen, J. and Syberfeldt, A. (2012), “A comparative study of production control mechanisms using simulation-based multi-objective optimization”, International Journal of Production Research, Vol. 50 No. 2, pp. 359-377.

Ng, A.H.C., Svensson, J. and Urenda, M. (2008), “Multi-Objective simulation optimization for production systems design using FACTS analyzer”, in Proceedings of the 2nd Swedish Production Symposium (SPS’08), Stockholm, Sweden, 2-3 November 2008.

Ng, A.H.C., Moris, M.U., Svensson, J., Skoogh, A. and Johansson, B. (2007), “FACTS analyzer: an innovative tool for factory conceptual design using simulation”, in Proceedings of the 1st Swedish Production Symposium (SPS07), Gothenburg, 28-30 August 2007, pp. 1-8.

Pehrsson, L., Ng, A.H.C. and Bernedixen, J. (2011), “Multi-objective production system optimisation including investment and running costs”, in Evolutionary Multi-Objective Optimization in Product Design and Manufacturing, Wang, L., Ng, A.H.C., Deb, K. (Eds.), Springer, pp. 431-454.

Pehrsson, L., Ng, A.H.C. and Stockton, D.J. (2013), “Industrial cost modelling for multi-objective optimisation of production systems”, International Journal of Computer and Industrial Engineering, Vol. 66 No. 4, pp. 1036-1048.

Pehrsson, L., Ng, A.H.C. and Bernedixen, J. (2016), “Automatic identification of constraints and improvement actions in production systems using multi-objective optimization and post-optimality analysis”, Journal of Manufacturing Systems, Vol. 39, pp. 24-37.

Rosen, S.L., Harmonosky, C.M. and Traband, C.M. (2007), “A simulation optimization method that considers uncertainty and multiple performance measures”, European Journal of Operational Research, Vol. 181 No. 1, pp. 315-330.

Rother, M. (2009), Toyota Kata: Managing People for Improvement, Adaptiveness and Superior Results, McGraw-Hill.

Siegmund, F., Ng, A.H.C. and Deb, K. (2013), “A comparative study of dynamic resampling strategies for guided evolutionary multi-objective optimization”, in Proceedings of the Congress on Evolutionary Computation, Cancún, México, pp. 1826-1835.

Siegmund, F., Ng, A.H.C. and Deb, K. (2015), “Hybrid dynamic resampling for guided evolutionary multi-objective optimization”, in Proceedings of the 8th International Conference on Evolutionary Multi-Criterion Optimization, Guimarães, Portugal, Lecture Notes in Computer Science, Vol. 9018, pp. 366-380.

Siegmund, F., Ng, A.H.C. and Deb, K. (2016), “A ranking and selection strategy for preference-based evolutionary multi-objective optimization of variable-noise problems”, IEEE Congress on Evolutionary Computation, CEC 2016, Vancouver, Canada, July 24-29.

Siegmund, F., Bernedixen, J., Pehrsson, L., Ng, A.H.C. and Deb, K. (2012), “Reference point-based evolutionary multi-objective optimization for industrial systems simulation”, in Proceedings of the Winter Simulation Conference 2012, Berlin.

Simon, D. (2013), Evolutionary Optimization Algorithms: Biologically Inspired and Population-Based Approaches to Computer Intelligence, John Wiley and Sons, NJ, pp. 557-558.

Syberfeldt, A., Ng, A.H.C., John, R.I. and Moore, P. (2010), “Evolutionary optimisation of noisy multi-objective problems using confidence-based dynamic resampling”, European Journal of Operational Research, Vol. 204 No. 3, pp. 533-544.

Rajabalipour, C.H., Islam, M.N. and Desa, M.I. (2014), “A polar-based guided multi-objective evolutionary algorithm to search for optimal solutions interested by decision-makers in a logistics network design problem”, Journal of Intelligent Manufacturing, Vol. 25 No. 4, pp. 699-726.

Tempelmeier, H. (2003), “Practical considerations in the optimization of flow production systems”, International Journal of Production Research, Vol. 41 No. 1, pp. 149-170.

Wang, Q. and Chatwin, C.R. (2005), “Key issues and developments in modelling and simulation-based methodologies for manufacturing systems analysis, design and performance evaluation”, The International Journal of Advanced Manufacturing Technology, Vol. 25 Nos 11/12, pp. 1254-1265.

Wang, X. and Wang, X. (2014), “Reference point-based evolutionary multi-objective optimization for reversible logic circuit synthesis”, in The 7th International Conference on Biomedical Engineering and Informatics (BMEI), Dalian, China, 2014, pp. 955-959.

Verma, A. and Kaushal, S. (2015), “Cost-time efficient scheduling plan for executing workflows in the cloud”, Journal of Grid Computing, Vol. 13 No. 4, pp. 495-506.

Wierzbicki, A.P. (1980), “The use of reference objectives in multiobjective optimization”, in Multiple Criteria Decision Making Theory and Applications, Fandel, G. and Gal, T. (Eds.), Springer-Verlag, Berlin, pp. 468-486.

Wierzbicki, A.P. (1982), “A mathematical basis for satisficing decision making”, Mathematical Modelling, Vol. 3 No. 5, pp. 391-405.

Wierzbicki, A.P. (1986), “On the completeness and constructiveness of parametric characterizations to vector optimization problems”, Operations-Research-Spektrum, Vol. 8 No. 2, pp. 73-87.


This work was partially financed by the Knowledge Foundation (KKS), Sweden, through the Blixt-Sim project (2011-2014). The authors gratefully acknowledge this provision of research funding and thank the industrial partners, Volvo Car Corporation and AB Volvo, for their collaborative support during the project.

Corresponding author

Amos H.C. Ng can be contacted at: