Search results
1 – 10 of 225

Anand Amrit, Leifur Leifsson and Slawomir Koziel
Abstract
Purpose
This paper aims to investigate several design strategies for solving multi-objective aerodynamic optimization problems using high-fidelity simulations. The purpose is to find strategies that reduce the overall optimization time while maintaining accuracy at the high-fidelity level.
Design/methodology/approach
Design strategies are proposed that use an algorithmic framework composed of search space reduction, fast surrogate models constructed using a combination of physics-based surrogates and kriging and global refinement of the Pareto front with co-kriging. The strategies either search the full or reduced design space with a low-fidelity model or a physics-based surrogate.
Findings
Numerical investigations of airfoil shapes in two-dimensional transonic flow are used to characterize and compare the strategies. The results show that searching a reduced design space produces the same Pareto front as searching the full space. Moreover, as the reduced space is two orders of magnitude smaller (volume-wise), the number of samples required to set up the surrogates can be cut by an order of magnitude. Consequently, the computational time is reduced from over three days to less than half a day.
Originality/value
The proposed design strategies are novel and holistic. The strategies render multi-objective design of aerodynamic surfaces using high-fidelity simulation data in moderately sized search spaces computationally tractable.
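As a side note on the machinery involved: extracting the non-dominated set that forms a Pareto front is simple to state. A minimal sketch (not the authors' implementation; it assumes two objectives, both minimized):

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (both objectives minimized).

    A point p is dominated if some other point q is no worse in every
    objective and differs from p (so q is strictly better somewhere).
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Example: (4, 6) is dominated by (3, 3); the rest trade off the two objectives.
designs = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (4.0, 6.0), (5.0, 2.0)]
front = pareto_front(designs)
```

This brute-force filter is quadratic in the number of points, which is fine for the candidate-set sizes a surrogate-based search typically produces.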
Haopeng Lou, Zhibin Xiao, Yinyuan Wan, Fengling Jin, Boqing Gao and Chao Li
Abstract
Purpose
In this article, a practical design methodology is proposed for discrete sizing optimization of high-rise concrete buildings with a focus on large-scale and real-life structures.
Design/methodology/approach
This framework relies on a computationally efficient approximation of the constraint and objective functions using a radial basis function model with a linear tail, also called the combined response surface methodology (RSM) in this article. Considering both the code-stipulated constraints and other construction requirements, three sub-optimization problems are constructed from a relaxation model of the original problem, so that the structural weight can be automatically minimized under multiple constraints and loading scenarios. After modularization, the obtained results meet the discretization requirements. By integrating the commercially available ETABS, a dedicated optimization software program with an independent interface was developed, and details of the practical software development are also presented in this paper.
Findings
The proposed framework was used to optimize different high-rise concrete buildings, and case studies showed that material usage could be saved by up to 12.8% compared to the conventional design, and the over-limit constraints could be adjusted, which proved the feasibility and effectiveness.
Originality/value
This methodology can therefore be applied by engineers to explore the optimal distribution of dimensions for high-rise buildings and to reduce material usage for a more sustainable design.
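The "RBF model with a linear tail" used here can be sketched as an interpolation problem: an augmented linear system couples the radial weights with the linear polynomial coefficients. The following is a minimal illustration under assumed choices (cubic kernel, toy data), not the paper's implementation:

```python
import numpy as np

def fit_rbf_linear(X, y):
    """Fit s(x) = sum_i w_i * phi(||x - x_i||) + c0 + c . x  with phi(r) = r**3.

    The augmented system [[Phi, P], [P^T, 0]] enforces exact interpolation
    plus orthogonality of the RBF weights to the linear polynomial space.
    """
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = r ** 3
    P = np.hstack([np.ones((n, 1)), X])                  # linear tail basis [1, x]
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([y, np.zeros(d + 1)])
    sol = np.linalg.solve(A, rhs)
    w, c = sol[:n], sol[n:]

    def predict(Xq):
        rq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
        return (rq ** 3) @ w + np.hstack([np.ones((len(Xq), 1)), Xq]) @ c

    return predict

# Toy 2-D data set standing in for sampled structural responses.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1])
model = fit_rbf_linear(X, y)
```

Because the system interpolates exactly, the surrogate reproduces every sampled response; the linear tail guarantees linear trends are captured even far from the samples.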
Zoubida Chorfi, Abdelaziz Berrado and Loubna Benabbou
Abstract
Purpose
Evaluating the performance of supply chains is a convoluted task because of the complexity that is inextricably linked to the structure of the aforesaid chains. Therefore, the purpose of this paper is to present an integrated approach for evaluating and sizing real-life health-care supply chains in the presence of interval data.
Design/methodology/approach
To achieve the objective, this paper illustrates an approach called Latin hypercube sampling by replacement (LHSR) to identify a set of precise data from the interval data; then the standard data envelopment analysis (DEA) models can be used to assess the relative efficiencies of the supply chains under evaluation. A certain level of data aggregation is suggested to improve the discriminatory power of the DEA models and an experimental design is conducted to size the supply chains under assessment.
Findings
The newly developed integrated methodology assists the decision-makers (DMs) in comparing their real-life supply chains against peers and sizing their resources to achieve a certain level of production.
Practical implications
The proposed integrated DEA-based approach has been successfully implemented to suggest an appropriate structure to the actual public pharmaceutical supply chain in Morocco.
Originality/value
The originality of the proposed approach comes from the development of an integrated methodology to evaluate and size real-life health-care supply chains while taking into account interval data. This developed integrated technique certainly adds value to the health-care DMs for modelling their supply chains in today's world.
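The core LHSR idea of drawing precise data sets from interval data can be illustrated with plain Latin hypercube stratification applied per interval. The helper name `lhs_from_intervals` and the interval values below are hypothetical, and this sketch omits the paper's replacement scheme and the DEA stage:

```python
import numpy as np

def lhs_from_intervals(lower, upper, n_samples, rng=None):
    """Latin hypercube draw of `n_samples` precise data sets from interval data.

    Each column (one uncertain quantity) is split into n_samples equal strata
    of [lower_j, upper_j]; exactly one point is drawn per stratum, and the
    strata are shuffled independently per column.
    """
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.uniform(size=(n_samples, d))) / n_samples
    return lower + u * (upper - lower)

# Example: three interval-valued quantities of a supply-chain unit (made up).
lo, hi = [10.0, 0.5, 100.0], [20.0, 0.9, 400.0]
samples = lhs_from_intervals(lo, hi, n_samples=8, rng=42)
```

Each of the eight rows is one precise data set that standard DEA models could then score, and the stratification guarantees the interval of every quantity is covered evenly.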
Andrew J. Collins, Michael J. Seiler, Marshall Gangel and Menion Croll
Abstract
Purpose
Agent-based modelling and simulation (ABMS) has seen widespread success through its applications in the sciences and social sciences over the last 15 years. As ABMS is used to model more and more complex systems, the number of input variables used within a simulation increases. Any uncertainty associated with these input variables can be investigated using sensitivity analysis, but when several input variables are uncertain, a single-parameter sensitivity analysis is not adequate. Latin hypercube sampling (LHS) offers a way to sample variations in multiple parameters without having to consider all of the possible permutations. This paper introduces the application of LHS to ABMS via a case study that investigates the mortgage foreclosure contagion effect.
Design/methodology/approach
Traditionally, uncertainty surrounding a single input variable is investigated using sensitivity analysis. That is, the variable is allowed to change to determine the impact of this variation on the simulation's output. When there is uncertainty about multiple input variables, then the number of simulation runs required to undertake this investigation greatly increases due to the permutations that need to be considered. LHS, which was first derived by McKay et al., offers a proven mechanism to reduce the number of simulation runs needed to complete a sensitivity analysis. This paper describes the LHS technique and its applications to an agent-based simulation (ABS) for investigating the foreclosure contagion effect.
Findings
The results from the foreclosure ABS runs have been characterized as “good”, “bad” or “ugly”, corresponding to whether or not a property market crash has occurred. As the only thing that can induce a property market crash within our model is the spread of foreclosing properties, these results indicate that the foreclosure contagion effect is dependent on how much impact a foreclosed property has on the price of the surrounding properties.
Originality/value
This paper describes the application of LHS to an agent-based foreclosure simulation. The foreclosure model and its results have been described in Gangel et al. Given a certain output “boundary” found within these results, it was highly appropriate to conduct an extensive sensitivity analysis on the simulation's input variables. The outcome of the LHS sensitivity analysis has given further insight into the foreclosure contagion effect thus demonstrating it was a beneficial exercise.
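To make the sampling-budget argument concrete, here is a hedged sketch: a Latin hypercube design of 20 points over three uncertain inputs, each run classified into the good/bad/ugly outcomes, instead of a 20^3-run factorial sweep. The `toy_contagion_run` scoring function is a made-up stand-in, not the Gangel et al. simulation:

```python
import numpy as np

def latin_hypercube(n, bounds, rng=None):
    """One-sample-per-stratum LHS over the given (low, high) bounds per input."""
    rng = np.random.default_rng(rng)
    u = np.stack([(rng.permutation(n) + rng.uniform(size=n)) / n
                  for _ in bounds], axis=1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def toy_contagion_run(impact, radius, duration):
    """Hypothetical stand-in for one ABS run: a crude stress score replaces the
    full agent simulation and is thresholded into the paper's three outcomes."""
    stress = impact * radius * duration          # not the authors' model
    if stress < 0.5:
        return "good"
    elif stress < 1.5:
        return "bad"
    return "ugly"

# Hypothetical input ranges: price impact, affected radius (lots), duration (months).
bounds = [(0.0, 0.2), (1.0, 5.0), (1.0, 12.0)]
design = latin_hypercube(20, bounds, rng=7)
outcomes = [toy_contagion_run(*row) for row in design]
```

The point of the design is that 20 runs already place one sample in every stratum of every input range, which is what lets LHS stand in for an exhaustive permutation sweep.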
Abstract
Purpose
The purpose of this study is to overcome the error in the estimate of the standard deviation derived from the expected improvement (EI) criterion. Compared with other popular methods, a quantitative model assessment and analysis tool, termed high-dimensional model representation (HDMR), is suggested to be integrated with an EI-assisted sampling strategy.
Design/methodology/approach
To predict the standard deviation directly, kriging is employed. Furthermore, to compensate for the underestimation of error in the kriging predictor, a Pareto frontier EI (PFEI) criterion is also suggested. Compared with other surrogate-assisted optimization methods, the distinctive characteristic of HDMR is that it discloses the correlations among component functions. If only low-correlation terms are considered, the number of function evaluations for HDMR grows only polynomially with the number of input variables and correlative terms.
Findings
To validate the suggested method, various nonlinear and high-dimensional mathematical functions are tested. The results show that the suggested method is promising for solving complicated real-world engineering problems.
Originality/value
In this study, the authors hope to integrate superiorities of PFEI and HDMR to improve optimization performance.
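The EI criterion the authors build on has a closed form given a kriging mean and standard deviation. A minimal sketch of the standard minimization formula (not the proposed PFEI variant):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimization:
        EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma,
    where mu and sigma come from the kriging predictor at a candidate point."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi
```

Note that EI is positive even where the predicted mean is worse than the incumbent, provided the predicted standard deviation is large; this is why an underestimated standard deviation (the problem this paper targets) suppresses exploration.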
Fran Sérgio Lobato, Gustavo Barbosa Libotte and Gustavo Mendes Platt
Abstract
Purpose
In this work, the multi-objective optimization shuffled complex evolution algorithm is proposed. The algorithm extends shuffled complex evolution by incorporating two classical operators into the original algorithm: rank ordering and crowding distance. In order to accelerate the convergence process, a local search strategy that generates potential candidates using the Latin hypercube method is also proposed.
Design/methodology/approach
The multi-objective optimization shuffled complex evolution is used to accelerate the convergence process and to reduce the number of objective function evaluations.
Findings
In general, the proposed methodology was able to solve a classical mechanical engineering problem with different characteristics. From a statistical point of view, we demonstrated that differences may exist between the proposed methodology and other evolutionary strategies concerning two different metrics (convergence and diversity), for a class of benchmark functions (ZDT functions).
Originality/value
The development of a new numerical method to solve multi-objective optimization problems is the major contribution.
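The crowding-distance operator borrowed here from NSGA-II can be sketched directly: boundary solutions receive infinite distance, and interior solutions accumulate normalized neighbor gaps per objective. A minimal illustration (not the authors' code):

```python
import math

def crowding_distance(front):
    """NSGA-II-style crowding distance for objective vectors on one
    non-dominated front; boundary solutions get infinite distance so they
    are always preferred, preserving the spread of the front."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = math.inf
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue
        for a, i in enumerate(order[1:-1], start=1):
            dist[i] += (front[order[a + 1]][k] - front[order[a - 1]][k]) / span
    return dist

# Four trade-off points on a two-objective front.
front = [(0.0, 1.0), (0.25, 0.8), (0.5, 0.5), (1.0, 0.0)]
d = crowding_distance(front)
```

When the population must be truncated, points with larger crowding distance are kept, which spreads survivors along the Pareto front rather than letting them cluster.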
Stoyan Stoyanov, Chris Bailey and Marc Desmulliez
Abstract
Purpose
This paper aims to present an integrated optimisation‐modelling computational approach for virtual prototyping that helps design engineers to improve the reliability and performance of electronic components and systems through design optimisation at the early product development stage. The design methodology is used to identify the optimal design of lead‐free (Sn3.9Ag0.6Cu) solder joints in fine‐pitch copper column bumped flip‐chip electronic packages.
Design/methodology/approach
The design methodology is generic and comprises numerical techniques for computational modelling (finite element analysis) coupled with numerical methods for statistical analysis and optimisation. In this study, the integrated optimisation‐modelling design strategy is adopted to prototype virtually a fine‐pitch flip‐chip package at the solder interconnect level, so that the thermal fatigue reliability of the lead‐free solder joints is improved and important design rules to minimise the creep in the solder material, exposed to thermal cycling regimes, are formulated. The whole prototyping process is executed in an automated way once the initial design task is formulated and the conditions and the settings for the numerical analysis used to evaluate the flip‐chip package behaviour are specified. Different software modules that incorporate the required numerical techniques are used to identify the solution of the design optimisation problem related to solder joints reliability optimisation.
Findings
For fine‐pitch flip‐chip packages with copper column bumped die, it is found that higher solder joint volume and height of the copper column combined with lower copper column radius and solder wetting around copper column have a positive effect on the thermo‐mechanical reliability.
Originality/value
The findings of this research provide design rules for more reliable lead‐free solder joints for copper column bumped flip‐chip packages and help to establish further the technology as one of the viable routes for flip‐chip packaging.
Tianyue Feng, Lihao Liu, Xingyu Xing and Junyi Chen
Abstract
Purpose
The purpose of this paper is to search for the critical-scenarios of autonomous vehicles (AVs) quickly and comprehensively, which is essential for verification and validation (V&V).
Design/methodology/approach
The authors adopted the index F1 to quantify the critical scenarios' coverage of the search space and proposed the improved particle swarm optimization (IPSO) to enhance exploration ability for higher coverage. Compared with standard particle swarm optimization (PSO), there are three improvements. In the initial phase, the Latin hypercube sampling method is introduced for a uniform distribution of particles. In the iteration phase, the neighborhood operator is adapted to explore more modes, with the particles divided into groups. In the convergence phase, a convergence judgment and restart strategy is used to explore the search space while avoiding local convergence. Experiments on an artificial function and on critical-scenario search were carried out to verify the efficiency and the application effect of the method against the Monte Carlo method (MC) and PSO.
Findings
Results show that IPSO can search for multimodal critical scenarios comprehensively. With a stricter threshold and fewer samples in the critical-scenario search experiment, the coverage of IPSO is 14% higher than that of PSO and 40% higher than that of MC.
Originality/value
The critical scenarios' coverage of the search space is first quantified by the index F1, and the proposed method has higher search efficiency and coverage for the critical-scenario search of AVs, which shows application potential for V&V.
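The first of the three improvements, LHS initialization of the swarm, is easy to sketch on top of a plain PSO loop. The following toy minimizer omits the neighborhood operator and restart strategy and is only an illustration of the idea, not the IPSO implementation:

```python
import numpy as np

def lhs_init(n, lo, hi, rng):
    """Latin hypercube initialization: one particle per stratum per dimension,
    giving a more uniform spread than independent uniform draws."""
    u = np.stack([(rng.permutation(n) + rng.uniform(size=n)) / n
                  for _ in range(lo.size)], axis=1)
    return lo + u * (hi - lo)

def pso(f, lo, hi, n_particles=30, iters=200, seed=0):
    """Plain global-best PSO with LHS-initialized positions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = lhs_init(n_particles, lo, hi, rng)
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # standard inertia/acceleration
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy check on the sphere function; a real run would score scenario criticality.
best_x, best_f = pso(lambda p: float((p ** 2).sum()), [-5.0, -5.0], [5.0, 5.0])
```

On a multimodal criticality landscape, the uniform LHS start matters more than on this convex toy: it reduces the chance that every particle begins in the basin of a single mode.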
Rajendra Machavaram and Shankar Krishnapillai
Abstract
Purpose
The purpose of this paper is to provide an effective and simple technique for structural damage identification, particularly to identify a crack in a structure. The artificial neural network approach is an alternative to classical methods for identifying the extent and location of the damage. Among the various neural network approaches, radial basis function (RBF) networks have good function-mapping and generalization ability, and are therefore chosen for the present study of crack identification.
Design/methodology/approach
Analyzing the vibration response of a structure is an effective way to monitor its health and even to detect damage. A novel two-stage improved radial basis function (IRBF) neural network methodology, with a conventional RBF network in the first stage and a reduced-search-space moving technique in the second stage, is proposed to identify a crack in a cantilever beam structure in the frequency domain. The Latin hypercube sampling (LHS) technique is used in both stages to sample the frequency modal patterns that train the proposed network. The study is also conducted with and without the addition of 5% white noise to the input patterns to simulate experimental errors.
Findings
The results show a significant improvement in identifying the location and magnitude of a crack by the proposed IRBF method, in comparison with the conventional RBF method and other classical methods. In the case of crack location in a beam, the average identification error over 12 test cases was 0.69 per cent for the IRBF network compared to 4.88 per cent for the conventional RBF network. Similar improvements are reported when compared to hybrid CPN-BPN networks. The method also requires much less computational effort than other hybrid neural network approaches and classical methods.
Originality/value
The proposed novel IRBF crack identification technique is unique in originality and not reported elsewhere. It can identify the crack location and crack depth with very good accuracy, less computational effort and ease of implementation.
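The function-mapping stage of an RBF network can be sketched as least-squares training with fixed centers. This is a generic illustration on a toy one-dimensional mapping, not the authors' two-stage IRBF with reduced-search-space moving:

```python
import numpy as np

def train_rbf_network(X, y, centers, width):
    """Least-squares training of a Gaussian RBF network with fixed centers:
    hidden activations are Gaussian bumps around the centers, and the output
    weights are the linear least-squares solution."""
    def activations(Xq):
        sq = np.sum((Xq[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2 * width ** 2))

    weights, *_ = np.linalg.lstsq(activations(X), y, rcond=None)
    return lambda Xq: activations(Xq) @ weights

# Toy stand-in for a frequency-pattern -> crack-parameter mapping.
X = np.linspace(0.0, np.pi, 40)[:, None]
y = np.sin(2 * X[:, 0])
centers = X[::4]                     # 10 centers taken from the training inputs
model = train_rbf_network(X, y, centers, width=0.4)
```

Because training reduces to one linear solve, retraining on new modal patterns (as the second stage's moving search space would require) is cheap compared with iterative backpropagation networks.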
Weixin Zhang, Zhao Liu, Yu Song, Yixuan Lu and Zhenping Feng
Abstract
Purpose
To improve the speed and accuracy of the turbine blade film cooling design process, the most advanced deep learning models were introduced into this study to investigate the most suitable one for the prediction task. This paper aims to create a generative surrogate model that can be applied to multi-objective optimization problems.
Design/methodology/approach
The latest backbone in the field of computer vision (Swin-Transformer, 2021) was introduced and improved as the surrogate function for predicting the multi-physics field distribution (film cooling effectiveness, pressure, density and velocity). The basic samples were generated by the Latin hypercube sampling method, and the numerical method adopted for the calculations was first validated experimentally. The training and testing samples were calculated at experimental conditions. Finally, the surrogate model's predictions were verified by experiment in a linear cascade.
Findings
The results indicated that, compared with the multi-scale Pix2Pix model, the Swin-Transformer U-Net model offers higher accuracy and computing speed in predicting contour results. The computation time for each step of the Swin-Transformer U-Net model is one-third that of the original model, especially in the case of multi-physics field prediction. The correlation index reached more than 99.2% and the first-order error was lower than 0.3% for the multi-physics fields. The predictions of the data-driven surrogate model are consistent with the computational fluid dynamics results, and both are very close to the experimental results. Applying the Swin-Transformer model to enlarged sets of samples with different structures will reduce the cost of numerical calculations as well as experiments.
Research limitations/implications
The number of U-Net layers and the sample scale have a proper relationship according to equation (8). Too many U-Net layers introduce unnecessary nonlinear variation, whereas too few layers lead to insufficient feature extraction; in the Swin-Transformer U-Net model, an incorrect number of layers reduces the prediction accuracy. The multi-scale Pix2Pix model achieves higher accuracy in predicting a single physical field, but its calculation speed is too slow. The Swin-Transformer model is fast in prediction and training (nearly three times faster than the multi-scale Pix2Pix model), but its predicted contours contain more noise. The neural network predictions and the numerical calculations are consistent with the experimental distribution.
Originality/value
This paper creates a generative surrogate model that can be applied to multi-objective optimization problems. A generative adversarial network with the new backbone is chosen to extend the output from a single contour to multi-physics fields, which generates more results simultaneously than traditional surrogate models and reduces the time cost, and it is more applicable to multi-objective spatial optimization algorithms. The Swin-Transformer surrogate model is three times faster in computation than the multi-scale Pix2Pix model, and its predictions of the multi-physics fields are more accurate.