Search results

1 – 10 of 556
Article
Publication date: 24 May 2013

Jyri Leskinen, Hong Wang and Jacques Périaux

Abstract

Purpose

The purpose of this paper is to compare the efficiency of four different algorithmic parallelization methods for inverse shape design flow problems.

Design/methodology/approach

The included algorithms are: a parallelized differential evolution algorithm; island‐model differential evolution with multiple subpopulations; Nash differential evolution with geometry decomposition using competitive Nash games; and the new Global Nash Game Coalition Algorithm (GNGCA), which combines domain and geometry decomposition into a "distributed one‐shot" method. The methods are compared on selected academic reconstruction problems with varying numbers of simultaneous processes.
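
For illustration, a minimal island-model differential evolution sketch follows; the DE/rand/1/bin scheme, the ring migration policy, and the toy reconstruction-style objective are illustrative assumptions, not details taken from the paper:

```python
import random

def de_island_model(f, dim, bounds, n_islands=2, pop_size=10,
                    generations=100, migrate_every=10, F=0.6, CR=0.9, seed=0):
    """Minimal island-model differential evolution (DE/rand/1/bin): each
    island evolves its own subpopulation and, every `migrate_every`
    generations, its best individual replaces a random member of the
    next island in the ring."""
    rng = random.Random(seed)
    lo, hi = bounds
    islands = [[[rng.uniform(lo, hi) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(generations):
        for pop in islands:
            for i in range(pop_size):
                x = pop[i]
                a, b, c = rng.sample([pop[j] for j in range(pop_size) if j != i], 3)
                j_rand = rng.randrange(dim)  # force at least one mutated gene
                trial = [min(max(a[k] + F * (b[k] - c[k]), lo), hi)
                         if (rng.random() < CR or k == j_rand) else x[k]
                         for k in range(dim)]
                if f(trial) <= f(x):         # greedy one-to-one selection
                    pop[i] = trial
        if (gen + 1) % migrate_every == 0:
            for s in range(n_islands):
                best = min(islands[s], key=f)
                islands[(s + 1) % n_islands][rng.randrange(pop_size)] = list(best)
    return min((min(pop, key=f) for pop in islands), key=f)

# Toy reconstruction-style objective: recover a target parameter vector.
target = [1.0, -2.0, 0.5]
mismatch = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best = de_island_model(mismatch, dim=3, bounds=(-5.0, 5.0))
```

A shorter migration interval spreads good individuals faster but erodes subpopulation diversity, which is the trade-off the island model is meant to balance.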

Findings

The results demonstrate that the geometry decomposition approach can be used to improve algorithmic convergence. Additional improvements were achieved using the novel distributed one‐shot method.

Originality/value

This paper is part of a series of articles involving the GNGCA method. Further tests on more complex problems are needed to study the efficiency of the approaches in more realistic cases.

Article
Publication date: 4 February 2022

Arezoo Gazori-Nishabori, Kaveh Khalili-Damghani and Ashkan Hafezalkotob

Abstract

Purpose

A Nash bargaining game data envelopment analysis (NBG-DEA) model is proposed to measure the efficiency of dynamic multi-period network structures. This paper aims to apply the NBG-DEA model to measure the performance of decision-making units with complicated network structures.

Design/methodology/approach

As the proposed NBG-DEA model is a non-linear mathematical program, finding its global optimum solution is hard, and meta-heuristic algorithms are therefore commonly used for such non-linear optimization problems. Fortunately, the NBG-DEA model is well formed, so it can be solved by different non-linear methods, including meta-heuristic algorithms. Hence, a meta-heuristic algorithm called particle swarm optimization (PSO) is proposed to solve the NBG-DEA model in this paper. The case study is the Industrial Management Institute (IMI), a leading organization providing management consulting, publication and educational services in Iran. The sub-processes of IMI are considered as players whose pay-offs are defined as the efficiencies of the sub-processes. The network structure of IMI is studied over multiple periods.
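
As a rough illustration of the solver named here, a minimal particle swarm optimization sketch for a generic nonlinear objective (the NBG-DEA objective itself is not reproduced; all parameter values are illustrative defaults):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO: each particle tracks its personal best; the swarm
    tracks a global best; velocities blend inertia, cognitive pull
    (personal best) and social pull (global best)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] = min(max(pos[i][k] + vel[i][k], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(pos[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(pos[i]), val
    return gbest, gbest_val

# Example: minimize the Rosenbrock function in 2-D as a stand-in
# for a hard non-linear objective.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
sol, val = pso_minimize(rosen, dim=2, bounds=(-2.0, 2.0))
```

Unlike gradient-based methods, PSO needs only function evaluations, which is why it suits objectives whose gradients are unavailable or unreliable.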

Findings

The proposed NBG-DEA model is applied to measure the efficiency scores in the IMI case study. The solution found by the PSO algorithm, which is implemented in MATLAB software, is compared with that generated by a classic non-linear method called gradient descent implemented in LINGO software.

Originality/value

The experiments showed that suitable and feasible solutions can be found by solving the NBG-DEA model, and that the PSO algorithm solves the model in reasonable central processing unit (CPU) time.

Details

Journal of Modelling in Management, vol. 18 no. 2
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 11 July 2018

Wanyue Jiang, Daobo Wang and Yin Wang

Abstract

Purpose

The purpose of this paper is to find a solution to the unmanned aerial vehicle (UAV) rendezvous problem that is feasible, optimal and not time consuming. In the existing literature, the UAV rendezvous problem is usually framed as a matter of simultaneous arrival, focusing only on time consistency. However, the arrival times of the UAVs vary with the rendezvous position. The best rendezvous position should therefore be determined with the UAVs' maneuver constraints taken into account, so that the UAVs can form up in a short time.

Design/methodology/approach

The authors present a decentralized method in which UAVs negotiate with each other over the best rendezvous positions by Nash bargaining. The authors analyze the constraints on the rendezvous time and the UAV maneuvers, and propose an objective function that allows UAVs to reach their rendezvous positions as fast as possible. Bezier curves are adopted to generate smooth and feasible flight trajectories. During the rendezvous process, the UAVs adjust their speeds so that they arrive at the rendezvous positions simultaneously.
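
A one-dimensional toy version of the bargaining idea can be sketched as follows; the segment geometry, grid search and disagreement point are assumptions for illustration, and the paper's maneuver constraints and Bezier trajectories are omitted:

```python
def nash_bargain_rendezvous(p1, p2, v1, v2, n_grid=1000):
    """Pick a rendezvous point on the segment [p1, p2] by maximizing the
    Nash bargaining product of the two agents' time savings relative to
    a disagreement point (here: each agent's worst-case travel time)."""
    candidates = [p1 + (p2 - p1) * i / n_grid for i in range(n_grid + 1)]
    t1 = [abs(x - p1) / v1 for x in candidates]   # agent 1 travel times
    t2 = [abs(x - p2) / v2 for x in candidates]   # agent 2 travel times
    d1, d2 = max(t1), max(t2)                     # disagreement outcomes
    best_x, best_prod = None, -1.0
    for x, a, b in zip(candidates, t1, t2):
        prod = (d1 - a) * (d2 - b)                # product of time savings
        if prod > best_prod:
            best_prod, best_x = prod, x
    return best_x

# Equal speeds: the bargained rendezvous point is the midpoint.
x = nash_bargain_rendezvous(0.0, 10.0, v1=1.0, v2=1.0)
```

Maximizing the product of gains (rather than, say, the sum) is what makes the outcome a Nash bargaining solution: it is invariant to rescaling either agent's utility.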

Findings

The effectiveness of the proposed method is verified by simulation experiments. The proposed method can successfully and efficiently solve the UAV rendezvous problem.

Originality/value

As far as the authors know, this is the first time Nash bargaining has been used in the UAV rendezvous problem. The authors modify the Nash bargaining method to make it distributed, so that it can be computed easily. The proposed method is much less time consuming than the ordinary Nash bargaining method and ordinary swarm-intelligence-based methods. It also respects the UAV maneuver constraint and, owing to its fast calculation speed, can be applied online. Simulations demonstrate the effectiveness of the proposed method.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 11 July 2008

M. Cioffi, P. Di Barba, A. Formisano and R. Martone

Abstract

Purpose

This paper seeks to describe an approach to multi‐objective optimization problems (MOOPs) based on game theory (GT) and to provide a comparison with the more standard Pareto approach on a real design problem.

Design/methodology/approach

GT is first briefly presented, then a possible recasting of MOOPs in terms of GT is described, in which players are associated with single objectives and strategies with the choice of degrees of freedom. A comparison with the Pareto approach is performed on the optimized design of a superconducting synchronous generator.
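
The recasting described here can be sketched with a two-player toy: each player controls one design variable and minimizes its own objective through alternating best responses (the payoffs and the gradient-descent inner solver are illustrative assumptions, not the paper's device model):

```python
def best_response_nash(f1, f2, x0, y0, step=0.01, outer=200, inner=100):
    """Two-player Nash game: player 1 controls x to minimize f1(x, y),
    player 2 controls y to minimize f2(x, y). Players alternate
    (approximate) best responses, each computed by finite-difference
    gradient descent, until neither can improve unilaterally."""
    x, y = x0, y0
    h = 1e-6
    for _ in range(outer):
        for _ in range(inner):      # player 1's best response in x
            gx = (f1(x + h, y) - f1(x - h, y)) / (2 * h)
            x -= step * gx
        for _ in range(inner):      # player 2's best response in y
            gy = (f2(x, y + h) - f2(x, y - h)) / (2 * h)
            y -= step * gy
    return x, y

# Toy conflicting objectives; solving the two first-order conditions
# 2(x - 1) + y = 0 and 2(y + 1) - x = 0 gives the equilibrium
# x = 1.2, y = -0.4.
f1 = lambda x, y: (x - 1) ** 2 + x * y
f2 = lambda x, y: (y + 1) ** 2 - x * y
x, y = best_response_nash(f1, f2, 0.0, 0.0)
```

The fixed point of this alternation is a Nash equilibrium: neither player can improve by changing only its own variable, which need not coincide with a Pareto-optimal compromise.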

Findings

It was shown that GT can be applied to the optimized design of real-world devices, with results that present a different viewpoint on the problem, yet with device performance comparable to that obtained by standard approaches.

Research limitations/implications

Only the Nash approach to non‐cooperative games has been applied; the conditions under which the solution found using GT belongs to the Pareto front have not been fully explored.

Practical implications

Designers and engineers interested in optimal design are presented with a new design technique able to strike a balance among conflicting partial objectives, which can also be used to select among different possible designs obtained in other ways (e.g. using the Pareto front approach).

Originality/value

The paper demonstrates the possibility of using GT in the design of real-world electromagnetic devices, with reference to the optimal shape design of a high-temperature superconducting single-phase synchronous generator.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 27 no. 4
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 7 February 2022

Muralidhar Vaman Kamath, Shrilaxmi Prashanth, Mithesh Kumar and Adithya Tantri

Abstract

Purpose

The compressive strength of concrete depends on many interdependent parameters; its exact prediction is not simple because of the complex processes involved in strength development. This study aims to predict the compressive strength of normal concrete and high-performance concrete using four datasets.

Design/methodology/approach

In this paper, five established individual machine learning (ML) regression models are compared: decision tree regression, random forest regression, lasso regression, ridge regression and multiple linear regression. Four datasets were studied, two drawn from previous research and two obtained from laboratory testing, using the five established individual ML regression models.

Findings

Five statistical indicators, namely the coefficient of determination (R2), mean absolute error, root mean squared error, Nash–Sutcliffe efficiency and mean absolute percentage error, were used to compare the performance of the models. The models were further compared with previous studies using these indicators. Lastly, sensitivity and parametric analyses were carried out to understand the effect of each predictor variable on model performance.
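
The five indicators named here can be computed as follows, in their common textbook forms (the concrete datasets themselves are not reproduced; the sample values are invented):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, RMSE, Nash-Sutcliffe efficiency (NSE) and MAPE.
    In this common form, R^2 of predictions against observations and
    NSE share the same expression, 1 - SS_res / SS_tot."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    nse = 1.0 - ss_res / ss_tot
    mape = 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    return {"R2": nse, "MAE": mae, "RMSE": rmse, "NSE": nse, "MAPE": mape}

# Invented sample: observed vs. predicted compressive strengths (MPa).
m = regression_metrics([30.0, 40.0, 50.0, 60.0], [32.0, 38.0, 51.0, 59.0])
```

NSE (and this form of R2) is 1 for a perfect model and can be negative when the model predicts worse than the observed mean, which makes it a convenient single-number sanity check.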

Originality/value

The findings of this paper will allow readers to understand the factors involved in selecting machine learning models and concrete datasets. In so doing, we hope this research advances the toolset needed to predict compressive strength.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 2
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 29 July 2014

Egidio D’Amato, Elia Daniele, Lina Mallozzi and Giovanni Petrone

Abstract

Purpose

The purpose of this paper is to propose a numerical algorithm able to compute the Stackelberg strategy for a multi-level hierarchical three-person game via a genetic algorithm (GA) evolution process. There is only one player at each hierarchical level: an upper-level leader (player L0), an intermediate-level leader (player L1) who acts as a follower for L0 and as a leader for the lower-level player (player F), who is the sole actual follower in this situation.

Design/methodology/approach

The paper presents a computational result via a GA approach. The idea of the Stackelberg-GA is to bring together GAs and the Stackelberg strategy, running a GA to build the Stackelberg strategy. Any player acting as a follower makes his decision at each step of the evolutionary process by solving a simple optimization problem whose solution is supposed to be unique.
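
The hierarchy described here can be illustrated by backward induction on a finite toy game; exhaustive search over a small action set stands in for the paper's GA evolution, and the payoff functions are invented for illustration:

```python
def stackelberg_three_level(u0, u1, uf, strategies):
    """Backward induction for a three-level hierarchy: leader L0 moves
    first, intermediate leader L1 responds, follower F responds last.
    Each u*(a, b, c) is that player's payoff over the finite common
    action set `strategies`; ties break to the first action found."""
    def f_resp(a, b):                 # F's best response to (a, b)
        return max(strategies, key=lambda c: uf(a, b, c))
    def l1_resp(a):                   # L1 anticipates F's response
        return max(strategies, key=lambda b: u1(a, b, f_resp(a, b)))
    def l0_value(a):                  # L0 anticipates both responses
        b = l1_resp(a)
        return u0(a, b, f_resp(a, b))
    a = max(strategies, key=l0_value)
    b = l1_resp(a)
    return a, b, f_resp(a, b)

# Invented payoffs on a small common action set.
acts = [0, 1, 2]
u0 = lambda a, b, c: -(a - 2) ** 2 - b - c
u1 = lambda a, b, c: -(b - a) ** 2 - c
uf = lambda a, b, c: -(c - b) ** 2
a, b, c = stackelberg_three_level(u0, u1, uf, acts)
```

The GA in the paper replaces the exhaustive leader search with an evolving population, but the nesting is the same: each candidate leader action is evaluated only after the lower levels have responded.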

Findings

A GA procedure to compute the Stackelberg equilibrium of the three-level hierarchical problem is given. An application to an Authority-Provider-User (APU) model in the context of wireless networks is discussed. The algorithm's convergence is illustrated by means of some test cases.

Research limitations/implications

The solution at each level of the hierarchy is assumed to be unique.

Originality/value

The paper demonstrates the possibility of using computational procedures based on GAs in hierarchical three-level decision problems, extending previous results obtained in the classical two-level case.

Details

Engineering Computations, vol. 31 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 13 March 2017

Lei Xue, Changyin Sun and Fang Yu

Abstract

Purpose

The paper aims to build connections between game theory and the resource allocation problem with general uncertainty. It proposes modeling the distributed resource allocation problem as a Bayesian game, and three basic kinds of uncertainty are discussed.

Design/methodology/approach

In this paper, a Bayesian game is proposed for modeling the resource allocation problem under uncertainty. The basic game-theoretical model contains three parts: agents, utility functions and a decision-making process. The probabilistic weighted Shapley value (WSV) is applied to design the utility functions of the agents. To reach the Bayesian Nash equilibrium point, a rational learning method is introduced to optimize the agents' decision-making process.
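
The two ingredients named here, a belief update over opponent types (rational learning) and a best response maximizing expected utility, can be sketched on a toy claim game; the types, likelihoods and payoffs are invented for illustration, and the weighted Shapley value construction is omitted:

```python
def bayes_update(prior, likelihoods, observed):
    """Rational-learning step: update the belief over opponent types
    from an observed action, where likelihoods[t][a] = P(a | type t)."""
    post = {t: prior[t] * likelihoods[t][observed] for t in prior}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def expected_best_response(actions, belief, payoff):
    """Best response under type uncertainty: choose the action maximizing
    expected payoff over the opponent's possible types."""
    return max(actions, key=lambda a: sum(belief[t] * payoff(a, t)
                                          for t in belief))

# Toy resource-claim game: over-claiming against a 'high'-demand
# opponent is penalized; against a 'low'-demand opponent it is not.
payoff = lambda claim, t: claim - (2.0 * claim if t == "high" else 0.0) * (claim > 0.5)
belief = {"high": 0.5, "low": 0.5}
likelihoods = {"high": {"aggressive": 0.8, "timid": 0.2},
               "low": {"aggressive": 0.3, "timid": 0.7}}
belief = bayes_update(belief, likelihoods, "timid")  # opponent played 'timid'
act = expected_best_response([0.25, 0.5, 0.75, 1.0], belief, payoff)
```

Observing a "timid" action shifts the belief toward the "low" type, which in turn makes a larger claim worth the residual risk; iterating such updates is the essence of rational learning toward a Bayesian Nash equilibrium.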

Findings

The paper provides empirical insights into how the game-theoretical model deals with uncertainty in the resource allocation problem. A probabilistic WSV function was proposed to design the agents' utility functions. Moreover, rational learning was used to optimize the agents' decision-making process toward the Bayesian Nash equilibrium point. Compared with models assuming full information, the simulation results illustrate the effectiveness of the Bayesian game-theoretical methods for the resource allocation problem under uncertainty.

Originality/value

This paper designs a Bayesian game-theoretical model for the resource allocation problem under uncertainty. The relationships between the Bayesian game and the resource allocation problem are discussed.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 3 April 2018

Lingling Pei, Qin Li and Zhengxin Wang

Abstract

Purpose

The purpose of this paper is to propose a new method based on nonlinear least squares (NLS) for solving the parameters of the nonlinear grey Bernoulli model (NGBM(1,1)) and to verify the proposed model using the case of employee demand prediction in high-tech enterprises in China.

Design/methodology/approach

First, minimising the sum of squared fitting errors of the grey differential equation of NGBM(1,1) is taken as the optimisation target, and the parameters of the classic grey model (GM(1,1)) are set as the initial value of the parameter vector. The structural parameters and power exponents are then solved using the Gauss-Newton iteration algorithm, under given stopping rules, to obtain the parameters of NGBM(1,1). Finally, the validity of the new method is verified by taking as examples the employee demand of high-tech enterprises in state-level high-tech industrial development zones in China.
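
A generic Gauss-Newton iteration can be sketched as follows; the exponential test model and the finite-difference Jacobian are illustrative stand-ins for the NGBM(1,1) equations, and the start point is chosen near the solution, since plain Gauss-Newton is only locally convergent:

```python
import math

def gauss_newton(residual, theta0, iters=50, eps=1e-6):
    """Gauss-Newton for a 2-parameter nonlinear least squares problem:
    repeatedly linearize the residual vector, then solve the 2x2
    normal equations (J^T J) d = -J^T r by hand for the update d."""
    a, b = theta0
    for _ in range(iters):
        r = residual((a, b))
        # Central-difference Jacobian columns w.r.t. a and b.
        ja = [(p - m) / (2 * eps) for p, m in
              zip(residual((a + eps, b)), residual((a - eps, b)))]
        jb = [(p - m) / (2 * eps) for p, m in
              zip(residual((a, b + eps)), residual((a, b - eps)))]
        h11 = sum(x * x for x in ja)
        h12 = sum(x * y for x, y in zip(ja, jb))
        h22 = sum(y * y for y in jb)
        g1 = sum(x * ri for x, ri in zip(ja, r))
        g2 = sum(y * ri for y, ri in zip(jb, r))
        det = h11 * h22 - h12 * h12
        a += -(h22 * g1 - h12 * g2) / det
        b += -(h11 * g2 - h12 * g1) / det
    return a, b

# Recover a = 2, b = 0.5 from exact data y = a * exp(b * t).
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * t) for t in ts]
res = lambda th: [th[0] * math.exp(th[1] * t) - y for t, y in zip(ts, ys)]
a, b = gauss_newton(res, (1.8, 0.45))
```

Seeding the iteration from a related simpler model, as the paper does with GM(1,1) parameters, is precisely how one supplies the good initial point that local convergence requires.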

Findings

The results show that the parameter estimation algorithm based on the NLS method can effectively identify the power exponents of NGBM(1,1) and can therefore adapt favourably to the nonlinear fluctuations of sequences. In addition, the algorithm is superior to the GM(1,1) model, the grey Verhulst model and the quadratic-exponential smoothing algorithm in terms of simulation and prediction accuracy.

Research limitations/implications

Under the framework of solving parameters based on NLS, various aspects of NGBM(1,1) remain to be investigated, including the background value, the initial condition and variable-structure modelling methods.

Practical implications

The parameter estimation algorithm based on NLS can effectively identify the power exponent of NGBM(1,1) and therefore adapts favourably to the nonlinear fluctuations of sequences.

Originality/value

According to the basic principle of NLS, a new method for solving the parameters of NGBM(1,1) is proposed, using the Gauss-Newton iteration algorithm. Moreover, by modelling employee demand in high-tech enterprises in China, the effectiveness and superiority of the new method are verified.

Details

Grey Systems: Theory and Application, vol. 8 no. 2
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 24 May 2013

Hong Wang, Jyri Leskinen, Dong‐Seop Lee and Jacques Périaux

Abstract

Purpose

The purpose of this paper is to investigate an active flow control technique called Shock Control Bump (SCB) for drag reduction using evolutionary algorithms.

Design/methodology/approach

A hierarchical genetic algorithm (HGA) consisting of multi‐fidelity models in three hierarchical topological layers is explored to speed up the design optimization process. The top layer consists of a single sub‐population operating on a precise model. On the middle layer, two sub‐populations operate on a model of intermediate accuracy. The bottom layer, consisting of four sub‐populations (two for each middle-layer population), operates on a coarse model. It is well known that genetic algorithms (GAs) differ from deterministic optimization tools in mimicking biological evolution based on Darwinian principles. In the HGA process, each population is handled by a GA, and the best genetic information obtained in the second or third layer migrates to the first or second layer, respectively, for refinement.
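
The three-layer topology can be sketched as follows; the GA operators, the migration policy and the quantized stand-ins for the multi-fidelity models are illustrative assumptions, not the paper's flow solvers:

```python
import random

def hierarchical_ga(f_layers, dim, bounds, pops_per_layer=(1, 2, 4),
                    pop_size=8, generations=60, migrate_every=10, seed=0):
    """Three-layer hierarchical GA: f_layers[0] is the most precise
    evaluator (top layer), f_layers[2] the coarsest (bottom layer).
    Periodically, each lower population's best migrates one layer up,
    replacing the worst member of a destination population."""
    rng = random.Random(seed)
    lo, hi = bounds
    layers = [[[[rng.uniform(lo, hi) for _ in range(dim)]
                for _ in range(pop_size)]
               for _ in range(n)] for n in pops_per_layer]

    def evolve(pop, f):
        # One generation: elitism, tournament selection, blend
        # crossover, single-gene Gaussian mutation.
        new = [min(pop, key=f)]
        while len(new) < len(pop):
            p1 = min(rng.sample(pop, 2), key=f)
            p2 = min(rng.sample(pop, 2), key=f)
            child = [u + rng.random() * (v - u) for u, v in zip(p1, p2)]
            if rng.random() < 0.2:
                k = rng.randrange(dim)
                child[k] = min(max(child[k] + rng.gauss(0, 0.3), lo), hi)
            new.append(child)
        return new

    for gen in range(generations):
        for layer, f in zip(layers, f_layers):
            for i, pop in enumerate(layer):
                layer[i] = evolve(pop, f)
        if (gen + 1) % migrate_every == 0:
            for li in range(len(layers) - 1, 0, -1):   # bottom layer upward
                for j, pop in enumerate(layers[li]):
                    best = min(pop, key=f_layers[li])  # judged at own fidelity
                    dest = layers[li - 1][j % len(layers[li - 1])]
                    worst = max(range(pop_size), key=lambda i: f_layers[li - 1](dest[i]))
                    dest[worst] = list(best)
    return min(layers[0][0], key=f_layers[0])

# Fidelity stand-ins: the same objective, quantized on coarser layers.
fine = lambda x: sum(v * v for v in x)
mid = lambda x: round(fine(x), 2)
coarse = lambda x: round(fine(x), 1)
best = hierarchical_ga([fine, mid, coarse], dim=2, bounds=(-3.0, 3.0))
```

The coarse bottom layer explores cheaply and widely; only promising genetic material pays the cost of precise evaluation at the top, which is the source of the speed-up the abstract reports.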

Findings

The method was validated on a real-life optimization problem: the design optimization of a two-dimensional SCB installed on a natural laminar flow airfoil (RAE5243). Numerical results show that the HGA is more efficient and achieves greater drag reduction than a single-population GA.

Originality/value

Although the idea of the HGA approach is not new, the novelty of this paper is to combine it with mesh/meshless methods and multi‐fidelity flow analyzers. To take full benefit of the hierarchical topology, the following configuration is implemented: the first layer uses a precise meshless Euler solver with a fine cloud of points; the second layer uses a hybrid mesh/meshless Euler solver with an intermediate mesh and clouds of points; and the third layer uses an Euler solver on a coarser mesh to explore the search space efficiently with a large mutation span.

Details

Engineering Computations, vol. 30 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 March 2023

Wen Zhang, Guohui Chen and Qiguo Gong

Abstract

Purpose

This paper aims to systematically review the development of rapid setup, quantitatively analyze the research landscape and reveal new trends and challenges.

Design/methodology/approach

Based on 192 studies (1987–2021) collected from Scopus and Google Scholar, the papers are classified by publication time and source; research type and data analysis; pattern of authorship and country; sector-wise focus; and improvement method used in the setup. CiteSpace is used to analyze the co-occurrence and timeline of keywords.

Findings

There has been substantial progress in the past 35 years, including rapid growth in the number of papers, expansion into different disciplines, the participation of developing countries, application in the service industry and the significant impact of setup on cost. Nevertheless, some deficiencies remain.

Research limitations/implications

There is concern that Google Scholar lacks the quality control needed for its use as a bibliometric tool. Future work is encouraged to conduct an in-depth discussion on high-quality papers.

Practical implications

In small batch production, rapid setup is increasingly essential. Clarifying the research focus and main improvement methods is of great significance for enterprises to meet the changing market needs.

Originality/value

To the best of the authors’ knowledge, this study is the first literature review on rapid setup. A detailed set of data is considered for better introspection, tracing the history of research on setup time and pointing to its future directions.

Details

International Journal of Lean Six Sigma, vol. 14 no. 7
Type: Research Article
ISSN: 2040-4166
