Search results

1 – 10 of 158
Article
Publication date: 12 June 2017

Slawomir Koziel and Adrian Bekasiewicz

Abstract

Purpose

This paper aims to assess control parameter setup and its effect on computational cost and performance of deterministic procedures for multi-objective design optimization of expensive simulation models of antenna structures.

Design/methodology/approach

A deterministic algorithm for cost-efficient multi-objective optimization of antenna structures has been assessed. The algorithm constructs a patch connecting extreme Pareto-optimal designs (obtained by means of separate single-objective optimization runs). Its performance (both cost- and quality-wise) depends on the dimensions of the so-called patch, an elementary region that is relocated in the course of the optimization process. The cost/performance trade-offs are studied using two examples of ultra-wideband antenna structures, and the optimization results are compared to draw conclusions concerning the algorithm's robustness and to determine the most advantageous control parameter setups.
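
As a concrete illustration of the patching idea, the following is a minimal sketch, not the authors' implementation: the two extreme designs come from separate single-objective runs, and an approximate Pareto set is traced by relocating a small patch (here, a bounded step in one design variable) from one extreme towards the other. The quadratic objectives are stand-ins for the expensive EM antenna simulations of the paper, and the patch size plays the role of the control parameter studied above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem standing in for expensive EM antenna models
# (assumption: the real algorithm drives full-wave simulations).
def f1(x):
    return float(np.sum((x - 1.0) ** 2))

def f2(x):
    return float(np.sum((x + 1.0) ** 2))

def sdp_front(n_dim=4, patch=0.2, max_steps=100):
    """Trace an approximate Pareto set by relocating a small 'patch'
    (a step of at most `patch` in one coordinate) from the f1-optimal
    extreme design towards the f2-optimal extreme design."""
    x_start = minimize(f1, np.zeros(n_dim)).x   # extreme design no. 1
    x_end = minimize(f2, np.zeros(n_dim)).x     # extreme design no. 2
    x, front = x_start.copy(), [x_start.copy()]
    for _ in range(max_steps):
        # Candidate relocations: move one coordinate towards x_end.
        cands = []
        for i in range(n_dim):
            step = np.clip(x_end[i] - x[i], -patch, patch)
            if abs(step) > 1e-12:
                x_new = x.copy()
                x_new[i] += step
                cands.append(x_new)
        if not cands:
            break                                # reached the second extreme
        x = min(cands, key=f2)                   # greedy patch relocation
        front.append(x.copy())
    return np.array(front)

print(sdp_front().shape)   # one row per point of the approximate Pareto set
```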

Findings

The obtained results indicate that the investigated algorithm is very robust, i.e. its performance is weakly dependent on the control parameter setup. At the same time, it is found that the most suitable setups are those that ensure low computational cost, specifically non-uniform ones generated on the basis of sensitivity analysis.

Research limitations/implications

The study provides recommendations for control parameter setup of deterministic multi-objective optimization procedure for computationally efficient design of antenna structures. This is the first study of this kind for this particular design procedure, which confirms its robustness and determines the most suitable arrangement of the control parameters. Consequently, the presented results permit full automation of the surrogate-assisted multi-objective antenna optimization process while ensuring its lowest possible computational cost.

Originality/value

The work is the first comprehensive validation of the sequential domain patching algorithm under various scenarios of its control parameter setup. The considered design procedure along with the recommended parameter arrangement is a robust and computationally efficient tool for fully automated multi-objective optimization of expensive simulation models of contemporary antenna structures.

Details

Engineering Computations, vol. 34 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 9 August 2019

Anand Amrit and Leifur Leifsson

Abstract

Purpose

The purpose of this work is to apply and compare surrogate-assisted and multi-fidelity, multi-objective optimization (MOO) algorithms to simulation-based aerodynamic design exploration.

Design/methodology/approach

The three algorithms for multi-objective aerodynamic optimization compared in this work are the combination of evolutionary algorithms, design space reduction and surrogate models, the multi-fidelity point-by-point Pareto set identification and the multi-fidelity sequential domain patching (SDP) Pareto set identification. The algorithms are applied to three cases, namely, an analytical test case, the design of transonic airfoil shapes and the design of subsonic wing shapes, and are evaluated based on the resulting best possible trade-offs and the computational overhead.

Findings

The results show that all three algorithms yield comparable best possible trade-offs for all the test cases. For the aerodynamic test cases, the multi-fidelity Pareto set identification algorithms outperform the surrogate-assisted evolutionary algorithm by up to 50 per cent in terms of cost. Furthermore, the point-by-point algorithm is around 27 per cent more efficient than the SDP algorithm.

Originality/value

The novelty of this work includes the first applications of the SDP algorithm to multi-fidelity aerodynamic design exploration, the first comparison of these multi-fidelity MOO algorithms and new results of a complex simulation-based multi-objective aerodynamic design of subsonic wing shapes involving two conflicting criteria, several nonlinear constraints and over ten design variables.

Article
Publication date: 18 April 2017

Slawomir Koziel and Adrian Bekasiewicz

Abstract

Purpose

This paper aims to investigate deterministic strategies for low-cost multi-objective design optimization of compact microwave structures, specifically, impedance matching transformers. The considered methods involve surrogate modeling techniques and variable-fidelity electromagnetic (EM) simulations. In contrast to the majority of conventional approaches, they do not rely on population-based metaheuristics, which permits lowering the design cost and improving reliability.

Design/methodology/approach

There are two algorithmic frameworks presented, both fully deterministic. The first algorithm involves creating a path covering the Pareto front, arranged as a sequence of patches relocated in the course of optimization. Response correction techniques are used to find the Pareto front representation at the high-fidelity EM simulation level. The second algorithm exploits Pareto front exploration, where subsequent Pareto-optimal designs are obtained by moving along the front by means of solving appropriately defined local constrained optimization problems. Numerical case studies are provided demonstrating the feasibility of solving real-world problems involving expensive EM-simulation models of impedance transformer structures.
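
The front-exploration step of the second framework can be made concrete with a small sketch. This is an illustration under assumptions, not the paper's code: cheap analytical objectives replace the surrogate-corrected EM models, and each move along the front minimizes one objective subject to a gradually relaxed bound on the other, i.e. a local constrained subproblem, here solved with SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):  # stand-in for, e.g., in-band reflection of the transformer
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # stand-in for, e.g., the structure's footprint
    return x[0] ** 2 + (x[1] - 1.0) ** 2

def explore_front(n_points=10, step=0.05):
    """Walk along the Pareto front: start from the f1-optimal extreme
    design, then repeatedly minimize f2 subject to f1 staying below a
    gradually relaxed level (a local constrained subproblem)."""
    x = minimize(f1, np.zeros(2)).x
    front, level = [(f1(x), f2(x))], f1(x)
    for _ in range(n_points - 1):
        level += step                       # relax the bound on f1
        con = {"type": "ineq", "fun": lambda x, lv=level: lv - f1(x)}
        x = minimize(f2, x, method="SLSQP", constraints=[con]).x
        front.append((f1(x), f2(x)))
    return np.array(front)

print(explore_front())   # columns: f1, f2 along the approximate front
```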

Findings

It is possible, by means of combining surrogate modeling techniques and constrained local optimization, to identify the set of alternative designs representing Pareto-optimal solutions in a realistic time frame corresponding to a few dozen high-fidelity EM simulations of the respective structures. Multi-objective optimization for the considered class of structures can be realized using deterministic approaches without defaulting to evolutionary methods.

Research limitations/implications

The present study can be considered a step toward further studies on expedited optimization of computationally expensive simulation models for miniaturized microwave components.

Originality/value

The proposed algorithmic solutions proved useful for expedited multi-objective design optimization of miniaturized microwave structures. The problem is extremely challenging when using conventional methods, in particular evolutionary algorithms. To the authors’ knowledge, this is one of the first attempts to investigate deterministic surrogate-assisted multi-objective optimization of compact components at the EM-simulation level.

Article
Publication date: 1 January 2006

Bongsug Chae and Giovan Francesco Lanzara

Abstract

Purpose

Seeks to raise the question of why large‐scale technochange is difficult and often failure‐prone and to attempt to answer this question by viewing technochange as an instance of institutional change and design in which self‐destructive mechanisms are inherently embedded.

Design/methodology/approach

In order to explore the complex institutional dynamics of large-scale technochange, the paper uses the exploration/exploitation framework originally developed by March and extended by Lanzara to the study of institution-building processes in the political domain. The argument is that problems in implementing large-scale technochange stem from learning dilemmas in the inter-temporal and inter-group allocation of material and cognitive resources. The paper uses a case of large-scale technology in a major US university system to illustrate the institutional perspective on technochange.

Findings

It is argued and illustrated that the development and redesign of large‐scale information systems involve both the exploration of alternative institutional arrangements and the exploitation of pre‐existing ones, such that a delicate balance must be struck to overcome incoherences and dilemmas between the two activities.

Research limitations/implications

The proposed framework to understand large‐scale technochange is not examined empirically. The illustration of the framework relies on a single large‐scale system project of a non‐profit organization in the USA. Further empirical work and comparative research on multiple cases are needed.

Practical implications

The paper discusses some sources of the failures of large‐scale technochange and offers three interrelated mechanisms to counteract such failure sources, namely focal points, increasing returns, and bricolage. These counteracting mechanisms may help organizations to effectively deal with the dilemmas of exploration and exploitation in technochange.

Originality/value

This paper fills the gap in understanding the nature of large‐scale technochange, providing an explanation of why it is difficult and failure‐prone and offering some modest proposals for intervention in large‐scale system projects.

Details

Information Technology & People, vol. 19 no. 1
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 20 June 2024

Hugo Gobato Souto and Amir Moradi

Abstract

Purpose

This study aims to critically evaluate the competitiveness of Transformer-based models in financial forecasting, specifically in the context of stock realized volatility forecasting. It seeks to challenge and extend the assertions of Zeng et al. (2023) regarding the purported limitations of these models in handling temporal information in financial time series.

Design/methodology/approach

Employing a robust methodological framework, the study systematically compares a range of Transformer models, including first-generation and advanced iterations like Informer, Autoformer, and PatchTST, against benchmark models (HAR, NBEATSx, NHITS, and TimesNet). The evaluation encompasses 80 different stocks, four error metrics, four statistical tests, and three robustness tests designed to reflect diverse market conditions and data availability scenarios.
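
Of the benchmarks listed, the HAR model is simple enough to sketch here. The following is a minimal HAR-RV regression in the spirit of Corsi's heterogeneous autoregression, fitted by ordinary least squares; the synthetic series is a placeholder for one of the 80 stocks, and the window lengths (1, 5 and 22 days) are the conventional choices, not necessarily the paper's exact configuration.

```python
import numpy as np

def har_forecast(rv, horizon=1):
    """Fit HAR-RV by OLS: regress RV_{t+h} on a constant and the daily,
    weekly (5-day) and monthly (22-day) averages of realized variance,
    then forecast from the most recent observation."""
    rv = np.asarray(rv, dtype=float)
    t_idx = np.arange(21, len(rv))                 # need 22 past values
    daily = rv[t_idx]
    weekly = np.array([rv[t - 4:t + 1].mean() for t in t_idx])
    monthly = np.array([rv[t - 21:t + 1].mean() for t in t_idx])
    X = np.column_stack([np.ones(len(t_idx)), daily, weekly, monthly])
    y = rv[t_idx[:-horizon] + horizon]             # targets RV_{t+h}
    beta, *_ = np.linalg.lstsq(X[:-horizon], y, rcond=None)
    return X[-1] @ beta                            # h-step-ahead forecast

# Synthetic placeholder series standing in for a stock's realized variance.
rng = np.random.default_rng(0)
rv = np.abs(rng.normal(1.0, 0.2, size=500))
print(har_forecast(rv))
```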

Findings

The research uncovers that while first-generation Transformer models, like TFT, underperform in financial forecasting, second-generation models like Informer, Autoformer and PatchTST demonstrate remarkable efficacy, especially in scenarios characterized by limited historical data and market volatility. The study also highlights the nuanced performance of these models across different forecasting horizons and error metrics, showcasing their potential as robust tools in financial forecasting, which contradicts the findings of Zeng et al. (2023).

Originality/value

This paper contributes to the financial forecasting literature by providing a comprehensive analysis of the applicability of Transformer-based models in this domain. It offers new insights into the capabilities of these models, especially their adaptability to different market conditions and forecasting requirements, challenging the existing skepticism created by Zeng et al. (2023) about their utility in financial forecasting.

Details

China Finance Review International, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-1398

Article
Publication date: 1 July 2014

Nguyen Dang Manh, Anton Evgrafov, Jens Gravesen and Domenico Lahaye

Abstract

Purpose

The waste recycling industry increasingly relies on magnetic density separators. These devices generate an upward magnetic force in ferro-fluids, allowing the immersed particles to be separated according to their mass density. Recently, a new separator design has been proposed that significantly reduces the required amount of permanent magnet material. The purpose of this paper is to alleviate the undesired end-effects in this design by altering the shape of the ferromagnetic covers of the individual poles.

Design/methodology/approach

The paper represents the shape of the ferromagnetic pole covers with B-splines and defines a cost functional that measures the non-uniformity of the magnetic field in an area above the poles. The authors apply an iso-geometric shape optimization procedure, which allows them to accurately represent, analyze and optimize the geometry using only a few design variables. The design problem is regularized by imposing constraints that enforce the convexity of the pole cover shapes and is solved by a non-linear optimization procedure. The paper validates the implementation of the algorithm using a simplified variant of the design problem with a known analytical solution. The algorithm is subsequently applied to the problem posed.
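
To make the cost functional concrete, here is a minimal sketch under stated assumptions: the pole-cover profile is a B-spline with a handful of control values (the few design variables mentioned above), and a made-up smooth function stands in for the magnetostatic field evaluation; the convexity constraints of the actual design problem are omitted for brevity.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def field_above_poles(height_profile, xs):
    # Placeholder: a made-up smooth response of the field to the cover
    # profile; the paper evaluates a magnetostatic model at this point.
    return 1.0 + 0.1 * np.gradient(height_profile, xs)

degree, n_ctrl = 3, 6   # cubic B-spline, six design variables
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        np.ones(degree)])
xs = np.linspace(0.0, 1.0, 101)   # samples of the measurement area

def nonuniformity(ctrl):
    """Cost functional: mean squared deviation of the field from its
    average over the measurement area above the poles."""
    h = BSpline(knots, ctrl, degree)(xs)   # B-spline pole-cover profile
    B = field_above_poles(h, xs)
    return np.mean((B - B.mean()) ** 2)

res = minimize(nonuniformity, 0.5 * np.ones(n_ctrl), method="Nelder-Mead")
print(res.fun)   # non-uniformity achieved by the optimized cover shape
```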

Findings

The shape optimization attains its target and yields pole cover shapes that give rise to a magnetic field that is uniform over a larger domain.

Research limitations/implications

This increased magnetic field uniformity is obtained at the cost of a pole cover shape that differs per pole. This limitation has negligible impact on the manufacturing of the separator. The new pole cover shapes therefore lead to improved performance of the density separation.

Practical implications

Due to the larger uniformity of the generated field, these shapes should enable larger amounts of waste to be processed than with the previous design.

Originality/value

This paper treats the shape optimization of magnetic density separators systematically and presents new shapes for the ferromagnetic pole covers.

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 4
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 May 2001

N.P. Weatherill, O. Hassan, K. Morgan, J.W. Jones and B. Larwood

Abstract

A general philosophy is presented in which all the modules within the computational cycle are parallelised and executed on parallel computer hardware, thereby avoiding the creation of computational bottlenecks. In particular, unstructured mesh generation with adaptation, computational fluid dynamics and computational electromagnetic solvers and the visualisation of grid and solution data are all performed in parallel. In addition, all these modules are embedded within a parallel problem solving environment. This paper will provide an overview of these developments. In particular, details of the parallel mesh generator, which has been used to generate meshes in excess of 100 million elements, will be given. A brief overview will be presented of the approach used to parallelise the solvers and how large data sets are interrogated and visualised on distributed computer platforms. Details of the parallel adaptation algorithm will be presented. These parallel component modules are linked using CORBA communication to provide an integrated parallel approach for large-scale simulations. Several examples are given of the approach applied to the simulation of large aerospace calculations in the fields of aerodynamics and electromagnetics.
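
The central pattern of the paragraph, partition the problem, let every rank work on its own piece, then combine the results, can be sketched with standard MPI tooling. This is a minimal illustration with mpi4py, not the authors' code: the "mesh" is a plain array split evenly across ranks, whereas the paper's system partitions genuine unstructured meshes and links its modules through CORBA.

```python
# Run with, e.g.: mpiexec -n 4 python partition_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Placeholder "mesh": element ids; a real code would partition an
    # unstructured mesh (e.g. with a graph partitioner) instead of
    # slicing an array.
    elements = np.arange(1_000_000, dtype=np.int64)
    chunks = np.array_split(elements, size)
else:
    chunks = None

local = comm.scatter(chunks, root=0)        # distribute the subdomains

# Placeholder per-subdomain "solve": each rank works on its own elements.
local_result = float(np.sum(np.sqrt(local.astype(np.float64))))

total = comm.reduce(local_result, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks, combined result: {total:.3e}")
```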

Details

Engineering Computations, vol. 18 no. 3/4
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 23 October 2023

Glenn W. Harrison and Don Ross

Abstract

Behavioral economics poses a challenge for the welfare evaluation of choices, particularly those that involve risk. It demands that we recognize that the descriptive account of behavior toward those choices might not be the ones we were all taught, and still teach, and that subjective risk perceptions might not accord with expert assessments of probabilities. In addition to these challenges, we are faced with the need to jettison naive notions of revealed preferences, according to which every choice by a subject expresses her objective function, as behavioral evidence forces us to confront pervasive inconsistencies and noise in a typical individual’s choice data. A principled account of errant choice must be built into models used for identification and estimation. These challenges demand close attention to the methodological claims often used to justify policy interventions. They also require, we argue, closer attention by economists to relevant contributions from cognitive science. We propose that a quantitative application of the “intentional stance” of Dennett provides a coherent, attractive and general approach to behavioral welfare economics.

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Article
Publication date: 6 November 2023

Daniel E.S. Rodrigues, Jorge Belinha and Renato Natal Jorge

Abstract

Purpose

Fused Filament Fabrication (FFF) is an extrusion-based manufacturing process using fused thermoplastics. Despite its low cost, FFF is not extensively used in high-value industrial sectors, mainly due to parts' anisotropy (related to the deposition strategy) and residual stresses (caused by successive heating cycles). Thus, this study aims to investigate improvements to the process and the optimization of the printed parts.

Design/methodology/approach

In this work, a meshless technique – the Radial Point Interpolation Method (RPIM) – is used to numerically simulate the viscoplastic extrusion process, the initial phase of the FFF. Unlike in the FEM, in meshless methods there is no pre-established relationship between the nodes, so the nodal discretization does not suffer from mesh distortion and can easily be modified by adding or removing nodes from the initial nodal set. The accuracy of the obtained results highlights the importance of using meshless techniques in this field.
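
As background for how RPIM interpolates between such freely placed nodes, here is a minimal sketch under assumptions: the shape functions at a point are obtained from a radial basis (a Gaussian here; the paper does not necessarily use this kernel) augmented with linear polynomial terms, by solving the usual moment-matrix system. The real code couples these shape functions with the flow and heat-transfer formulations.

```python
import numpy as np

def rpim_shape_functions(x, nodes, c=0.5):
    """RPIM shape functions at point `x` for a 2-D support `nodes`
    (n x 2 array), using a Gaussian RBF plus linear polynomial terms."""
    n = len(nodes)
    def rbf(a, b):
        return np.exp(-c * np.sum((a - b) ** 2, axis=-1))
    # Moment matrix G = [[R, P], [P^T, 0]] from the support nodes.
    R = rbf(nodes[:, None, :], nodes[None, :, :])   # n x n RBF block
    P = np.column_stack([np.ones(n), nodes])        # n x 3 linear terms
    G = np.block([[R, P], [P.T, np.zeros((3, 3))]])
    # Right-hand side [r(x); p(x)] evaluated at the query point.
    rhs = np.concatenate([rbf(nodes, x), [1.0, x[0], x[1]]])
    return np.linalg.solve(G, rhs)[:n]              # shape functions

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
phi = rpim_shape_functions(np.array([0.3, 0.4]), nodes)
print(phi, phi.sum())   # partition of unity: the sum is ~1
```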

Findings

Meshless methods show particular relevance in this topic since the nodes can be distributed to match the layer-by-layer growing condition of the printing process.

Originality/value

Using the flow formulation combined with the heat transfer formulation presented here for the first time within an in-house RPIM code, an algorithm is proposed, implemented and validated for benchmark examples.

Article
Publication date: 13 June 2016

Zahur Ullah, Will Coombs and C. Augarde

Abstract

Purpose

A variety of meshless methods have been developed in the last 20 years with the intention of solving practical engineering problems, but they remain limited to small academic problems due to their high computational cost compared with standard finite element methods (FEM). The purpose of this paper is to develop efficient and accurate algorithms based on meshless methods for the solution of problems involving both material and geometrical nonlinearities.

Design/methodology/approach

A parallel two-dimensional linear elastic computer code is presented for a meshless method based on maximum entropy basis functions. The two-dimensional algorithm is subsequently extended to three-dimensional adaptive nonlinear and three-dimensional parallel nonlinear adaptively coupled finite element-meshless cases. The Prandtl-Reuss constitutive model is used to model elasto-plasticity, and total Lagrangian formulations are used to model finite deformation. Furthermore, the Zienkiewicz-Zhu and Chung-Belytschko error estimation procedures are used in the FE and meshless regions of the problem domain, respectively. The Message Passing Interface (MPI) library and the open-source software packages METIS and MUMPS (MUltifrontal Massively Parallel Solver) are used for the high-performance computations.
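
To give a flavor of the maximum entropy basis functions underpinning the meshless region, the following is a minimal sketch, not the paper's implementation: first-order max-ent shape functions at an evaluation point, computed by Newton iteration on the dual problem. The Gaussian prior weights and the locality parameter `beta` are assumptions for the example.

```python
import numpy as np

def maxent_basis(x, nodes, beta=2.0, tol=1e-10, max_iter=50):
    """First-order max-ent basis functions: phi_i proportional to
    w_i * exp(lam . (x_i - x)), with lam found by Newton iteration so
    that sum_i phi_i (x_i - x) = 0 (linear reproducing conditions)."""
    d = nodes - x                                   # n x dim offsets
    w = np.exp(-beta * np.sum(d ** 2, axis=1))      # Gaussian priors
    lam = np.zeros(nodes.shape[1])
    for _ in range(max_iter):
        q = w * np.exp(d @ lam)
        phi = q / q.sum()
        r = phi @ d                                 # dual gradient
        if np.linalg.norm(r) < tol:
            break
        H = d.T @ (phi[:, None] * d) - np.outer(r, r)   # dual Hessian
        lam -= np.linalg.solve(H, r)                # Newton step
    return phi

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
phi = maxent_basis(np.array([0.25, 0.5]), nodes)
print(phi, phi.sum(), phi @ nodes)   # sums to 1; reproduces the point
```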

Findings

Numerical examples are given to demonstrate the correct implementation and performance of the parallel algorithms. The agreement between the numerical and analytical results in the linear elastic example is excellent. For the nonlinear problems, load-displacement curves are compared with reference FEM results and found to be in very good agreement. In contrast to the FEM, no volumetric locking was observed with the meshless method. Furthermore, it is shown that increasing the number of processors up to a given number improves the performance of the parallel algorithms in terms of simulation time, speedup and efficiency.

Originality/value

Problems involving both material and geometrical nonlinearities are of practical importance in many engineering applications, e.g. geomechanics, metal forming and biomechanics. A family of parallel algorithms has been developed in this paper for these problems using an adaptively coupled finite element-meshless method (based on maximum entropy basis functions) for distributed memory computer architectures.
