Search results

1 – 10 of 437
Article
Publication date: 13 August 2019

Xiaosong Du and Leifur Leifsson

Model-assisted probability of detection (MAPOD) is an important approach used as part of assessing the reliability of nondestructive testing systems. The purpose of this paper is…

Abstract

Purpose

Model-assisted probability of detection (MAPOD) is an important approach used as part of assessing the reliability of nondestructive testing systems. The purpose of this paper is to apply the polynomial chaos-based Kriging (PCK) metamodeling method to MAPOD for the first time to enable efficient uncertainty propagation, which is currently a major bottleneck when using accurate physics-based models.

Design/methodology/approach

In this paper, the state-of-the-art Kriging, polynomial chaos expansions (PCE) and PCK are applied to "â vs a"-based MAPOD of ultrasonic testing (UT) benchmark problems. In particular, Kriging interpolation matches the observations well, while PCE captures the global trend accurately. The proposed uncertainty propagation (UP) approach for MAPOD using PCK adopts the PCE bases as the trend function of the universal Kriging model, aiming to combine the advantages of both metamodels.
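The PCK construction described here, with PCE bases serving as the trend of a universal Kriging model, can be illustrated with a short self-contained sketch. The 1D test function, the Hermite trend and the fixed correlation parameter below are illustrative assumptions, not the authors' implementation, which would also select the basis and tune the hyperparameters:

```python
import numpy as np

def hermite_basis(x, degree):
    """Probabilists' Hermite polynomials He_0..He_degree evaluated at x."""
    H = [np.ones_like(x), x]
    for n in range(1, degree):
        H.append(x * H[n] - n * H[n - 1])
    return np.column_stack(H[:degree + 1])

def pck_fit(X, y, degree=2, theta=4.0, nugget=1e-8):
    """Universal Kriging whose trend is a PCE basis: a PC-Kriging sketch."""
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + nugget * np.eye(len(X))
    F = hermite_basis(X, degree)
    Ri = np.linalg.inv(R)
    beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ y)  # GLS trend coefficients
    gamma = Ri @ (y - F @ beta)                          # Kriging residual weights
    return beta, gamma, X, degree, theta

def pck_predict(model, x):
    beta, gamma, X, degree, theta = model
    r = np.exp(-theta * (x[:, None] - X[None, :]) ** 2)
    return hermite_basis(x, degree) @ beta + r @ gamma   # trend + stochastic part

# Toy demonstration on a smooth 1D response
X = np.linspace(-2.0, 2.0, 8)
y = np.sin(2 * X) + 0.5 * X ** 2
model = pck_fit(X, y)
yhat = pck_predict(model, X)   # the Kriging part interpolates the training data
```

The two ingredients are visible in `pck_predict`: the PCE trend captures the global shape, and the correlation term corrects it so the training points are matched exactly.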

Findings

To reach a pre-set accuracy threshold, the PCK method requires 50 per cent fewer training points than the PCE method, and around one order of magnitude fewer than Kriging for the test cases considered. The relative differences in the key MAPOD metrics, compared with those from the physics-based models, are kept within 1 per cent.

Originality/value

The contributions of this work are the first application of the PCK metamodel to MAPOD analysis, the first comparison of PCK with the current state-of-the-art metamodels for MAPOD, and new MAPOD results for the UT benchmark cases.

Details

Engineering Computations, vol. 37 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 9 May 2008

D. Voyer, F. Musy, L. Nicolas and R. Perrussel

The aim is to apply probabilistic approaches to electromagnetic numerical dosimetry problems in order to take into account the variability of the input parameters.

Abstract

Purpose

The aim is to apply probabilistic approaches to electromagnetic numerical dosimetry problems in order to take into account the variability of the input parameters.

Design/methodology/approach

A classic finite element method is coupled with probabilistic methods. These probabilistic methods are based on the expansion of the random parameters in two different ways: a spectral expansion and a nodal expansion.

Findings

The computation of the mean and the variance on a simple scattering problem shows that only a few hundred calculations are required when applying these methods, whereas the Monte Carlo method needs several thousand samples to obtain comparable accuracy.
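The spectral-expansion route to moments can be illustrated with a one-parameter toy problem, unrelated to the paper's scattering example: compute the polynomial chaos coefficients with a small Gauss-Hermite quadrature, then read the mean and variance directly off the coefficients, using only a handful of model evaluations where Monte Carlo needs thousands.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def g(xi):
    """Toy response of one standard-normal random parameter."""
    return np.exp(0.3 * xi) + xi ** 2

# Spectral (polynomial chaos) expansion: a 20-node quadrature replaces
# thousands of random samples.
P = 6                                    # expansion order
nodes, weights = hermegauss(20)          # Gauss-Hermite rule, weight e^(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)   # normalise to the N(0,1) measure
coeffs = np.array([
    np.sum(weights * g(nodes) * hermeval(nodes, np.eye(P + 1)[k]))
    / math.factorial(k)                  # divide by E[He_k^2] = k!
    for k in range(P + 1)
])

# Mean and variance read off the coefficients directly
mean_pce = coeffs[0]
var_pce = sum(coeffs[k] ** 2 * math.factorial(k) for k in range(1, P + 1))

# Monte Carlo reference needs far more model evaluations for similar accuracy
rng = np.random.default_rng(0)
mc = g(rng.standard_normal(200_000))
```

Here the orthogonality of the Hermite basis is what makes the moments available in closed form: the mean is the zeroth coefficient and the variance is a weighted sum of squared coefficients.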

Originality/value

The number of calculations is reduced using several techniques: a regression technique, sparse grids computed with the Smolyak algorithm, and a suitably chosen coordinate system.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 27 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 23 May 2023

Shiyuan Yang, Debiao Meng, Hongtao Wang, Zhipeng Chen and Bing Xu

This study conducts a comparative study on the performance of reliability assessment methods based on adaptive surrogate models to accurately assess the reliability of automobile…

Abstract

Purpose

This study conducts a comparative investigation of the performance of reliability assessment methods based on adaptive surrogate models, with the aim of accurately assessing the reliability of automobile components, which is critical to the safe operation of vehicles.

Design/methodology/approach

In this study, different adaptive learning strategies and surrogate models are combined to study their performance in reliability assessment of automobile components.

Findings

A comparison across the reliability evaluation problems of four automobile components shows that the Kriging model and Polynomial Chaos-Kriging (PCK) have the better robustness. Considering the trade-off between accuracy and efficiency, PCK is optimal. The Constrained Min-Max (CMM) learning function depends only on sample information, so it is suitable for most surrogate models. In the four calculation examples, the combination of CMM and PCK performs relatively well and is therefore recommended for reliability evaluation problems of automobile components.
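The adaptive loop that such methods share (fit a surrogate, pick the next training point with a learning function, refit) can be sketched generically. The sketch below uses the classical U learning function and an ordinary Kriging surrogate on a toy linear limit state; it is not the paper's CMM function or its automobile examples, and the simplified prediction-variance formula is an illustrative assumption.

```python
import numpy as np

def g(x):                                  # toy limit state, failure if g < 0
    return 3.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(0)
pop = rng.standard_normal((20_000, 2))     # Monte Carlo population to classify

def fit(Xt, yt, theta=0.5, nugget=1e-6):
    """Ordinary Kriging with a Gaussian correlation and constant trend."""
    R = np.exp(-theta * ((Xt[:, None, :] - Xt[None, :, :]) ** 2).sum(-1))
    Ri = np.linalg.inv(R + nugget * np.eye(len(Xt)))
    one = np.ones(len(Xt))
    m = (one @ Ri @ yt) / (one @ Ri @ one)     # GLS constant trend
    return Xt, m, Ri @ (yt - m), Ri, theta

def predict(model, x):
    Xt, m, w, Ri, theta = model
    r = np.exp(-theta * ((x[:, None, :] - Xt[None, :, :]) ** 2).sum(-1))
    mu = m + r @ w
    s2 = np.maximum(1.0 - np.einsum('ij,jk,ik->i', r, Ri, r), 1e-10)  # simplified
    return mu, np.sqrt(s2)

Xt = rng.standard_normal((10, 2)) * 1.5    # small initial design of experiments
yt = g(Xt)
for _ in range(30):                        # adaptive enrichment loop
    model = fit(Xt, yt)
    mu, s = predict(model, pop)
    U = np.abs(mu) / s                     # U function: misclassification risk
    i = np.argmin(U)                       # enrich where the sign is least certain
    Xt = np.vstack([Xt, pop[i]])
    yt = np.append(yt, g(pop[i:i + 1]))

model = fit(Xt, yt)                        # final surrogate on all 40 points
mu, _ = predict(model, pop)
pf = (mu < 0).mean()                       # failure probability estimate
```

Only 40 limit-state evaluations classify the whole 20,000-sample population; swapping in a different learning function or surrogate changes just two lines, which is the kind of pairing the paper compares.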

Originality/value

Although much research has been conducted on adaptive surrogate-model-based reliability evaluation methods, there are still relatively few studies that apply them comprehensively to the reliability evaluation of automobile components. In this study, different adaptive learning strategies and surrogate models are combined to study their performance in the reliability assessment of automobile components. In particular, a superior combination of surrogate model and learning strategy is identified, which is instructive for adaptive surrogate-model-based reliability analysis of automobile components.

Details

International Journal of Structural Integrity, vol. 14 no. 3
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 24 February 2012

Feng Wang, Chenfeng Li, Jianwen Feng, Song Cen and D.R.J. Owen

The purpose of this paper is to present a novel gradient‐based iterative algorithm for the joint diagonalization of a set of real symmetric matrices. The approximate joint…

Abstract

Purpose

The purpose of this paper is to present a novel gradient-based iterative algorithm for the joint diagonalization of a set of real symmetric matrices. The approximate joint diagonalization of a set of matrices is an important tool for solving stochastic linear equations. As an application, reliability analysis of structures by means of stochastic finite element analysis based on the joint diagonalization approach is also introduced in this paper, providing useful references for practicing engineers.

Design/methodology/approach

Starting from a least squares (LS) criterion, the authors obtain a classical nonlinear cost function and transform the joint diagonalization problem into a least-squares-like minimization problem. A gradient method for minimizing such a cost function is derived and tested against other techniques in engineering applications.

Findings

A novel approach is presented for the joint diagonalization of a set of real symmetric matrices. The new algorithm works on a numerical gradient basis and solves the problem iteratively. As demonstrated by examples, the new algorithm shows the merits of simplicity, effectiveness and computational efficiency.

Originality/value

A novel algorithm for the joint diagonalization of real symmetric matrices is presented in this paper. The new algorithm is based on the least squares criterion, and it iteratively searches for the optimal transformation matrix based on the gradient of the cost function, which can be computed in closed form. Numerical examples show that the new algorithm is efficient and robust. The new algorithm is applied in conjunction with stochastic finite element methods, and very promising results are observed that match the Monte Carlo method very well, but with higher computational efficiency. The new method is also tested in the context of structural reliability analysis. The reliability index obtained with the joint diagonalization approach is compared with that of the conventional Hasofer-Lind algorithm, and again good agreement is achieved.

Article
Publication date: 17 July 2009

Emmanuel Blanchard, Adrian Sandu and Corina Sandu

The purpose of this paper is to propose a new computational approach for parameter estimation in the Bayesian framework. A posteriori probability density functions are obtained…

Abstract

Purpose

The purpose of this paper is to propose a new computational approach for parameter estimation in the Bayesian framework. A posteriori probability density functions are obtained using the polynomial chaos theory for propagating uncertainties through system dynamics. The new method has the advantage of being able to deal with large parametric uncertainties, non‐Gaussian probability densities and nonlinear dynamics.

Design/methodology/approach

The maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. Direct stochastic collocation is used as a less computationally expensive alternative to the traditional Galerkin approach to propagate the uncertainties through the system in the polynomial chaos framework.
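A minimal sketch of this idea, using a hypothetical exponential-decay model rather than the authors' mechanical systems: the model is run only at collocation nodes to build a polynomial chaos surrogate of the observables, and the Bayesian cost (negative log-posterior) is then minimized on the cheap surrogate instead of the model.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Hypothetical forward model: exponential decay observed at a few times,
# with uncertain decay rate p = 1.0 + 0.3*xi, xi ~ N(0,1).
t_obs = np.array([0.5, 1.0, 1.5, 2.0])
def forward(xi):
    p = 1.0 + 0.3 * np.atleast_1d(xi)[:, None]
    return np.exp(-p * t_obs)                 # shape (n_xi, n_obs)

# Direct stochastic collocation: the model runs only at quadrature nodes
P = 5
nodes, weights = hermegauss(15)
weights = weights / np.sqrt(2 * np.pi)
Y = forward(nodes)                            # the only expensive model calls
coeffs = np.array([
    (weights[:, None] * Y * hermeval(nodes, np.eye(P + 1)[k])[:, None]).sum(axis=0)
    / math.factorial(k)                       # divide by E[He_k^2] = k!
    for k in range(P + 1)
])                                            # shape (P+1, n_obs)

def surrogate(xi):
    """Cheap PCE replacement for forward(); shape (..., n_obs)."""
    return np.moveaxis(hermeval(xi, coeffs), 0, -1)

# Synthetic data, then the Bayesian cost evaluated on the surrogate
xi_true, sigma = 0.4, 0.01
rng = np.random.default_rng(1)
data = forward(xi_true)[0] + sigma * rng.standard_normal(len(t_obs))
grid = np.linspace(-3.0, 3.0, 2001)
pred = surrogate(grid)                        # (2001, n_obs), no model calls
neg_log_post = ((pred - data) ** 2).sum(axis=1) / (2 * sigma ** 2) + grid ** 2 / 2
xi_map = grid[np.argmin(neg_log_post)]        # maximum a posteriori estimate
```

The grid search stands in for a proper optimizer; the point is that once the 15 collocation runs are done, evaluating the posterior anywhere costs only a polynomial evaluation.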

Findings

The new approach is explained and applied to very simple mechanical systems in order to illustrate how the Bayesian cost function can be affected by the noise level in the measurements, undersampling, non-identifiability of the system, non-observability and excitation signals that are not rich enough. When the system is non-identifiable and a priori knowledge of the parameter uncertainties is available, regularization techniques can still yield the most likely values among the possible combinations of uncertain parameters that result in the same time responses as the ones observed.

Originality/value

The polynomial chaos method has been shown to be considerably more efficient than Monte Carlo in the simulation of systems with a small number of uncertain parameters. This is believed to be the first time the polynomial chaos theory has been applied to Bayesian estimation.

Details

Engineering Computations, vol. 26 no. 5
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 5 July 2013

Peter Offermann and Kay Hameyer

Due to the production process, arc segment magnets with radial magnetization for surface‐mounted permanent‐magnet synchronous machines (PMSM) can exhibit a deviation from the…

Abstract

Purpose

Due to the production process, arc segment magnets with radial magnetization for surface-mounted permanent-magnet synchronous machines (PMSM) can exhibit a deviation from the intended ideal, radially directed magnetization. In such cases, the resulting air gap field may show spatial variations in the angle and absolute value of the flux density. To account for such variations, this paper aims to create and evaluate a stochastic magnet model.

Design/methodology/approach

In this paper, a polynomial chaos meta-model approach, extracted from a finite element model, is compared to a direct sampling approach. Both approaches are evaluated using Monte-Carlo simulation for the calculation of the flux density above a single magnet surface.

Findings

The proposed approach makes it possible to represent the flux density's variations in terms of the magnet's stochastic input variations, which is not possible with pure Monte-Carlo simulation. Furthermore, the resulting polynomial chaos meta-model can be used to accelerate the calculation of error probabilities for a given limit state function by a factor of ten.

Research limitations/implications

Due to epistemic uncertainty, magnet variations are assumed to be purely Gaussian distributed.

Originality/value

The comparison of both approaches verifies the assumption that the polynomial chaos meta‐model of the magnets will be applicable for a complete machine simulation.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 4
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 24 August 2019

Yangtian Li, Haibin Li and Guangmei Wei

To represent models with many parameters by polynomial chaos expansion (PCE) and improve its accuracy, this paper aims to present a dimension-adaptive algorithm-based PCE…

Abstract

Purpose

To represent models with many parameters by polynomial chaos expansion (PCE) and to improve its accuracy, this paper aims to present a dimension-adaptive algorithm-based PCE technique and to verify the feasibility of the proposed method by taking solid rocket motor ignition under low temperature as an example.

Design/methodology/approach

The main approaches of this work are as follows: presenting a two-step dimension-adaptive algorithm; improving the accuracy of the resulting PCE surrogate model by computing the PCE coefficients with the dimension-adaptive algorithm; and applying the proposed method to the uncertainty quantification (UQ) of solid rocket motor ignition under low temperature to verify its feasibility.

Findings

The results indicate that, compared with conventional non-intrusive methods, the proposed method raises the computational accuracy significantly while meeting the efficiency requirement.

Originality/value

This paper proposes an approach that obtains the optimal non-uniform grid, which avoids the issue of overfitting or underfitting.

Article
Publication date: 21 March 2019

Huan Zhao and Zhenghong Gao

The high probability of the occurrence of separation bubbles or shocks and early transition to turbulence on airfoil surfaces makes it very difficult to design high-lift and…

Abstract

Purpose

The high probability of the occurrence of separation bubbles or shocks and early transition to turbulence on airfoil surfaces makes it very difficult to design high-lift, high-speed Natural-Laminar-Flow (NLF) airfoils for high-altitude long-endurance unmanned air vehicles. To resolve this issue, a framework of uncertainty-based design optimization (UBDO) is developed based on an adjusted polynomial chaos expansion (PCE) method.

Design/methodology/approach

The γ-Reθt transition model combined with the shear-stress-transport k-ω turbulence model is used to predict the laminar-turbulent transition. The particle swarm optimization algorithm and PCE are integrated to search for the optimal NLF airfoil. Using the proposed UBDO framework, the problem is regularized to obtain an optimal airfoil with a trade-off of aerodynamic performances under fully turbulent and free transition conditions. The trade-off ensures good performance when early transition to turbulence occurs on the NLF airfoil surfaces.

Findings

The results indicate that the UBDO of the NLF airfoil considering Mach number and lift coefficient uncertainty under the free transition condition shows a significant deterioration when complicated flight conditions lead to early transition to turbulence. Meanwhile, the UBDO of the NLF airfoil with a trade-off of performances under both fully turbulent and free transition conditions maintains robust and reliable aerodynamic performance under complicated flight conditions.

Originality/value

In this work, the authors build an effective uncertainty-based design framework based on an adjusted PCE method and apply the framework to design two high-performance NLF airfoils. One of the two NLF airfoils considers Mach number and lift coefficient uncertainty under the free transition condition, and the other considers uncertainties under both fully turbulent and free transition conditions. The results show that robust design of NLF airfoils should simultaneously consider Mach number, lift coefficient (angle of attack) and transition location uncertainty.

Article
Publication date: 12 April 2022

Yao Pei, Lionel Pichon, Mohamed Bensetti and Yann Le Bihan

The purpose of this study is to decrease the computation time caused by the large number of simulations involved in a parametric sweep when the model is in a three-dimensional…

Abstract

Purpose

The purpose of this study is to decrease the computation time caused by the large number of simulations involved in a parametric sweep when the model is in a three-dimensional environment.

Design/methodology/approach

In this paper, a new methodology combining PCE and a controlled, elitist genetic algorithm is proposed to design inductive power transfer (IPT) systems. The relationship between the quantities of interest (mutual inductance and ferrite volume) and the structural parameters (ferrite dimensions) is expressed by a PCE metamodel. Then, two objective functions corresponding to mutual inductance and ferrite volume are defined and combined to obtain optimal parameters with a trade-off between these outputs.
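The surrogate-then-optimize workflow can be sketched as follows. The two response functions below are made-up stand-ins for the mutual inductance and ferrite volume (the real quantities come from 3D simulations), the basis is a simple total-degree-2 monomial set, and a weighted-sum grid search replaces the paper's elitist genetic algorithm.

```python
import numpy as np

# Hypothetical toy stand-ins for the expensive 3D simulations:
# an "inductance" to maximise and a "volume" to minimise, both
# functions of two normalised ferrite dimensions in [0, 1].
def inductance(x1, x2):
    return 1.0 + 0.8 * x1 + 0.5 * x2 - 0.6 * (x1 - 0.7) ** 2
def volume(x1, x2):
    return x1 * x2

# Step 1: a small design of experiments replaces the full parametric sweep
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 2))          # ~100 "full simulations"
yM = inductance(X[:, 0], X[:, 1])

# Step 2: least-squares fit of a total-degree-2 polynomial metamodel
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
cM, *_ = np.linalg.lstsq(basis(X), yM, rcond=None)

# Step 3: optimise a weighted trade-off on the cheap metamodel only
grid = np.stack(np.meshgrid(np.linspace(0, 1, 201),
                            np.linspace(0, 1, 201)), axis=-1).reshape(-1, 2)
score = -(basis(grid) @ cM) + 0.5 * volume(grid[:, 0], grid[:, 1])
best = grid[np.argmin(score)]                 # trade-off design point
```

The 40,401 candidate evaluations in step 3 touch only the metamodel, which is where the reported saving over 20,000 direct 3D simulations comes from.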

Findings

With the number of individuals and generations defined in the optimization algorithm of this paper, a direct optimization would require 20,000 calculations in a 3D environment, which is quite time-consuming. Building the PCE metamodel of the mutual inductance M, by contrast, requires at least 100 calculations, after which each evaluation of M based on the metamodel takes 1 or 2 s. Compared with a conventional optimization based on the 3D model, this approach therefore obtains optimized results more easily and saves a large amount of computation time.

Originality/value

The multiobjective optimization based on PCEs could be helpful to perform the optimization when considering the system in a realistic 3D environment involving many parameters with low computation time.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 6
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in…

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing state of affairs significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, the research efforts of UQ are mostly concentrated in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method and the stochastic collocation method, among others. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on their applicability to different types of uncertainty propagation problems.
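As an illustration of the first method in this list, here is a textbook FORM sketch (the Hasofer-Lind/Rackwitz-Fiessler iteration) on a toy linear limit state in standard-normal space; it is generic and not tied to the paper's examples.

```python
import math
import numpy as np

# Toy limit state in standard-normal space: failure when u1 + u2 > 3,
# i.e. g(u) = 3 - u1 - u2 <= 0.
def g(u):
    return 3.0 - u[0] - u[1]

def grad_g(u):
    return np.array([-1.0, -1.0])

u = np.zeros(2)                            # start from the mean point
for _ in range(50):
    grad = grad_g(u)
    # HL-RF step: closest point on the linearised limit-state surface g = 0
    u_next = (grad @ u - g(u)) * grad / (grad @ grad)
    if np.linalg.norm(u_next - u) < 1e-12:
        u = u_next
        break
    u = u_next

beta = np.linalg.norm(u)                   # reliability index: distance to origin
pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))   # first-order Pf = Phi(-beta)
```

For this linear limit state the iteration converges in two steps to the exact design point; for nonlinear limit states the same loop runs until the update stalls, and SORM adds a curvature correction to the resulting probability.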

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401
