Search results

1 – 10 of 23
Article
Publication date: 1 October 2006

C.F. Li, Y.T. Feng, D.R.J. Owen and I.M. Davies

Abstract

Purpose

To provide an explicit representation for wide‐sense stationary stochastic fields which can be used in stochastic finite element modelling to describe random material properties.

Design/methodology/approach

This method represents wide‐sense stationary stochastic fields in terms of multiple Fourier series and a vector of mutually uncorrelated random variables, which are obtained by minimizing the mean‐squared error of a characteristic equation and solving a standard algebraic eigenvalue problem. The result can be treated as a semi‐analytic solution of the Karhunen–Loève expansion.

Findings

According to the Karhunen–Loève theorem, a second‐order stochastic field can be decomposed into a random part and a deterministic part. Owing to the harmonic essence of wide‐sense stationary stochastic fields, the decomposition can be effectively obtained with the assistance of multiple Fourier series.
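The decomposition described above can be sketched numerically. The following is a minimal toy example, not the authors' semi-analytic Fourier solution: it builds a discrete Karhunen–Loève expansion of a stationary field on a 1D grid, where the grid size, the exponential covariance kernel and the correlation length are all assumptions made for illustration.

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of a wide-sense stationary field.
# The covariance depends only on |x - y| (stationarity); the exponential
# kernel and correlation length below are illustrative assumptions.
n = 200
x = np.linspace(0.0, 1.0, n)
corr_len = 0.2
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance matrix

# Solve the discrete eigenvalue problem C v = lambda v; the field is then
# a truncated sum of eigenmodes weighted by uncorrelated random variables.
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1]          # sort modes by decreasing energy
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

m = 10                                   # truncation order
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)              # uncorrelated random variables
sample = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)  # one realization

captured = eigvals[:m].sum() / eigvals.sum()
print(f"variance captured by {m} modes: {captured:.3f}")
```

The paper's contribution is to obtain such modes semi-analytically via Fourier series rather than by the brute-force matrix eigensolve shown here.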

Practical implications

The proposed explicit representation of wide‐sense stationary stochastic fields is accurate, efficient and independent of the real shape of the random structure in consideration. Therefore, it can be readily applied in a variety of stochastic finite element formulations to describe random material properties.

Originality/value

This paper discloses the connection between the spectral representation theory of wide‐sense stationary stochastic fields and the Karhunen–Loève theorem of general second‐order stochastic fields, and obtains a Fourier–Karhunen–Loève representation for the former class of stochastic fields.

Details

Engineering Computations, vol. 23 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 June 2004

Marissa Condon and Rossen Ivanov

Abstract

This paper presents the application in circuit simulation of a method for model reduction of nonlinear systems that has recently been developed for chemical systems. The technique is an extension of the well‐known balanced truncation method that has been applied extensively in the reduction of linear systems. The technique involves the formation of controllability and observability gramians either by simulated results or by measurement data. The empirical gramians are subsequently employed to determine a subspace of the full state‐space that contains the most significant dynamics of the system. A Galerkin projection is used to project the system onto the subspace to form a lower‐dimensional nonlinear model. The method is applied to a nonlinear resistor network which is a standard example for exemplifying the effectiveness of a nonlinear reduction strategy.
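The pipeline the abstract describes can be illustrated on a small linear toy system (not the paper's nonlinear resistor network; the matrices, dimensions and time stepping below are assumptions): gramians are assembled empirically from simulated snapshots, and a Galerkin projection onto the dominant subspace yields the reduced model.

```python
import numpy as np

# Empirical-gramian balanced-truncation sketch on a linear toy system.
rng = np.random.default_rng(1)
n = 6                                                # full state dimension
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable toy dynamics
B = rng.standard_normal((n, 1))
C_out = rng.standard_normal((1, n))

dt, steps = 0.01, 2000
# Empirical controllability gramian from impulse-response snapshots
x = B[:, 0].copy()
Wc = np.zeros((n, n))
for _ in range(steps):
    Wc += np.outer(x, x) * dt
    x = x + dt * (A @ x)                 # forward Euler state update

# Empirical observability gramian from the adjoint (dual) simulation
z = C_out[0].copy()
Wo = np.zeros((n, n))
for _ in range(steps):
    Wo += np.outer(z, z) * dt
    z = z + dt * (A.T @ z)

# Dominant subspace of the gramian product, then Galerkin projection
eigvals, V = np.linalg.eig(Wc @ Wo)
order = np.argsort(eigvals.real)[::-1]
r = 2                                    # reduced order
Vr, _ = np.linalg.qr(np.real(V[:, order[:r]]))  # orthonormal basis
A_r = Vr.T @ A @ Vr                      # reduced system matrices
B_r = Vr.T @ B
C_r = C_out @ Vr
print(A_r.shape, B_r.shape, C_r.shape)
```

For a nonlinear system, as in the paper, the gramians are built from simulated or measured trajectories rather than impulse responses, but the projection step is the same.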

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 23 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 3 July 2017

Radoslav Jankoski, Ulrich Römer and Sebastian Schöps

Abstract

Purpose

The purpose of this paper is to present a computationally efficient approach for the stochastic modeling of an inhomogeneous reluctivity of magnetic materials. These materials can be part of electrical machines such as a single-phase transformer (a benchmark example that is considered in this paper). The approach is based on the Karhunen–Loève expansion (KLE). The stochastic model is further used to study the statistics of the self-inductance of the primary coil as a quantity of interest (QoI).

Design/methodology/approach

The computation of the KLE requires solving a generalized eigenvalue problem with dense matrices. The eigenvalues and eigenfunctions are computed with the Lanczos method, which requires only matrix–vector multiplications. The cost of performing matrix–vector multiplications with dense matrices is reduced by using hierarchical matrices.
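The matrix-vector-only access pattern can be sketched as follows (the hierarchical-matrix compression itself is omitted; the kernel, point set and correlation length are assumptions for illustration): a Lanczos-type solver such as SciPy's `eigsh` touches the matrix only through `matvec`, which is exactly the operation H-matrices make cheap.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Lanczos eigensolve through a matvec-only interface, as needed for KLE.
n = 300
pts = np.random.default_rng(2).random((n, 2))        # sample points in 2D
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = np.exp(-d / 0.3)                                 # dense covariance

op = LinearOperator((n, n), matvec=lambda v: K @ v)  # matvec-only access
vals, vecs = eigsh(op, k=8, which='LM')              # Lanczos iteration
vals, vecs = vals[::-1], vecs[:, ::-1]               # largest modes first
print(vals[:3])
```

In the paper the dense product `K @ v` is replaced by an H-matrix approximation, reducing both memory and the cost per iteration.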

Findings

The suggested approach is used to study the impact of the spatial variability of the magnetic reluctivity on the QoI. The statistics of the QoI are influenced by the correlation length of the random reluctivity: both the mean value and the standard deviation increase as the correlation length increases.

Originality/value

The KLE, computed by using hierarchical matrices, is used for uncertainty quantification of low-frequency electrical machines as a computationally efficient approach in terms of both memory requirement and computation time.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 36 no. 4
Type: Research Article
ISSN: 0332-1649

Book part
Publication date: 24 April 2023

Uwe Hassler and Mehdi Hosseinkouchack

Abstract

The authors propose a family of tests for stationarity against a local unit root. It builds on the Karhunen–Loève (KL) expansions of the limiting CUSUM process under the null hypothesis and a local alternative. The variance ratio type statistic VRq is a ratio of quadratic forms of q weighted Gaussian sums such that the nuisance long-run variance cancels asymptotically without having to be estimated. Asymptotic critical values and local power functions can be calculated by standard numerical means, and power grows with q. However, Monte Carlo experiments show that q must not be too large in finite samples if the tests are to retain correct size under the null. Balancing size and power results in performance superior to the classic KPSS test.
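The scale-cancellation property the abstract highlights can be illustrated with a toy variance-ratio-type quantity (this is a sketch only, not the authors' exact VRq statistic): weighted sums of the CUSUM process against the sine basis, which is the Karhunen–Loève basis of the Brownian bridge limit under the stationary null, divided by a quadratic form with the same scale.

```python
import numpy as np

# Toy variance-ratio-type statistic from the CUSUM of a demeaned series.
# Numerator and denominator share the same scale, so the long-run
# variance cancels without being estimated.
rng = np.random.default_rng(3)
y = rng.standard_normal(500)             # stationary null: white noise
n = len(y)
t = np.arange(1, n + 1) / n
S = np.cumsum(y - y.mean())              # CUSUM process

q = 4                                    # number of weighted sums
num = sum((S @ np.sin(np.pi * k * t) / n) ** 2 for k in range(1, q + 1))
den = (S @ S) / n ** 2                   # quadratic form, same scale
VR = num / den
print(round(VR, 4))
```

Rescaling the series leaves VR unchanged, which is the invariance that removes the nuisance long-run variance.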

Details

Essays in Honor of Joon Y. Park: Econometric Theory
Type: Book
ISBN: 978-1-83753-209-4

Article
Publication date: 31 July 2019

S. Saha Ray and S. Behera

Abstract

Purpose

A novel technique based on Bernoulli wavelets has been proposed to solve two-dimensional Fredholm integral equations of the second kind. The Bernoulli wavelets have been constructed by dilation and translation of Bernoulli polynomials. This paper aims to introduce the properties of Bernoulli wavelets and Bernoulli polynomials.

Design/methodology/approach

To solve the two-dimensional Fredholm integral equation of the second kind, the proposed method transforms the integral equation into a system of algebraic equations.
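The reduction to an algebraic system can be illustrated generically (plain Nyström collocation rather than Bernoulli wavelets; the kernel and right-hand side are toy choices): the double integral is replaced by a quadrature rule, turning the equation u(x,y) − ∫∫ k(x,y,s,t) u(s,t) ds dt = f(x,y) into a linear system.

```python
import numpy as np

# Nystrom discretization of a 2D Fredholm equation of the second kind.
m = 10                                          # grid points per dimension
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / m)                         # simple quadrature weights
X, Y = np.meshgrid(s, s, indexing='ij')
xf, yf = X.ravel(), Y.ravel()
wts = np.outer(w, w).ravel()                    # tensor-product 2D weights

# Separable toy kernel k(x,y,s,t) = 0.5*x*y*s*t and f(x,y) = x*y, chosen
# so the exact solution is proportional to x*y and easy to verify.
Kmat = 0.5 * (xf * yf)[:, None] * (xf * yf)[None, :] * wts[None, :]
f = xf * yf
u = np.linalg.solve(np.eye(m * m) - Kmat, f)    # the algebraic system
print(round(u.max(), 4))
```

A wavelet method replaces the point values here with wavelet coefficients, but the final step, solving a linear algebraic system, is the same.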

Findings

Numerical experiments show that the proposed two-dimensional wavelet technique can give highly accurate solutions and a good convergence rate.

Originality/value

The efficiency of the newly developed two-dimensional wavelet technique has been validated on a range of illustrative numerical examples involving two-dimensional Fredholm integral equations.

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long researched uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing state of affairs significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, the research efforts on UQ are mostly focused in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment not only on the accuracy and efficiency of each method but also on its applicability to different types of uncertainty propagation problems.
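Two of the simplest reviewed SRA methods can be contrasted on a toy limit state (the limit-state function and reliability index below are assumptions chosen so FORM is exact): for a linear limit state in standard Gaussian variables, FORM gives Pf = Φ(−β), which crude Monte Carlo should reproduce.

```python
import numpy as np
from math import erf, sqrt

# Toy limit state g(x) = 3 - (x1 + x2)/sqrt(2) with x ~ N(0, I):
# crude Monte Carlo vs the FORM result, which is exact in this linear case.
def Phi(z):                              # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

beta = 3.0                               # reliability index
pf_form = Phi(-beta)                     # FORM failure probability

rng = np.random.default_rng(4)
x = rng.standard_normal((200_000, 2))
g = 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
pf_mc = np.mean(g < 0.0)                 # crude Monte Carlo estimate

print(f"FORM: {pf_form:.5f}  MC: {pf_mc:.5f}")
```

The small failure probability (about 1.3e-3) already hints at why the paper compares variance-reduction schemes such as importance sampling and subset simulation: crude Monte Carlo needs very many samples for rare events.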

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 2 January 2009

M. Hautefeuille, S. Melnyk, J.B. Colliat and A. Ibrahimbegovic

Abstract

Purpose

The purpose of this paper is to discuss the inelastic behavior of heterogeneous structures within the framework of finite element modelling, by taking into account the related probabilistic aspects of the heterogeneities.

Design/methodology/approach

The paper shows how to construct a structured FE mesh representation for failure modelling of such structures, using as a building block a constant-stress element that can contain two different phases and a phase interface. All the modifications needed for such an element to account for inelastic behavior in each phase, and for the corresponding inelastic failure modes at the phase interface, are presented.

Findings

It is demonstrated by numerical examples that the proposed structured FE mesh approach is much more efficient than the non‐structured mesh representation. This feature is of special interest for probabilistic analysis, where a large amount of computation is needed to provide the corresponding statistics. One such case of probabilistic analysis is considered in this work, where the geometry of the phase interface is obtained as the result of a Gibbs random process.

Originality/value

The paper confirms that the most appropriate interpretation of the heterogeneous structure properties is obtained by taking into account the fine details of the internal structure, along with the related probabilistic aspects and the proper source of randomness, such as the porosity addressed herein.

Details

Engineering Computations, vol. 26 no. 1/2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 June 2003

Jon Rigelsford

Details

Sensor Review, vol. 23 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 12 June 2017

Khaoula Chikhaoui, Noureddine Bouhaddi, Najib Kacem, Mohamed Guedri and Mohamed Soula

Abstract

Purpose

The purpose of this paper is to develop robust metamodels that allow parametric uncertainties to be propagated, in the presence of localized nonlinearities, at reduced cost and without significant loss of accuracy.

Design/methodology/approach

The proposed metamodels combine the generalized polynomial chaos expansion (gPCE) for uncertainty propagation with reduced order models (ROMs). Because it is based on the computation of deterministic responses, the gPCE requires prohibitive computational time for large finite element models, large numbers of uncertain parameters and the presence of nonlinearities. To overcome this issue, a first metamodel is created by combining the gPCE with a ROM based on enriching the truncated Ritz basis with static residuals that take the stochastic and nonlinear effects into account. Extending this to the Craig–Bampton approach leads to a second metamodel.
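The gPCE ingredient on its own can be sketched in a few lines (the ROM coupling is omitted; the scalar toy response, sample size and truncation order are assumptions): a Hermite polynomial chaos surrogate is fitted by least squares, and the mean and variance are read directly off the coefficients.

```python
import numpy as np

# Hermite polynomial chaos surrogate of a scalar response, fitted by
# least squares on sampled deterministic evaluations.
rng = np.random.default_rng(5)
xi = rng.standard_normal(2000)           # standard Gaussian germ
y = np.exp(0.3 * xi)                     # toy nonlinear response

# Probabilists' Hermite polynomials He_0..He_3 (orthogonal under N(0,1))
H = np.column_stack([np.ones_like(xi), xi, xi**2 - 1, xi**3 - 3 * xi])
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

mean_pce = coef[0]                       # E[y] is the He_0 coefficient
var_pce = coef[1]**2 * 1 + coef[2]**2 * 2 + coef[3]**2 * 6  # k! norms
print(f"mean~{mean_pce:.4f}  var~{var_pce:.4f}")
```

Each sample of y here stands for one deterministic solve; this is exactly the cost the paper's ROM enrichment is designed to cut for large nonlinear FE models.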

Findings

Implementing the metamodels to approximate the time responses of a frame and a coupled micro-beam structure containing localized nonlinearities and stochastic parameters makes it possible to significantly reduce the computational cost, with acceptable loss of accuracy, with respect to the reference Latin hypercube sampling method.

Originality/value

The proposed combination of the gPCE and the ROMs leads to a computationally efficient and accurate tool for robust design in the presence of parametric uncertainties and localized nonlinearities.

Details

Engineering Computations, vol. 34 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 13 August 2019

Xiaosong Du and Leifur Leifsson

Abstract

Purpose

Model-assisted probability of detection (MAPOD) is an important approach used as part of assessing the reliability of nondestructive testing systems. The purpose of this paper is to apply the polynomial chaos-based Kriging (PCK) metamodeling method to MAPOD for the first time to enable efficient uncertainty propagation, which is currently a major bottleneck when using accurate physics-based models.

Design/methodology/approach

In this paper, the state-of-the-art Kriging, polynomial chaos expansions (PCE) and PCK are applied to “a^ vs a”-based MAPOD of ultrasonic testing (UT) benchmark problems. In particular, Kriging interpolation matches the observations well, while PCE is capable of capturing the global trend accurately. The proposed uncertainty propagation (UP) approach for MAPOD using PCK adopts the PCE bases as the trend function of the universal Kriging model, aiming to combine the advantages of both metamodels.
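The PCK idea, a polynomial trend combined with a Kriging interpolant of the residual, can be sketched as universal Kriging in one dimension (toy data, a fixed exponential kernel and a quadratic trend standing in for the PCE bases; this is not the paper's UT model).

```python
import numpy as np

# Universal-Kriging sketch: polynomial terms capture the global trend,
# the Kriging part interpolates the residual through the training data.
xs = np.linspace(0.0, 1.0, 12)           # training inputs
ys = np.sin(6 * xs) + xs**2              # toy response

def basis(x):                            # low-order trend ("PCE-like" part)
    return np.column_stack([np.ones_like(x), x, x**2])

def kern(a, b, ell=0.15):                # exponential covariance kernel
    return np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

K = kern(xs, xs) + 1e-10 * np.eye(len(xs))
F = basis(xs)
# Universal Kriging: solve the joint system [[K, F], [F^T, 0]].
Z = np.zeros((F.shape[1], F.shape[1]))
sol = np.linalg.solve(np.block([[K, F], [F.T, Z]]),
                      np.concatenate([ys, np.zeros(F.shape[1])]))
w, beta = sol[:len(xs)], sol[len(xs):]

xt = np.linspace(0.0, 1.0, 50)           # test grid
pred = kern(xt, xs) @ w + basis(xt) @ beta
err = np.max(np.abs(pred - (np.sin(6 * xt) + xt**2)))
print(f"max abs error on test grid: {err:.4f}")
```

The joint system enforces both interpolation of the data and orthogonality of the Kriging weights to the trend basis, which is what lets the trend absorb the global behavior while the kernel part handles local deviations.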

Findings

To reach a pre-set accuracy threshold, the PCK method requires 50 per cent fewer training points than the PCE method, and around one order of magnitude fewer than Kriging, for the test cases considered. The relative differences in the key MAPOD metrics, compared with those from the physics-based models, are within 1 per cent.

Originality/value

The contributions of this work are the first application of PCK metamodel for MAPOD analysis, the first comparison between PCK with the current state-of-the-art metamodels for MAPOD and new MAPOD results for the UT benchmark cases.

Details

Engineering Computations, vol. 37 no. 1
Type: Research Article
ISSN: 0264-4401
