Search results

1 – 10 of 884
Article
Publication date: 18 March 2021

Jinsheng Wang, Muhannad Aldosary, Song Cen and Chenfeng Li

Abstract

Purpose

Normal transformation is often required in structural reliability analysis to convert non-normal random variables into independent standard normal variables. The existing normal transformation techniques, for example, the Rosenblatt transformation and the Nataf transformation, usually require the joint probability density function (PDF) and/or marginal PDFs of the non-normal random variables. In practical problems, however, the joint PDF and marginal PDFs are often unknown due to a lack of data, while the statistical information is much more easily expressed in terms of statistical moments and correlation coefficients. This study aims to address this issue by presenting an alternative normal transformation method that does not require the PDFs of the input random variables.

Design/methodology/approach

The new approach, namely the Hermite polynomial normal transformation, expresses the normal transformation function in terms of Hermite polynomials and works with both uncorrelated and correlated random variables. Its application in structural reliability analysis using different methods is thoroughly investigated via a number of carefully designed comparison studies.
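
For orientation, the third-order variant of such a transformation can be sketched in a few lines: the non-normal variable X is written as X ≈ a0 + a1·He1(U) + a2·He2(U) + a3·He3(U), with U standard normal and He_k the probabilists' Hermite polynomials, and the coefficients are fitted to the first four moments. The snippet below is a minimal illustration of this idea, not the authors' implementation; the target moments are illustrative placeholders.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.optimize import fsolve

# Gauss-Hermite nodes/weights for expectations w.r.t. the standard normal
nodes, weights = hermegauss(30)
weights = weights / np.sqrt(2.0 * np.pi)

def transform(u, coeffs):
    """a0 + a1*He1(u) + a2*He2(u) + a3*He3(u) (probabilists' Hermite)."""
    return hermeval(u, coeffs)

def first_four_moments(coeffs):
    """Mean, standard deviation, skewness and kurtosis of the transform."""
    y = transform(nodes, coeffs)
    mean = np.sum(weights * y)
    var = np.sum(weights * (y - mean) ** 2)
    std = np.sqrt(var)
    skew = np.sum(weights * (y - mean) ** 3) / std ** 3
    kurt = np.sum(weights * (y - mean) ** 4) / var ** 2
    return np.array([mean, std, skew, kurt])

def fit_coefficients(target_moments):
    """Numerically solve the moment-matching equations for (a0, a1, a2, a3)."""
    residual = lambda c: first_four_moments(c) - target_moments
    start = np.array([target_moments[0], target_moments[1], 0.0, 0.0])  # pure normal guess
    return fsolve(residual, start)

# Illustrative target moments: mean, std, skewness, kurtosis
target = np.array([10.0, 2.0, 0.5, 3.8])
coeffs = fit_coefficients(target)
print("Hermite coefficients:", coeffs)
print("recovered moments:   ", first_four_moments(coeffs))
```

Once the coefficients are known, the map U → X is available in closed form and X → U follows by inverting the cubic, which is what makes moment-based schemes of this kind attractive when only moments are available.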

Findings

Comprehensive comparisons are conducted to examine the performance of the proposed Hermite polynomial normal transformation scheme. The results show that the presented approach has comparable accuracy to previous methods and can be obtained in closed form. Moreover, the new scheme requires only the first four statistical moments and/or the correlation coefficients between random variables, which greatly widens the applicability of normal transformations in practical problems.

Originality/value

This study interprets the classical polynomial normal transformation method in terms of Hermite polynomials, namely, the Hermite polynomial normal transformation, to convert uncorrelated/correlated random variables into standard normal random variables. The new scheme requires only the first four statistical moments to operate, making it particularly suitable for problems constrained by limited data. Besides, the extension to correlated cases can easily be achieved through the introduction of the Hermite polynomials. Compared to existing methods, the new scheme is cheap to compute and delivers comparable accuracy.

Details

Engineering Computations, vol. 38 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 20 May 2019

Yunfei Zu, Wenliang Fan, Jingyao Zhang, Zhengling Li and Makoto Ohsaki

Abstract

Purpose

Conversion of correlated random variables into independent variables, especially into independent standard normal variables, is a common technique for estimating the statistical moments of the response and evaluating the reliability of a random system, in which calculating the equivalent correlation coefficient is an important component. The purpose of this paper is to investigate an accurate, efficient and easy-to-implement estimation method for the equivalent correlation coefficient of various incomplete probability systems.

Design/methodology/approach

First, an approach based on Mehler’s formula for evaluating the equivalent correlation coefficient is introduced; then, by combining it with polynomial normal transformations, this approach is improved so that it is valid for various incomplete probability systems, and is named the direct method. Next, by introducing convenient linear reference variables for eight frequently used random variables and an approximation of the Rosenblatt transformation, a further improved implementation without an iteration process is developed, named the simplified method. Finally, several examples are investigated to verify the characteristics of the proposed methods.
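
The role of Mehler’s formula can be illustrated with a small sketch: if each marginal is represented by a third-order Hermite polynomial transform, Mehler’s formula turns the target correlation into a cubic equation for the equivalent Gaussian correlation. The code below is an assumed minimal illustration of that single step, not the authors' direct or simplified method; the coefficient values are placeholders.

```python
import math
import numpy as np

def equivalent_gaussian_correlation(a_x, a_y, rho_x):
    """Solve for rho_Z given third-order Hermite coefficients (a0..a3)
    of the two marginal transforms and the target correlation rho_x.
    Mehler's formula gives Cov(X, Y) = sum_{k>=1} k! a_k b_k rho_Z^k."""
    a_x, a_y = np.asarray(a_x, float), np.asarray(a_y, float)
    sig_x = math.sqrt(sum(math.factorial(k) * a_x[k] ** 2 for k in range(1, 4)))
    sig_y = math.sqrt(sum(math.factorial(k) * a_y[k] ** 2 for k in range(1, 4)))
    # cubic in rho_Z: 6*a3*b3*r^3 + 2*a2*b2*r^2 + a1*b1*r - rho_x*sig_x*sig_y = 0
    coeffs = [6.0 * a_x[3] * a_y[3],
              2.0 * a_x[2] * a_y[2],
              1.0 * a_x[1] * a_y[1],
              -rho_x * sig_x * sig_y]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    candidates = real[(real >= -1.0) & (real <= 1.0)]
    # keep the admissible root closest to the target correlation
    return float(candidates[np.argmin(np.abs(candidates - rho_x))])

# Illustrative coefficients for two nearly normal marginals
a_first = [0.0, 1.0, 0.05, 0.01]
a_second = [0.0, 1.0, 0.10, 0.02]
print(equivalent_gaussian_correlation(a_first, a_second, rho_x=0.6))
```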

Findings

The results of the examples in this paper show that both proposed methods are highly accurate; by comparison, the proposed simplified method is more effective and convenient.

Originality/value

Based on Mehler’s formula, two practical implementations for evaluating the equivalent correlation coefficient are proposed, which are accurate, efficient, easy to implement and valid for various incomplete probability systems.

Article
Publication date: 28 February 2023

Jinsheng Wang, Zhiyang Cao, Guoji Xu, Jian Yang and Ahsan Kareem

Abstract

Purpose

Assessing the failure probability of engineering structures remains a challenging task in the presence of various uncertainties due to the involvement of expensive-to-evaluate computational models. The traditional simulation-based approaches require tremendous computational effort, especially when the failure probability is small. Thus, the use of more efficient surrogate modeling techniques to emulate the true performance function has gained increasing attention and application in recent years. In this paper, an active learning method based on a Kriging model is proposed to estimate the failure probability with high efficiency and accuracy.

Design/methodology/approach

To effectively identify informative samples for the enrichment of the design of experiments, a set of new learning functions is proposed. These learning functions are successfully incorporated into a sampling scheme, where the candidate samples for the enrichment are uniformly distributed in the n-dimensional hypersphere with an iteratively updated radius. To further improve the computational efficiency, a parallelization strategy that enables the proposed algorithm to select multiple sample points in each iteration is presented by introducing the K-means clustering algorithm. Hence, the proposed method is referred to as the adaptive Kriging method based on K-means clustering and sampling in n-Ball (AK-KBn).
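
A minimal sketch of such an active-learning loop is given below for orientation. It is not the AK-KBn implementation: scikit-learn's Gaussian process stands in for the Kriging model, the classical U function stands in for the paper's new learning functions, the n-ball radius is kept fixed rather than iteratively updated, and the toy limit state `performance_fn` is invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def performance_fn(x):                       # toy limit state: g(x) <= 0 means failure
    return 3.0 - x[:, 0] - x[:, 1]

def sample_n_ball(n_samples, dim, radius):
    """Uniform samples inside an n-dimensional ball of given radius."""
    v = rng.standard_normal((n_samples, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / dim)
    return v * r

dim, radius, batch = 2, 5.0, 4
X = sample_n_ball(12, dim, radius)           # initial design of experiments
y = performance_fn(X)

for it in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    cand = sample_n_ball(5000, dim, radius)  # candidate pool in the n-ball
    mu, sd = gp.predict(cand, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)   # classical U learning function
    if U.min() > 2.0:                        # common stopping criterion
        break
    best = cand[np.argsort(U)[:10 * batch]]  # most informative candidates
    centers = KMeans(n_clusters=batch, n_init=10).fit(best).cluster_centers_
    X = np.vstack([X, centers])              # enrich the design of experiments
    y = np.append(y, performance_fn(centers))

mc = rng.standard_normal((10**5, dim))       # crude Monte Carlo on the surrogate
pf = np.mean(gp.predict(mc) <= 0.0)
print("estimated failure probability:", pf)
```

The batch selection via K-means is the parallelization idea referred to above: one surrogate fit yields several new training points per iteration instead of a single one.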

Findings

The performance of AK-KBn is evaluated through several numerical examples. According to the generated results, all the proposed learning functions are capable of guiding the search toward sample points close to the limit state surface (LSS) in the critical region and result in a converged Kriging model that perfectly matches the true one in the regions of interest. The AK-KBn method is demonstrated to be well suited for structural reliability analysis, and very good performance is observed in the investigated examples.

Originality/value

In this study, the statistical information of the Kriging prediction, the relative contribution of the sample points to the failure probability and the distances between the candidate samples and the existing ones are all integrated into the proposed learning functions, which enables effective selection of informative samples for updating the Kriging model. Moreover, the number of required iterations is reduced by introducing the parallel computing strategy, which can dramatically alleviate the computational cost when time-demanding numerical models are involved in the analysis.

Details

Engineering Computations, vol. 40 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 3 January 2017

Meghdad Tourandaz Kenari, Mohammad Sadegh Sepasian and Mehrdad Setayesh Nazar

Abstract

Purpose

The purpose of this paper is to present a new cumulant-based method, based on the properties of saddle-point approximation (SPA), to solve the probabilistic load flow (PLF) problem for distribution networks with wind generation.

Design/methodology/approach

This technique combines cumulant properties with the SPA to improve the analytical approach to PLF calculation. The proposed approach takes into account the load demand and wind generation uncertainties in distribution networks, where a suitable probabilistic model of the wind turbine (WT) is used.
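
To illustrate the kind of computation involved, the sketch below reconstructs a CDF from a cumulant generating function (CGF) with the Lugannani-Rice saddle-point formula. It is an assumed toy setting, not the paper's PLF formulation: the output is taken as a linearized combination of a normal load and a gamma-distributed wind term, and all numerical values (`mu_l`, `sig_l`, `k_w`, `th_w`, `c0`-`c2`) are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

mu_l, sig_l = 2.0, 0.3          # normal load term (illustrative)
k_w, th_w = 2.0, 0.5            # gamma wind term: shape, scale (illustrative)
c0, c1, c2 = 0.1, 1.0, -0.8     # linearized sensitivity coefficients (illustrative)

def K(s):                        # CGF of the output: sum of scaled input CGFs
    t1, t2 = c1 * s, c2 * s
    return c0*s + (mu_l*t1 + 0.5*sig_l**2*t1**2) - k_w*np.log(1.0 - th_w*t2)

def K1(s):                       # first derivative K'(s)
    t1, t2 = c1 * s, c2 * s
    return c0 + c1*(mu_l + sig_l**2*t1) + c2*k_w*th_w/(1.0 - th_w*t2)

def K2(s):                       # second derivative K''(s)
    t2 = c2 * s
    return (c1*sig_l)**2 + (c2**2)*k_w*th_w**2/(1.0 - th_w*t2)**2

def cdf_spa(x):
    """Lugannani-Rice approximation of P(Y <= x), valid away from the
    mean of Y, where the limiting form of the formula would be needed."""
    # saddlepoint: K'(s) = x, restricted to the domain 1 - th_w*c2*s > 0
    lo, hi = (-200.0, 1.0/(th_w*c2) - 1e-8) if c2 > 0 else (1.0/(th_w*c2) + 1e-8, 200.0)
    s = brentq(lambda t: K1(t) - x, lo, hi)
    w = np.sign(s) * np.sqrt(2.0 * (s*x - K(s)))
    u = s * np.sqrt(K2(s))
    return norm.cdf(w) + norm.pdf(w) * (1.0/w - 1.0/u)

# sanity check against Monte Carlo at one point
rng = np.random.default_rng(1)
y = c0 + c1*rng.normal(mu_l, sig_l, 10**6) + c2*rng.gamma(k_w, th_w, 10**6)
x0 = 1.0
print("SPA :", cdf_spa(x0))
print("MC  :", np.mean(y <= x0))
```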

Findings

The proposed procedure is applied to the IEEE 33-bus distribution test system, and the results are discussed. The output variables, with and without WT connection, are presented for normal and gamma random variables (RVs). The case studies demonstrate that the proposed method gives accurate results with relatively low computational burden, even for non-Gaussian probability density functions.

Originality/value

The main contribution of this paper is the use of the SPA for the reconstruction of the probability density function or cumulative distribution function in the PLF problem. To confirm the validity of the method, results are compared with Monte Carlo simulation and Gram–Charlier expansion results. From the viewpoint of accuracy and computational cost, the SPA surpasses the other approximations in almost all cases for obtaining the cumulative distribution function of the output RVs.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 36 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 11 May 2022

Xiangqian Sheng, Wenliang Fan, Qingbin Zhang and Zhengling Li

Abstract

Purpose

The polynomial dimensional decomposition (PDD) method is a popular tool to establish a surrogate model in several scientific areas and engineering disciplines. The selection of appropriate truncated polynomials is the main topic in the PDD. In this paper, an easy-to-implement adaptive PDD method with a better balance between precision and efficiency is proposed.

Design/methodology/approach

First, the original random variables are transformed into corresponding independent reference variables according to the statistical information of variables. Second, the performance function is decomposed as a summation of component functions that can be approximated through a series of orthogonal polynomials. Third, the truncated maximum order of the orthogonal polynomial functions is determined through the nonlinear judgment method. The corresponding expansion coefficients are calculated through the point estimation method. Subsequently, the performance function is reconstructed through appropriate orthogonal polynomials and known expansion coefficients.
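
A minimal sketch of the non-adaptive, univariate backbone of such a decomposition is shown below for orientation: each component function is expanded in probabilists' Hermite polynomials and the coefficients are computed by Gauss-Hermite point estimates with the remaining variables held at the reference point. The truncation logic (the nonlinear judgment method) and the adaptivity of the proposed scheme are not reproduced, and the performance function `g` is a toy example.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

def g(u):                                   # toy performance function in standard normal space
    return np.exp(0.3 * u[0]) + 0.5 * u[1]**2 - 2.0 * u[2]

dim, order, n_nodes = 3, 4, 7
nodes, weights = hermegauss(n_nodes)
weights = weights / np.sqrt(2.0 * np.pi)    # expectation w.r.t. N(0, 1)

u0 = np.zeros(dim)                          # reference (mean) point
g0 = g(u0)

# alpha[i, k] = E[(g(.., U_i, ..) - g0) * He_k(U_i)] / k!  via Gauss-Hermite quadrature
alpha = np.zeros((dim, order + 1))
for i in range(dim):
    for node, w in zip(nodes, weights):
        u = u0.copy(); u[i] = node
        diff = g(u) - g0
        for k in range(1, order + 1):
            herm_k = hermeval(node, [0.0]*k + [1.0])   # He_k(node)
            alpha[i, k] += w * diff * herm_k / factorial(k)

def g_pdd(u):
    """Evaluate the univariate PDD surrogate at a point u."""
    val = g0
    for i in range(dim):
        for k in range(1, order + 1):
            val += alpha[i, k] * hermeval(u[i], [0.0]*k + [1.0])
    return val

u_test = np.array([0.5, -1.0, 1.5])
print("true     :", g(u_test))
print("surrogate:", g_pdd(u_test))
```

Reliability quantities can then be estimated by sampling the cheap surrogate `g_pdd` instead of the original model.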

Findings

Several examples are investigated to illustrate the accuracy and efficiency of the proposed method compared with the other methods in reliability analysis.

Originality/value

The number of unknown coefficients is significantly reduced, and the computational burden for reliability analysis is eased accordingly. The coefficient evaluation for the multivariate component function is decoupled from the order judgment of the variable. The proposed method achieves a good trade-off between efficiency and accuracy for reliability analysis.

Details

Engineering Computations, vol. 39 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 May 1993

GEORGE MEJAK

Abstract

A triangular composite element with 15 degrees of freedom is introduced. It is shown that, after an appropriate modification of the Zlámal coordinate transformation, this element can be employed as a reference element for a curvilinear triangular element of class C with nine degrees of freedom. A complete list of the basis functions, together with a procedure for their automatic generation, is included.

Details

Engineering Computations, vol. 10 no. 5
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 14 October 2020

Binesh Thankappan

Abstract

Purpose

This paper aims to present a special transformation that is applied to univariable polynomials of an arbitrary order, resulting in the generation of the proposed offset eliminated polynomial. This transform-based approach is used in the analysis and synthesis of temporal arc functions, which are time domain polynomial functions possessing two or more values simultaneously. Using the proposed transform, the submerged values of temporal arcs can also be extracted in measurements.

Design/methodology/approach

The methodology involves a two-step mathematical procedure in which the proposed transform of the weighted modified derivative of the polynomial is generated, followed by multiplication with a linear or ramp function. The transform introduces a stretching in the temporal or spatial domain depending on the type of variable under consideration, resulting in modifications for parameters such as time derivative and relative velocity.

Findings

Detailed analysis of various parameters in this modified time domain is performed and results are presented. Additionally, using the proposed methodology, the submerged value of any temporal arc function can also be extracted in measurements, thereby unraveling the temporal arc.

Practical implications

A typical implementation study with results is also presented for an operational amplifier-based temporal arc-producing square rooting circuit for the extraction of the submerged value of the function.

Originality/value

The proposed transform-based approach has major applications in extracting the values of temporal arc functions that are submerged in conventional experimental measurements, thereby providing a novel method in unraveling that class of special functions.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 39 no. 6
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers/practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing state of affairs significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, the research efforts on UQ are mostly concentrated in two largely separate research fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on their applicability in different types of uncertainty propagation problems.
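
For readers unfamiliar with the methods listed, the sketch below shows one of the simplest of them, FORM with the classical HL-RF iteration, on a standard parabolic limit state in standard normal space. It is a generic textbook illustration, not one of the paper's test cases.

```python
import numpy as np
from scipy.stats import norm

def g(u):
    """Classical parabolic limit state in standard normal space; g <= 0 is failure."""
    return 0.1 * (u[0] - u[1])**2 - (u[0] + u[1]) / np.sqrt(2.0) + 2.5

def grad_g(u, h=1e-6):
    """Forward-difference gradient of the limit state function."""
    g0 = g(u)
    return np.array([(g(u + h * e) - g0) / h for e in np.eye(len(u))])

u = np.zeros(2)                              # start the search at the mean point
for _ in range(50):
    val, grad = g(u), grad_g(u)
    u_new = (grad @ u - val) * grad / (grad @ grad)   # HL-RF update
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                     # reliability index = distance to origin
print("design point:", u)
print("beta =", beta, " Pf(FORM) =", norm.cdf(-beta))
```

For this particular limit state the iteration converges to β = 2.5, so FORM reports Pf = Φ(−2.5) ≈ 6.2 × 10⁻³; SORM or simulation methods would then correct for the curvature of the limit state.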

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 August 2019

Djamel Boutagouga

Abstract

Purpose

This paper aims to describe the formulation of a displacement-based triangular membrane finite element with true drilling rotational degree of freedom (DOF).

Design/methodology/approach

The presented formulation incorporates the true drilling rotation provided by continuum mechanics into the displacement field by way of polynomial interpolation. Unlike the linked interpolation, which uses a geometric transformation between displacements and vertex rotations, in this work the interpolation of the displacement field in terms of nodal drilling rotations is obtained following an unusual approach that does not imply any presumed geometric transformation.

Findings

A new relationship linking the mid-side normal displacement to the corner-node drilling rotations is derived. The resulting new element with true drilling rotation is compatible and does not include any problem-dependent parameter that may influence the results. The spurious zero-energy mode is stabilized in a careful way that preserves the true drilling rotational degrees of freedom (DOFs).

Originality/value

Several works dealing with membrane elements with vertex rotational DOFs and improved convergence rates have been published. However, owing to the need to incorporate rotations in finite element meshes involving solids, shells and beam elements, finite elements with true drilling rotational DOFs are more appreciated.

Article
Publication date: 1 February 1992

Maqsood A. CHAUDHRY

Abstract

An extension of the Schwarz‐Christoffel transformation is described to formally map polygons which contain curved boundaries. The curved boundaries are divided into small ‘curved elements’ and each element is approximated by a second-degree polynomial (higher-degree polynomials can also be used). The iterative algorithm for evaluating the unknown constants of the basic S‐C transformation, described in a companion paper, is applied to the extended S‐C transformation to compute its unknown constants, including the coefficients of the polynomials. Excellent results are achieved as far as accuracy and convergence are concerned. Examples, including a practical application, are provided. The mapping of curved polygons is important because curved polygons provide a better model of a physical device.
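
For context, the basic (straight-sided) Schwarz‐Christoffel map that the extension builds on can be evaluated numerically as sketched below; the curved-element modification itself is not reproduced here. The example maps the unit disk to a square using the textbook prevertices, with plain Gauss-Legendre quadrature along the straight segment from 0 to an interior point.

```python
import numpy as np

# prevertices on the unit circle and interior angles alpha_k * pi (square: pi/2)
prevertices = np.exp(1j * (np.pi/4 + np.pi/2 * np.arange(4)))
alphas = np.full(4, 0.5)

t_nodes, t_weights = np.polynomial.legendre.leggauss(64)
t_nodes = 0.5 * (t_nodes + 1.0)               # rescale nodes to [0, 1]
t_weights = 0.5 * t_weights

def sc_map(w, A=0.0, C=1.0):
    """Evaluate f(w) = A + C * integral_0^w prod_k (1 - z/w_k)^(alpha_k - 1) dz
    along the straight segment from 0 to w (valid for |w| < 1, interior points)."""
    z = t_nodes * w                            # quadrature points on the segment
    integrand = np.prod((1.0 - z[:, None] / prevertices) ** (alphas - 1.0), axis=1)
    return A + C * w * np.sum(t_weights * integrand)

# images of a few interior points of the disk
for w in (0.0, 0.5, 0.5j, 0.3 + 0.3j):
    print("w =", w, " ->  f(w) =", sc_map(w))
```

The constants A and C fix the translation, rotation and scale of the image polygon; in the basic algorithm the prevertices themselves are the unknowns determined iteratively, which is the part handled by the companion paper's procedure.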

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 11 no. 2
Type: Research Article
ISSN: 0332-1649
