Search results

1 – 10 of 260
Article
Publication date: 20 May 2019

Yunfei Zu, Wenliang Fan, Jingyao Zhang, Zhengling Li and Makoto Ohsaki

Converting correlated random variables into independent variables, especially into independent standard normal variables, is a common technique for estimating the…

Abstract

Purpose

Converting correlated random variables into independent variables, especially into independent standard normal variables, is a common technique for estimating the statistical moments of the response and evaluating the reliability of a random system, and calculating the equivalent correlation coefficient is an important component of this conversion. The purpose of this paper is to investigate an accurate, efficient and easy-to-implement estimation method for the equivalent correlation coefficient of various incomplete probability systems.

Design/methodology/approach

First, an approach based on Mehler's formula for evaluating the equivalent correlation coefficient is introduced; then, by combining it with polynomial normal transformations, this approach is extended to be valid for various incomplete probability systems and is named the direct method. Next, by introducing convenient linear reference variables for eight frequently used random variables and an approximation of the Rosenblatt transformation, a further improved implementation that requires no iteration is developed, named the simplified method. Finally, several examples are investigated to verify the characteristics of the proposed methods.
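
For context, Mehler's formula expands the correlated bivariate standard normal density in Hermite polynomials; combined with a polynomial normal transformation of each variable, it ties the target correlation to the equivalent correlation in standard normal space. A sketch of this standard relation (textbook form, not an expression quoted from the paper):

\varphi_2(z_1, z_2; \rho_0) = \varphi(z_1)\,\varphi(z_2) \sum_{n=0}^{\infty} \frac{\rho_0^{\,n}}{n!}\, \mathrm{He}_n(z_1)\, \mathrm{He}_n(z_2),

and, if X_i = \sum_k a_{ik}\, \mathrm{He}_k(Z_i) is the polynomial normal transformation of each input, then

\operatorname{Cov}(X_1, X_2) = \sum_{k \ge 1} k!\, a_{1k}\, a_{2k}\, \rho_0^{\,k},

so the equivalent correlation \rho_0 is found by solving a low-order polynomial equation for each pair of correlated inputs.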

Findings

The results of the examples in this paper show that both proposed methods are highly accurate; by comparison, the simplified method is the more efficient and convenient of the two.

Originality/value

Based on Mehler's formula, two practical implementations for evaluating the equivalent correlation coefficient are proposed; they are accurate, efficient, easy to implement and valid for various incomplete probability systems.

Article
Publication date: 7 August 2017

Wenliang Fan, Pengchao Yang, Yule Wang, Alfredo H.-S. Ang and Zhengliang Li

The purpose of this paper is to find an accurate, efficient and easy-to-implement point estimate method (PEM) for the statistical moments of random systems.

Abstract

Purpose

The purpose of this paper is to find an accurate, efficient and easy-to-implement point estimate method (PEM) for the statistical moments of random systems.

Design/methodology/approach

First, through theoretical and numerical analysis, approximate reference variables for nine frequently used types of random variables are obtained; then, by combining these with the dimension-reduction method (DRM), a new method consisting of four sub-methods is proposed; finally, several examples are investigated to verify the characteristics of the proposed method.
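
To make the role of the DRM concrete, the following minimal Python sketch estimates the mean and variance of a response of independent normal inputs with the univariate dimension-reduction approximation; it uses plain Gauss-Hermite nodes rather than the optimized reference variables proposed in the paper, and the example function is purely illustrative.

import numpy as np

def drm_moments(g, mu, sigma, n_nodes=5):
    """Mean and variance of Y = g(X) for independent normal X_i, using the
    univariate dimension-reduction approximation
        g(X) ~= sum_i g(mu_1, ..., X_i, ..., mu_n) - (n - 1) * g(mu),
    with each one-dimensional expectation evaluated by Gauss-Hermite quadrature."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n = mu.size
    t, w = np.polynomial.hermite.hermgauss(n_nodes)  # physicists' nodes and weights
    w = w / np.sqrt(np.pi)                           # normalize for a standard normal weight
    g0 = g(mu)

    mean, var = -(n - 1) * g0, 0.0
    for i in range(n):
        x = np.tile(mu, (n_nodes, 1))
        x[:, i] = mu[i] + np.sqrt(2.0) * sigma[i] * t  # map nodes to N(mu_i, sigma_i^2)
        yi = np.array([g(row) for row in x])
        e1 = np.sum(w * yi)                            # E[g(mu_1, ..., X_i, ..., mu_n)]
        e2 = np.sum(w * yi ** 2)
        mean += e1
        var += e2 - e1 ** 2     # univariate terms are independent, so variances add
    return mean, var

# Illustrative example (not from the paper): Y = X1^2 + 2*X2
m, v = drm_moments(lambda x: x[0] ** 2 + 2.0 * x[1], mu=[1.0, 0.0], sigma=[0.2, 0.5])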

Findings

Two types of reference variables for the nine frequently used types of variables are proposed, and four sub-methods for estimating the moments of responses are presented by combining them with the univariate and bivariate DRM.

Research limitations/implications

In this paper, the number of nodes of one-dimensional integrals is determined subjectively and empirically; therefore, determining the number of nodes rationally is still a challenge.

Originality/value

Through a linear transformation, the optimal reference variables of the random variables are presented, and a PEM based on this linear transformation is proposed that is efficient and easy to implement. By a numerical method, quasi-optimal reference variables are obtained, forming the basis of a second proposed PEM that is likewise efficient and easy to implement.

Article
Publication date: 18 March 2021

Jinsheng Wang, Muhannad Aldosary, Song Cen and Chenfeng Li

Normal transformation is often required in structural reliability analysis to convert the non-normal random variables into independent standard normal variables. The existing…

Abstract

Purpose

Normal transformation is often required in structural reliability analysis to convert non-normal random variables into independent standard normal variables. The existing normal transformation techniques, for example, the Rosenblatt transformation and the Nataf transformation, usually require the joint probability density function (PDF) and/or marginal PDFs of the non-normal random variables. In practical problems, however, the joint PDF and marginal PDFs are often unknown owing to the lack of data, while the statistical information is much more easily expressed in terms of statistical moments and correlation coefficients. This study aims to address this issue by presenting an alternative normal transformation method that does not require PDFs of the input random variables.

Design/methodology/approach

The new approach, namely, the Hermite polynomial normal transformation, expresses the normal transformation function in terms of Hermite polynomials and works with both uncorrelated and correlated random variables. Its application in structural reliability analysis using different methods is thoroughly investigated via a number of carefully designed comparison studies.
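
As an illustration of the general idea behind such transformations, the sketch below fits a cubic (third-order Hermite-type) transformation of a standard normal variable by numerically matching the first four moments; the coefficients are found with a root solver rather than with the closed-form expressions derived in the paper, and all target values are invented for the example.

import numpy as np
from scipy.optimize import fsolve

def fit_cubic_normal_transform(mean, std, skew, kurt, n_nodes=40):
    """Find c so that X = c0 + c1*Z + c2*Z^2 + c3*Z^3, with Z standard normal,
    matches the target mean, standard deviation, skewness and kurtosis.
    Moments of the cubic are evaluated by Gauss-Hermite quadrature and the four
    moment-matching equations are solved numerically (illustrative approach,
    not the paper's closed-form coefficients)."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * t
    w = w / np.sqrt(np.pi)

    def residuals(c):
        x = c[0] + c[1] * z + c[2] * z ** 2 + c[3] * z ** 3
        m = np.sum(w * x)
        s = np.sqrt(np.sum(w * (x - m) ** 2))
        g1 = np.sum(w * (x - m) ** 3) / s ** 3
        g2 = np.sum(w * (x - m) ** 4) / s ** 4
        return [m - mean, s - std, g1 - skew, g2 - kurt]

    return fsolve(residuals, x0=[mean, std, 0.0, 0.0])  # start from the Gaussian case

# Hypothetical target moments for a mildly skewed, heavy-tailed variable
c = fit_cubic_normal_transform(mean=10.0, std=2.0, skew=0.5, kurt=3.5)
u = np.random.standard_normal(10_000)
x_samples = c[0] + c[1] * u + c[2] * u ** 2 + c[3] * u ** 3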

Findings

Comprehensive comparisons are conducted to examine the performance of the proposed Hermite polynomial normal transformation scheme. The results show that the presented approach has comparable accuracy to previous methods and can be obtained in closed form. Moreover, the new scheme requires only the first four statistical moments and/or the correlation coefficients between random variables, which greatly widens the applicability of normal transformations in practical problems.

Originality/value

This study interprets the classical polynomial normal transformation method in terms of Hermite polynomials, namely, the Hermite polynomial normal transformation, to convert uncorrelated/correlated random variables into standard normal random variables. The new scheme requires only the first four statistical moments to operate, making it particularly suitable for problems that are constrained by limited data. Moreover, the extension to correlated cases is easily achieved by introducing the Hermite polynomials. Compared with existing methods, the new scheme is cheap to compute and delivers comparable accuracy.

Details

Engineering Computations, vol. 38 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 5 October 2015

Xiaoke Li, Haobo Qiu, Zhenzhong Chen, Liang Gao and Xinyu Shao

Kriging model has been widely adopted to reduce the high computational costs of simulations in Reliability-based design optimization (RBDO). To construct the Kriging model…

Abstract

Purpose

The Kriging model has been widely adopted to reduce the high computational cost of simulations in reliability-based design optimization (RBDO). To construct the Kriging model accurately and efficiently in the region of significance, a local sampling method with variable radius (LSVR) is proposed. The paper aims to discuss these issues.

Design/methodology/approach

In LSVR, the sequential sampling points are mainly selected within the local region around the current design point. The size of the local region is adaptively defined according to the target reliability and the nonlinearity of the probabilistic constraint. Every probabilistic constraint has its own local region instead of all constraints sharing one local region. In the local sampling region, the points located on the constraint boundary and the points with high uncertainty are considered simultaneously.

Findings

The computational capability of the proposed method is demonstrated using two mathematical problems, a reducer design and a box girder design of a super heavy machine tool. The comparison results show that the proposed method is very efficient and accurate.

Originality/value

The main contribution of this paper is a new criterion for computing the local sampling region for Kriging. Its originality lies in using the expected feasibility function (EFF) criterion and the shortest distance to the existing sample points, instead of other types of sequential sampling criteria, to deal with the low-efficiency problem.
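
For reference, the EFF criterion in its commonly cited form (Bichon et al.'s expected feasibility function; a generic sketch, not necessarily the exact variant used in this paper) can be evaluated from the Kriging prediction mean and standard deviation at a candidate point as follows.

import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, z_bar=0.0):
    """Expected feasibility function (EFF) in its commonly cited form, for a
    Kriging prediction with mean mu and standard deviation sigma at a candidate
    point. Large values flag points that are close to the limit state z_bar
    and/or carry large prediction uncertainty; epsilon = 2*sigma is the usual
    tolerance band."""
    eps = 2.0 * sigma
    t0 = (z_bar - mu) / sigma
    t1 = (z_bar - eps - mu) / sigma
    t2 = (z_bar + eps - mu) / sigma
    return ((mu - z_bar) * (2.0 * norm.cdf(t0) - norm.cdf(t1) - norm.cdf(t2))
            - sigma * (2.0 * norm.pdf(t0) - norm.pdf(t1) - norm.pdf(t2))
            + eps * (norm.cdf(t2) - norm.cdf(t1)))

# Candidate whose prediction sits near the limit state with high uncertainty
print(expected_feasibility(mu=0.1, sigma=0.5))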

Details

Engineering Computations, vol. 32 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 11 October 2023

Xiongming Lai, Yuxin Chen, Yong Zhang and Cheng Wang

The paper proposes a fast procedure for solving reliability-based robust design optimization (RBRDO) by modifying the RBRDO formulation and transforming it into a series of…

Abstract

Purpose

The paper proposes a fast procedure for solving reliability-based robust design optimization (RBRDO) by modifying the RBRDO formulation and transforming it into a series of RBRDO subproblems. For each subproblem, the objective function, constraint functions and reliability index are approximated using Taylor series expansion; their approximate forms depend on the deterministic design vector rather than the random vector, so the uncertainty estimation in the inner loop of the RBRDO can be avoided. In this way, the number of performance function evaluations is greatly reduced. Lastly, the trust region method is used to manage the sequential RBRDO subproblems and ensure convergence.

Design/methodology/approach

RBRDO is a nested optimization, in which the outer loop updates the design vector and the inner loop estimates the uncertainties, so solving it requires a large number of performance function evaluations. Aiming at this issue, the paper proposes a fast integrated procedure that reduces the number of performance function evaluations. First, it transforms the original RBRDO problem into a series of RBRDO subproblems. In each subproblem, the objective function, the constraint functions and the reliability index are approximated using simple explicit functions that depend solely on the deterministic design vector rather than the random vector. In this way, the need for extensive sampling simulation in the inner loop is greatly reduced, and the number of performance function evaluations drops substantially, leading to a substantial reduction in computational cost. The trust region method is then employed to handle the sequential RBRDO subproblems, ensuring convergence to the optimal solutions. Finally, an engineering test and an application are presented to illustrate the effectiveness and efficiency of the proposed methods.
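
In its simplest first-order form (a generic sketch of the device, not the paper's exact expressions), the Taylor-series approximation of the statistics of a performance function g(d, X) about the mean input reads

\mu_g(\mathbf{d}) \approx g(\mathbf{d}, \boldsymbol{\mu}_X), \qquad \sigma_g^2(\mathbf{d}) \approx \sum_i \left( \frac{\partial g}{\partial X_i} \bigg|_{\boldsymbol{\mu}_X} \right)^{2} \sigma_{X_i}^2, \qquad \beta(\mathbf{d}) \approx \frac{\mu_g(\mathbf{d})}{\sigma_g(\mathbf{d})},

so that, within each trust-region subproblem, the mean, standard deviation and reliability index become explicit functions of the deterministic design vector d only.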

Findings

The proposed fast procedure for solving the RBRDO greatly reduces the number of performance function evaluations, and the computational cost is therefore greatly reduced, which makes the method suitable for engineering applications.

Originality/value

The standard deviation of the original RBRDO objective function is replaced by the mean and the reliability index of that objective function, which are further approximated using Taylor series expansion, so that their approximate forms depend on the deterministic design vector rather than the random vector. Moreover, the constraint functions are also approximated using Taylor series expansion. In this way, the uncertainty estimation of the performance functions (i.e. the mean of the objective function and the constraint functions) and of the reliability index of the objective function is avoided within the inner loop of the RBRDO.

Details

International Journal of Structural Integrity, vol. 14 no. 6
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 22 March 2022

Zhanpeng Shen, Chaoping Zang, Xueqian Chen, Shaoquan Hu and Xin-en Liu

For fast calculation of complex structure in engineering, correlations among input variables are often ignored in uncertainty propagation, even though the effect of ignoring these…

Abstract

Purpose

For fast calculation of complex structures in engineering, correlations among input variables are often ignored in uncertainty propagation, even though the effect of ignoring these correlations on the output uncertainty is unclear. This paper aims to quantify the input uncertainty and estimate the correlations among the inputs according to the collected observed data rather than questionable assumptions. Moreover, the small size of the experimental data set should also be considered, as it is a common engineering problem.

Design/methodology/approach

In this paper, a novel method combining the p-box with a copula function for both uncertainty quantification and correlation estimation is explored. The copula function is used to estimate correlations among uncertain inputs from the observed data. The p-box method is employed to quantify the input uncertainty as well as the epistemic uncertainty associated with the limited amount of observed data. A nested Monte Carlo sampling technique is adopted to ensure that the propagation is always feasible. In addition, a Kriging model is built to reduce the computational cost of uncertainty propagation.
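
A minimal double-loop sketch of this kind of propagation is given below, assuming for illustration that the epistemic uncertainty is an interval on each marginal mean, the marginals are normal and the dependence is a Gaussian copula; the performance function, intervals and correlation value are invented for the example, and a Kriging surrogate is omitted for brevity.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def g(x1, x2):
    # Placeholder performance function (invented for illustration)
    return 2.5 - x1 - 0.5 * x2

n_outer, n_inner, rho = 50, 2000, 0.6   # illustrative sample sizes and copula correlation
chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

pf_values = []
for _ in range(n_outer):
    # Outer (epistemic) loop: sample distribution means from their p-box intervals
    mu1 = rng.uniform(0.9, 1.1)
    mu2 = rng.uniform(1.8, 2.2)
    # Inner (aleatory) loop: correlated sampling through a Gaussian copula
    z = rng.standard_normal((n_inner, 2)) @ chol.T
    u = norm.cdf(z)                               # correlated uniforms
    x1 = norm.ppf(u[:, 0], loc=mu1, scale=0.2)    # normal marginals here; any marginal works
    x2 = norm.ppf(u[:, 1], loc=mu2, scale=0.4)
    pf_values.append(np.mean(g(x1, x2) < 0.0))

print("failure probability band:", min(pf_values), max(pf_values))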

Findings

To illustrate the application of this method, an engineering example of structural reliability assessment is performed. The results indicate that whether or not the correlation among input variables is quantified may significantly affect the output uncertainty. Furthermore, the separation of aleatory and epistemic uncertainties in this approach provides an additional advantage for risk management.

Originality/value

The proposed method takes advantage of the p-box and the copula function to deal with correlations and the limited amount of observed data, which are two important issues of uncertainty quantification in engineering. Thus, it is practical and able to predict the response uncertainty or system state accurately.

Details

Engineering Computations, vol. 39 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 November 2023

Yingguang Wang

The purpose of this paper is to exploit a new and robust method to forecast the long-term extreme dynamic responses for wave energy converters (WECs).

Abstract

Purpose

The purpose of this paper is to exploit a new and robust method to forecast the long-term extreme dynamic responses for wave energy converters (WECs).

Design/methodology/approach

A new adaptive binned kernel density estimation (KDE) methodology is first proposed in this paper.
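
For orientation only, one common way to make a binned KDE locally adaptive is to widen the kernels in low-density (tail) regions using Abramson-style bandwidths computed from a pilot estimate; the sketch below follows that generic recipe and is not the specific adaptive binned KDE scheme proposed in the paper.

import numpy as np

def adaptive_binned_kde(samples, grid, n_bins=100, h0=None):
    """Gaussian KDE evaluated on `grid`, built from binned data with
    Abramson-style locally adaptive bandwidths: bins in low-density regions
    (e.g. the tail) receive wider kernels. Generic recipe for illustration."""
    counts, edges = np.histogram(samples, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    centers, counts = centers[keep], counts[keep]
    weights = counts / counts.sum()

    if h0 is None:                                   # Silverman's rule for the pilot bandwidth
        h0 = 1.06 * samples.std() * samples.size ** (-0.2)

    # Pilot (fixed-bandwidth) density at the bin centers
    u = (centers[:, None] - centers[None, :]) / h0
    pilot = (weights[None, :] * np.exp(-0.5 * u ** 2)).sum(axis=1) / (h0 * np.sqrt(2.0 * np.pi))

    # Abramson local bandwidths: h_i = h0 * sqrt(geometric_mean(pilot) / pilot_i)
    gmean = np.exp(np.mean(np.log(pilot)))
    h = h0 * np.sqrt(gmean / pilot)

    v = (grid[:, None] - centers[None, :]) / h[None, :]
    return (weights[None, :] * np.exp(-0.5 * v ** 2) / (h[None, :] * np.sqrt(2.0 * np.pi))).sum(axis=1)

# Synthetic long-tailed sample (illustrative, not NDBC data)
x = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.6, size=5000)
density = adaptive_binned_kde(x, grid=np.linspace(0.0, 8.0, 400))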

Findings

By examining the calculation results, the authors found that in the tail region the proposed new adaptive binned KDE distribution curve becomes very smooth and fits the histogram of the measured ocean wave dataset at National Data Buoy Center (NDBC) station 46059 quite well. Careful study of the calculation results also reveals that the 50-year extreme power-take-off heaving force forecast from the environmental contour derived using the new method is 3,572,600 N, which is much larger than the value of 2,709,100 N forecast via the Rosenblatt-inverse second-order reliability method (ISORM) contour method.

Research limitations/implications

The proposed method overcomes the disadvantages of all the existing nonparametric and parametric methods for predicting the tail region probability density values of the sea state parameters.

Originality/value

It is concluded that the proposed new adaptive binned KDE method is robust and forecasts the 50-year extreme dynamic responses of WECs well.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in…

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing status significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, UQ research efforts are mostly focused in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on its applicability to different types of uncertainty propagation problems.
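
As a small, self-contained illustration of one of the reviewed families of methods, the sketch below implements FORM with the standard HL-RF iteration in standard normal space; the limit-state function is an arbitrary linear example, not one of the paper's test cases.

import numpy as np
from scipy.stats import norm

def form_hlrf(g, n_dim, tol=1e-6, max_iter=100):
    """First-order reliability method via the Hasofer-Lind/Rackwitz-Fiessler
    iteration. `g` is the limit-state function in standard normal space u
    (failure when g(u) < 0); returns the reliability index beta and P_f."""
    u = np.zeros(n_dim)
    step = 1e-6
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + step * e) - gu) / step for e in np.eye(n_dim)])  # forward differences
        u_new = (grad @ u - gu) / (grad @ grad) * grad                           # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)

# Illustrative linear limit state: g(u) = 3 - u1 - u2, so beta = 3 / sqrt(2)
beta, pf = form_hlrf(lambda u: 3.0 - u[0] - u[1], n_dim=2)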

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 31 July 2009

Yuan Mao Huang and Ching‐Shin Shiau

The purpose of this paper is to provide an optimal tolerance allocation model for assemblies with consideration of the manufacturing cost, the quality loss, the design reliability…

Abstract

Purpose

The purpose of this paper is to provide an optimal tolerance allocation model for assemblies that considers the manufacturing cost, the quality loss and the design reliability index with various distributions, to enhance existing models. Results of two case studies are presented.

Design/methodology/approach

The paper develops a model that considers the manufacturing cost, Taguchi's asymmetric quadratic quality loss and the design reliability index for the optimal tolerance allocation of assemblies. Dimensional variables with normal distributions are initially used for testing and compared with data from prior research. Then, dimensional variables with lognormal distributions, mean shift and correlation are applied and investigated.
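
For reference, the asymmetric quadratic quality loss in its usual Taguchi form (standard definition, not an expression taken from the paper) penalizes deviations from the target dimension m with different coefficients on each side:

L(y) = k_1 (y - m)^2 \ \text{for } y < m, \qquad L(y) = k_2 (y - m)^2 \ \text{for } y \ge m,

and its expectation over the dimension distribution enters the objective alongside the manufacturing cost and the reliability requirement.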

Findings

The results obtained with a lognormal distribution and with a normal distribution of the dimension are similar, but the tolerance with a lognormal distribution is slightly smaller than that with a normal distribution. The reliability with the lognormal distribution obtained by Monte Carlo simulation is higher than that with a normal distribution. This paper shows that the effects of the mean shift, the correlation coefficient and the replacement cost on the cost are significant, and designers should pay attention to them during tolerance optimization. Optimum tolerances for the components of a compressor are recommended.

Research limitations/implications

The model is limited to component dimensions with normal and lognormal distributions. It could be enhanced with more data on dimension distributions and on the cost of assembly components.

Practical implications

Two case studies are presented: one is an assembly of two pieces, and the other is a compressor with many components.

Originality/value

This model provides an optimal tolerance allocation method for assemblies that achieves the lowest manufacturing cost and minimum quality loss while meeting the required reliability index, for both normal and lognormal distributions.

Details

Assembly Automation, vol. 29 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 5 October 2012

I. Doltsinis

The purpose of this paper is to expose computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent…

Abstract

Purpose

The purpose of this paper is to expose computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent parameters.

Design/methodology/approach

Two approaches are distinguished, both relying on solvers from deterministic algorithms. Probabilistic analysis refers to the approximation of the response by a Taylor series expansion about the mean input. Alternatively, stochastic simulation involves random sampling of the input and statistical evaluation of the output.
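
The two approaches can be put side by side on a toy problem, as in the sketch below: a Taylor-series (perturbation) estimate of the output mean and variance about the mean input versus plain Monte Carlo sampling; the response function and input statistics are invented for illustration.

import numpy as np

rng = np.random.default_rng(42)

def g(x):
    # Illustrative nonlinear response (invented for this sketch)
    return x[0] ** 2 * np.exp(0.3 * x[1])

mu = np.array([2.0, 0.5])       # illustrative input means
sigma = np.array([0.2, 0.1])    # illustrative input standard deviations

# Taylor-series (perturbation) estimate about the mean input
h = 1e-4
grad = np.array([(g(mu + h * e) - g(mu - h * e)) / (2 * h) for e in np.eye(2)])
hess_diag = np.array([(g(mu + h * e) - 2 * g(mu) + g(mu - h * e)) / h ** 2 for e in np.eye(2)])
mean_taylor = g(mu) + 0.5 * np.sum(hess_diag * sigma ** 2)   # second-order mean correction
var_taylor = np.sum((grad * sigma) ** 2)                     # first-order variance

# Monte Carlo sampling of the same response
x = mu + sigma * rng.standard_normal((100_000, 2))
y = g(x.T)
print("Taylor:", mean_taylor, var_taylor)
print("Monte Carlo:", y.mean(), y.var())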

Findings

Beyond the characterization of random response, methods of reliability assessment are discussed. Concepts of design improvement are presented. Optimization for robustness diminishes the sensitivity of the system to fluctuating parameters.

Practical implications

Deterministic algorithms available for the primary problem are utilized for stochastic analysis by statistical Monte Carlo sampling. The computational effort for the repeated solution of the primary problem depends on the variability of the system and is usually high. Alternatively, the analytic Taylor series expansion requires extension of the primary solver to the computation of derivatives of the response with respect to the random input. The method is restricted to the computation of output mean values and variances/covariances, with the effort determined by the number of random input variables. The results of the two methods are comparable within the domain of applicability.

Originality/value

The present account addresses the main issues related to the presence of randomness in engineering systems and processes. These comprise the analysis of stochastic systems, reliability, design improvement, optimization and robustness against randomness of the data. The analytical Taylor approach is contrasted with statistical Monte Carlo sampling throughout. In both cases, algorithms known from the primary, deterministic problem are the starting point of the stochastic treatment. The reader benefits from the comprehensive presentation of the matter in a concise manner.
