Search results

1–10 of over 16,000
Article
Publication date: 14 November 2008

Victor M. Pérez, John E. Renaud and Layne T. Watson

Abstract

Purpose

To reduce the computational complexity per step from O(n²) to O(n) for optimization based on quadratic surrogates, where n is the number of design variables.

Design/methodology/approach

Applying nonlinear optimization strategies directly to complex multidisciplinary systems can be prohibitively expensive when the complexity of the simulation codes is large. Increasingly, response surface approximations (RSAs), and specifically quadratic approximations, are being integrated with nonlinear optimizers in order to reduce the CPU time required for the optimization of complex multidisciplinary systems. For evaluation by the optimizer, RSAs provide a computationally inexpensive lower fidelity representation of the system performance. The curse of dimensionality is a major drawback in the implementation of these approximations as the amount of required data grows quadratically with the number n of design variables in the problem. In this paper a novel technique to reduce the magnitude of the sampling from O(n²) to O(n) is presented.

Findings

The technique uses prior information to approximate the eigenvectors of the Hessian matrix of the RSA and only requires the eigenvalues to be computed by response surface techniques. The technique is implemented in a sequential approximate optimization algorithm and applied to engineering problems of variable size and characteristics. Results demonstrate that a reduction in the data required per step from O(n²) to O(n) points can be accomplished without significantly compromising the performance of the optimization algorithm.

Originality/value

A reduction in the time (number of system analyses) required per step from O(n²) to O(n) is significant, even more so as n increases. The novelty lies in how only O(n) system analyses can be used to approximate a Hessian matrix whose estimation normally requires O(n²) system analyses.
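
The eigenvector-recycling idea can be illustrated with a minimal sketch (an assumption-laden toy, not the authors' implementation): suppose the eigenvector matrix V is carried over from a previous step, and only the n eigenvalues are re-estimated, one central difference per direction, so the cost is 2n + 1 function evaluations instead of O(n²):

```python
import numpy as np

def approx_hessian(f, x, V, h=1e-3):
    """Approximate the Hessian of f at x with O(n) evaluations.

    V is an assumed orthonormal matrix whose columns approximate the
    Hessian eigenvectors (e.g. recycled from an earlier step); only the
    eigenvalues are estimated here, via one central difference per
    eigenvector direction: lambda_i ~ v_i^T H v_i.
    """
    n = x.size
    f0 = f(x)
    lam = np.empty(n)
    for i in range(n):
        v = V[:, i]
        lam[i] = (f(x + h * v) - 2.0 * f0 + f(x - h * v)) / h**2
    return V @ np.diag(lam) @ V.T

# Quadratic test function whose exact Hessian is V diag(2, 6) V^T,
# so recycling the true eigenvectors recovers it exactly.
theta = 0.3
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H_true = V @ np.diag([2.0, 6.0]) @ V.T
f = lambda x: 0.5 * x @ H_true @ x
H_est = approx_hessian(f, np.array([1.0, -0.5]), V)
```

For a quadratic, the central differences recover each eigenvalue exactly, so the sketch reproduces the Hessian with only 5 evaluations for n = 2.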

Details

Engineering Computations, vol. 25 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 February 2003

Jayantha Pasdunkorale A. and Ian W. Turner

Abstract

An existing two‐dimensional finite volume technique is modified by introducing a correction term to increase the accuracy of the method to second order. It is well known that the accuracy of the finite volume method depends strongly on the order of the approximation of the flux term at the control volume (CV) faces. For highly orthotropic and anisotropic media, first‐order approximations produce inaccurate simulation results, which motivates the need for better estimates of the flux expression. In this article, a new approach to approximating the flux term at the CV face is presented. The discretisation involves a decomposition of the flux and an improved least squares approximation technique to calculate the derivatives of the dependent function on the CV faces, used for estimating both the cross‐diffusion term and a correction for the primary flux term. The advantage of this method is that any arbitrary unstructured mesh can be used without considering the shapes of the mesh elements. The numerical results matched well with the available exact solution for a representative transport equation in highly orthotropic media, and with benchmark solutions obtained on a fine mesh for anisotropic media. Previously proposed CV techniques are compared with the new method to highlight its accuracy for different unstructured meshes.
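
The flavour of such a least-squares face reconstruction can be sketched as follows (a simplified illustration, not the paper's scheme; the sample points, field, and tensor below are invented for the example): fit a linear model to neighbouring cell-centre values to obtain the gradient at a face, then evaluate the anisotropic flux, which automatically carries the cross-diffusion contribution:

```python
import numpy as np

def ls_gradient(points, values, xf):
    """Least-squares gradient of the field at face centroid xf, fitted
    from neighbouring cell-centre locations/values with the linear
    model u ~ u_f + g . (x - xf)."""
    d = points - xf
    A = np.hstack([np.ones((len(points), 1)), d])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef[1:]  # drop the intercept, keep the gradient

def face_flux(K, grad_u, normal):
    """Diffusive flux -n . (K grad u) through a face; for an
    anisotropic tensor K the off-diagonal (cross-diffusion) terms
    enter automatically."""
    return -normal @ (K @ grad_u)

# Linear field u = 2x + 3y sampled at five scattered "cell centres"
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                [1.0, 1.0], [0.5, -0.5]])
u = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
g = ls_gradient(pts, u, np.array([0.5, 0.5]))
K = np.array([[10.0, 1.0], [1.0, 0.1]])   # strongly orthotropic tensor
q = face_flux(K, g, np.array([1.0, 0.0]))
```

Because the least-squares model contains the exact linear field, the reconstructed gradient is (2, 3) and the flux through a face with normal (1, 0) is −(10·2 + 1·3) = −23, the off-diagonal entry of K contributing the cross-diffusion part.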

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 13 no. 1
Type: Research Article
ISSN: 0961-5539

Keywords

Book part
Publication date: 21 November 2014

Yong Bao, Aman Ullah and Ru Zhang

Abstract

An extensive literature in econometrics focuses on finding the exact and approximate first and second moments of the least-squares estimator in the stable first-order linear autoregressive model with normally distributed errors. Recently, Kiviet and Phillips (2005) developed approximate moments for the linear autoregressive model with a unit root and normally distributed errors. An objective of this paper is to analyze moments of the estimator in the first-order autoregressive model with a unit root and nonnormal errors. In particular, we develop new analytical approximations for the first two moments in terms of model parameters and the distribution parameters. Through Monte Carlo simulations, we find that our approximate formulae perform quite well across different distribution specifications in small samples. However, when the noise-to-signal ratio is large, bias distortion can be quite substantial and our approximations do not fare well.
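
The phenomenon being approximated can be reproduced with a small simulation (illustrative only; the paper's analytical moment formulae are not reproduced here, and the sample size, replication count, and error law are choices made for this sketch): the least-squares coefficient in a unit-root AR(1) is downward biased even with skewed, nonnormal errors such as a centred chi-square:

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 100, 2000          # sample size and Monte Carlo replications
bias = 0.0
for _ in range(R):
    # centred chi-square(1) errors: skewed and nonnormal, mean 0
    e = rng.chisquare(1, T) - 1.0
    y = np.cumsum(e)      # driftless random walk, so true rho = 1
    num = np.sum(y[:-1] * y[1:])
    den = np.sum(y[:-1] ** 2)
    bias += num / den - 1.0   # OLS rho-hat minus the true value
bias /= R
```

The average of ρ̂ − 1 comes out clearly negative, of order −2/T, consistent with the well-known left skew of the unit-root estimator's distribution.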

Details

Essays in Honor of Peter C. B. Phillips
Type: Book
ISBN: 978-1-78441-183-1

Keywords

Article
Publication date: 1 March 1999

M.J. García‐Ruíz and G.P. Steven

Abstract

A fixed grid (FG) representation of the finite element domain is used to solve elasticity problems, and this representation provides an integration between CAD and FEM systems. Some considerations about the FG approximation and stiffness matrix generation are presented. The stiffness matrix for an element in the mesh is obtained as a factor of a standard element stiffness matrix. A least squares local approximation of the stress field is used to calculate the stress at boundary elements. A problem with an analytical solution is used to measure the displacement and stress errors, and several meshes and several geometrical configurations of the problem are used to test the reliability of the fixed grid finite element method.
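
The element-scaling idea can be sketched minimally (an assumption-laden illustration: a scalar Laplace element stands in for the paper's elasticity element, and the linear fill-factor rule is the simplest possible choice): every element shares one precomputed standard matrix, and a boundary element contributes that matrix scaled by its solid area fraction.

```python
import numpy as np

# Standard stiffness matrix of a unit bilinear square element for the
# scalar Laplace operator (a stand-in for the paper's elasticity
# element); all fixed-grid elements reuse this single matrix.
K_STD = (1.0 / 6.0) * np.array([[ 4, -1, -2, -1],
                                [-1,  4, -1, -2],
                                [-2, -1,  4, -1],
                                [-1, -2, -1,  4]], dtype=float)

def fg_element_stiffness(fill):
    """Fixed-grid element matrix: interior elements use K_STD
    (fill = 1), boundary elements the fraction of the element inside
    the body (0 < fill < 1), exterior elements contribute nothing
    (fill = 0)."""
    return fill * K_STD

K_half = fg_element_stiffness(0.5)   # element half inside the body
```

Scaling preserves the properties assembly relies on: the matrix stays symmetric, and its rows still sum to zero (constant fields produce no internal forces).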

Details

Engineering Computations, vol. 16 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 10 May 2019

Rituraj Singh and Krishna Mohan Singh

Abstract

Purpose

The purpose of this paper is to assess the performance of the stabilised moving least squares (MLS) scheme in the meshless local Petrov–Galerkin (MLPG) method for heat conduction problems.

Design/methodology/approach

In the current work, the authors extend the stabilised MLS approach to the MLPG method for heat conduction problems. Its performance is compared with the MLPG method based on the standard MLS and on the local coordinate MLS. Patch tests of the MLS and modified MLS schemes are presented, along with one- and two-dimensional heat conduction examples solved with the MLPG method.

Findings

In the stabilised MLS, the condition number of the moment matrix is independent of the nodal spacing and is nearly constant over the global domain for all grid sizes. The shifted-polynomial-based MLS and stabilised MLS approaches are more robust than the standard MLS scheme in MLPG analyses of heat conduction problems.
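
The conditioning claim can be checked on a toy 1-D moment matrix (an illustrative sketch, not the paper's formulation; the quadratic basis, hat weight, and node layout are assumptions): with the shifted-and-scaled basis p((xᵢ − x)/h) the moment matrix is the same for any spacing h, whereas the raw monomial basis degrades rapidly as h shrinks.

```python
import numpy as np

def moment_matrix(nodes, x, h, shifted=True):
    """MLS moment matrix A(x) = sum_i w_i p(x_i) p(x_i)^T with a
    quadratic basis in 1D. shifted=True uses the shifted-and-scaled
    basis p((x_i - x)/h); shifted=False the raw monomials p(x_i)."""
    A = np.zeros((3, 3))
    for xi in nodes:
        s = (xi - x) / h if shifted else xi
        p = np.array([1.0, s, s * s])
        w = max(0.0, 1.0 - abs(xi - x) / (2.5 * h))  # hat weight, support 2.5h
        A += w * np.outer(p, p)
    return A

conds = {}
for h in (1e-1, 1e-3):
    nodes = 0.5 + h * np.arange(-2, 3)   # 5 local nodes, spacing h
    conds[h] = (np.linalg.cond(moment_matrix(nodes, 0.5, h, shifted=True)),
                np.linalg.cond(moment_matrix(nodes, 0.5, h, shifted=False)))
```

With the shifted basis the scaled coordinates are always (−2, −1, 0, 1, 2), so the condition number is identical for coarse and fine spacings; with raw monomials the basis vectors become nearly parallel as h → 0 and the condition number blows up.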

Originality/value

The paper presents an MLPG method based on the stabilised MLS scheme.

Details

Engineering Computations, vol. 36 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 February 2001

J.Y. Cho and S.N. Atluri

Abstract

The problems of shear flexible beams are analyzed by the MLPG method based on a locking‐free weak formulation. In order for the weak formulation to be locking‐free, the numerical characteristics of the variational functional for a shear flexible beam, in the thin beam limit, are discussed. Based on these discussions a locking‐free local symmetric weak form is derived by changing the set of two dependent variables in governing equations from that of transverse displacement and total rotation to the set of transverse displacement and transverse shear strain. For the interpolation of the chosen set of dependent variables (i.e. transverse displacement and transverse shear strain) in the locking‐free local symmetric weak form, the recently proposed generalized moving least squares (GMLS) interpolation scheme is utilized, in order to introduce the derivative of the transverse displacement as an additional nodal degree of freedom, independent of the nodal transverse displacement. Through numerical examples, convergence tests are performed. To identify the locking‐free nature of the proposed method, problems of shear flexible beams in the thick beam limit and in the thin beam limit are analyzed, and the numerical results are compared with analytical solutions. The potential of using the truly meshless local Petrov‐Galerkin (MLPG) method is established as a new paradigm in totally locking‐free computational analyses of shear flexible plates and shells.

Details

Engineering Computations, vol. 18 no. 1/2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 14 November 2008

B.N. Rao and Rajib Chowdhury

Abstract

Purpose

To develop a new computational tool for predicting failure probability of structural/mechanical systems subject to random loads, material properties, and geometry.

Design/methodology/approach

High dimensional model representation (HDMR) is a general set of quantitative model assessment and analysis tools for capturing the high‐dimensional relationships between sets of input and output model variables. It is a very efficient formulation of the system response if higher order variable correlations are weak and the response function is dominantly additive, allowing the physical model to be captured by the first few lower order terms. If the response function is dominantly multiplicative, however, all components of HDMR (2^N of them for N variables) must be used to obtain the best result, which makes the method very expensive in practice; in that case factorized HDMR (FHDMR), whose component functions are determined from the component functions of HDMR, can be used instead. This paper presents the formulation of the FHDMR approximation of a multivariate limit state/performance function that is dominantly multiplicative in nature. Given that conventional methods for reliability analysis are very computationally demanding when applied in conjunction with complex finite element models, this study aims to assess how accurately and efficiently HDMR/FHDMR‐based approximation techniques can capture complex model output uncertainty. As part of this effort, the efficacy of HDMR, which has recently been applied to reliability analysis, is also demonstrated. The response surface is constructed using a moving least squares interpolation formula by including the constant, first‐order and second‐order terms of HDMR and FHDMR. Once the response surface form is defined, the failure probability can be obtained by statistical simulation.
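
A minimal sketch of first-order cut-HDMR and a factorized (FHDMR-style) counterpart (the reference point, test functions, and the exact factorized form below are illustrative assumptions, not the paper's formulas): the additive expansion is exact for additive responses, the multiplicative form for multiplicative ones.

```python
import numpy as np

def cut_hdmr_terms(f, c, xs):
    """First-order cut-HDMR components of f relative to the reference
    (cut) point c: f0 = f(c), and f_i(x_i) = f(c with the i-th
    coordinate replaced by x_i) - f0."""
    f0 = f(c)
    terms = []
    for i, xi in enumerate(xs):
        z = c.copy()
        z[i] = xi
        terms.append(f(z) - f0)
    return f0, terms

def hdmr1(f, c, xs):
    """Additive first-order HDMR approximation f0 + sum_i f_i."""
    f0, t = cut_hdmr_terms(f, c, xs)
    return f0 + sum(t)

def fhdmr1(f, c, xs):
    """Factorized (multiplicative) counterpart: f0 * prod_i (f0+f_i)/f0.
    Requires f0 != 0."""
    f0, t = cut_hdmr_terms(f, c, xs)
    return f0 * np.prod([(f0 + ti) / f0 for ti in t])

c = np.array([1.0, 1.0, 1.0])            # cut point
xs = np.array([1.5, 0.7, 2.0])           # evaluation point
g_add = lambda x: 2 * x[0] + x[1] - 3 * x[2]   # purely additive response
g_mul = lambda x: x[0] * x[1] * x[2]           # purely multiplicative response
```

With only 1 + N model evaluations each, `hdmr1` reproduces `g_add` exactly and `fhdmr1` reproduces `g_mul` exactly; a dominantly multiplicative limit state is therefore far better served by the factorized form than by truncating the additive one.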

Findings

Results of five numerical examples involving structural/solid‐mechanics/geo‐technical engineering problems indicate that, for limit state/performance functions that are dominantly multiplicative in nature, the failure probability obtained using the FHDMR approximation agrees closely with the conventional Monte Carlo method while requiring fewer original model simulations.

Originality/value

This is the first time that FHDMR concepts are applied in the field of reliability and system safety. The present computational approach is valuable to the practical modeling and design community, where users often suffer from the curse of dimensionality.

Details

Engineering Computations, vol. 25 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 June 1995

G.F. Carey and Y. Shen

Abstract

A least‐squares finite element analysis of viscous fluid flow, together with a trajectory integration technique for tracers, is formulated and provides a mechanism for investigating mixing. Tracer integration is carried out using an improved Heun predictor‐corrector. Results from our supporting numerical studies on the CRAY and Connection Machine (CM) closely resemble the patterns of mixing observed in experiments. A “box‐counting” scheme and other measures to characterize the level of mixing are developed and investigated. This measure is utilized in numerical experiments to determine an optimal forcing frequency for mixing by periodic boundary motion in a rectangular enclosure. Some details concerning the numerical schemes and vector‐parallel implementation are also included.
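
A Heun predictor-corrector step for tracer advection can be sketched as follows (the generic Heun scheme, since the abstract does not specify the authors' "improved" variant; the rotational velocity field is a test case chosen for this sketch):

```python
import numpy as np

def heun_step(v, x, t, dt):
    """One Heun predictor-corrector step for a tracer at position x in
    the velocity field v(x, t): Euler predictor, then trapezoidal
    corrector averaging the slopes at both ends of the step."""
    xp = x + dt * v(x, t)                              # predictor
    return x + 0.5 * dt * (v(x, t) + v(xp, t + dt))    # corrector

# Rigid-body rotation: tracers should stay on circles about the origin
v = lambda x, t: np.array([-x[1], x[0]])
x = np.array([1.0, 0.0])
t, dt = 0.0, 1e-3
for _ in range(1000):          # integrate to t = 1
    x = heun_step(v, x, t, dt)
    t += dt
```

Because Heun is second order, after integrating the rotation to t = 1 the tracer sits at (cos 1, sin 1) to within O(dt²), and its radius stays within round-off-plus-O(dt²) of 1, which is the property that makes the scheme suitable for tracking mixing patterns over many periods.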

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 5 no. 6
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 28 March 2008

József Valyon and Gábor Horváth

Abstract

Purpose

The purpose of this paper is to present extended least squares support vector machines (LS‐SVM) where data selection methods are used to get sparse LS‐SVM solution, and to overview and compare the most important data selection approaches.

Design/methodology/approach

The selection methods are compared based on their theoretical background and using extensive simulations.

Findings

The paper shows that partial reduction is an efficient way of obtaining a reduced-complexity sparse LS‐SVM solution while still exploiting the full knowledge contained in the whole training data set. It also shows that the reduction technique based on the reduced row echelon form (RREF) of the kernel matrix is superior to the other data selection approaches.

Research limitations/implications

Data selection for getting a sparse LS‐SVM solution can be done in the different representations of the training data: in the input space, in the intermediate feature space, and in the kernel space. Selection in the kernel space can be obtained by finding an approximate basis of the kernel matrix.
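Kernel-space selection via the RREF can be sketched as follows (an illustrative Gaussian-elimination implementation; the paper's exact algorithm and tolerance handling may differ): the pivot columns of the kernel matrix's reduced row echelon form identify an approximate column basis, i.e. the training points kept as support vectors.

```python
import numpy as np

def rref_pivot_columns(K, tol=1e-8):
    """Indices of an approximate column basis of the kernel matrix K,
    i.e. the pivot columns of its reduced row echelon form, found by
    Gaussian elimination with partial pivoting. tol is the trade-off
    parameter: larger tol -> fewer selected columns (sparser model)."""
    A = K.astype(float).copy()
    m, n = A.shape
    pivots, r = [], 0
    for j in range(n):
        if r == m:
            break
        p = r + np.argmax(np.abs(A[r:, j]))   # partial pivot in column j
        if abs(A[p, j]) <= tol:
            continue                           # column ~dependent: skip it
        A[[r, p]] = A[[p, r]]                  # row swap
        A[r] /= A[r, j]                        # normalise pivot row
        for i in range(m):
            if i != r:
                A[i] -= A[i, j] * A[r]         # eliminate column j elsewhere
        pivots.append(j)
        r += 1
    return pivots

# RBF kernel on 1-D points where x2 duplicates x0, so column 2 of K is
# exactly dependent and should not be selected
X = np.array([0.0, 1.0, 0.0, 2.0])
K = np.exp(-(X[:, None] - X[None, :]) ** 2)
sel = rref_pivot_columns(K, tol=1e-6)
```

On this toy kernel the duplicated point is dropped and three independent columns remain, showing how the tolerance trades accuracy against the number of retained support vectors.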

Practical implications

The RREF‐based method is a data selection approach with a favorable property: there is a trade‐off tolerance parameter that can be used for balancing complexity and accuracy.

Originality/value

The paper gives contributions to the construction of high‐performance and moderate complexity LS‐SVMs.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 1 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Book part
Publication date: 23 June 2016

Bao Yong, Fan Yanqin, Su Liangjun and Zinde-Walsh Victoria

Abstract

This paper examines Aman Ullah’s contributions to robust inference, finite sample econometrics, nonparametrics and semiparametrics, and panel and spatial models. His early works on robust inference and finite sample theory were mostly motivated by his thesis advisor, Professor Anirudh Lal Nagar. They eventually led to his most original rethinking of many statistics and econometrics models that developed into the monograph Finite Sample Econometrics published in 2004. His desire to relax distributional and functional-form assumptions led him in the direction of nonparametric estimation, and he summarized his views in his most influential textbook Nonparametric Econometrics (with Adrian Pagan), published in 1999, which has influenced a whole generation of econometricians. His innovative contributions in the areas of seemingly unrelated regressions, parametric, semiparametric and nonparametric panel data models, and spatial models have also inspired a larger literature on nonparametric and semiparametric estimation and inference and spurred on research in robust estimation and inference in these and related areas.
