Search results

1 – 10 of 227
Article
Publication date: 20 August 2019

Xiaoxiao Liu and Ming Liu

Abstract

Purpose

Corrosion is a common damage mechanism in many engineering structures, such as marine structures, petroleum pipelines, aerospace components and nuclear reactors. However, the service performance of metal materials and structures gradually degrades over the service life owing to the growth of corrosion damage. The coupled effects of corrosion damage should therefore be considered in reliability analysis. The purpose of this paper is to develop a corrosion damage physical model and corresponding reliability analysis methods that consider the coupled effect of corrosion damage.

Design/methodology/approach

A physical failure model, considering the coupled effect of pitting growth, crack initiation and crack propagation, is presented in this paper. On this basis, corrosion reliability with respect to pitting damage can be investigated. The presented pitting damage physical model is formulated as time-variant performance limit state functions, which cover the pit-to-crack transition, crack growth and fracture failure mechanisms. The first-passage failure criterion is used to construct a corrosion reliability framework that incorporates the pitting damage model as service life increases.

Findings

Results demonstrate that the multiplicative dimensional reduction (MDR) method performs much better than the first-order reliability method (FORM) in both accuracy and efficiency. The proposed corrosion reliability method is applicable for dealing with damage failure models of structural pitting corrosion.

Originality/value

The MDR method is used to calculate the corrosion reliability index of a given structure with fewer function calls. Finally, an aeronautical metal material is used to demonstrate the efficiency and precision of the proposed corrosion reliability method when the failure physical model considering the coupled effects of mechanical stresses and corrosion environment is adopted.

Details

Anti-Corrosion Methods and Materials, vol. 66 no. 5
Type: Research Article
ISSN: 0003-5599

Keywords

Article
Publication date: 14 November 2008

B.N. Rao and Rajib Chowdhury

Abstract

Purpose

To develop a new computational tool for predicting failure probability of structural/mechanical systems subject to random loads, material properties, and geometry.

Design/methodology/approach

High dimensional model representation (HDMR) is a general set of quantitative model assessment and analysis tools for capturing the high‐dimensional relationships between sets of input and output model variables. It is a very efficient formulation of the system response if higher order variable correlations are weak and the response function is dominantly of additive nature, allowing the physical model to be captured by the first few lower order terms. But if the multiplicative nature of the response function is dominant, then all right‐hand‐side components of HDMR must be used to obtain the best result. However, if HDMR requires all 2^N components to reach the desired accuracy, the method becomes very expensive in practice; in that case, factorized HDMR (FHDMR) can be used, whose component functions are determined from the component functions of HDMR. This paper presents the formulation of an FHDMR approximation of a multivariate limit state/performance function that is dominantly of multiplicative nature. Because conventional methods for reliability analysis are very computationally demanding when applied in conjunction with complex finite element models, this study aims to assess how accurately and efficiently HDMR/FHDMR‐based approximation techniques can capture complex model output uncertainty. As part of this effort, the efficacy of HDMR, which has recently been applied to reliability analysis, is also demonstrated. A response surface is constructed using a moving least squares interpolation formula including the constant, first‐order and second‐order terms of HDMR and FHDMR. Once the response surface form is defined, the failure probability can be obtained by statistical simulation.
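The additive decomposition that the abstract describes can be illustrated with a minimal first-order cut-HDMR sketch, built around a reference (cut) point. The response function and all names here are illustrative, not the authors' model:

```python
import numpy as np

# First-order cut-HDMR: f(x) ~ f(c) + sum_i [f(c with x_i substituted) - f(c)],
# where c is a reference "cut" point. Illustrative analytic response function.
def response(x):
    return x[0] ** 2 + 2.0 * x[1] + 0.5 * x[0] * x[1]

def hdmr_first_order(f, c):
    f0 = f(c)  # zeroth-order term, evaluated once at the cut point
    def approx(x):
        total = f0
        for i in range(len(c)):
            xi = c.copy()
            xi[i] = x[i]            # vary one coordinate at a time
            total += f(xi) - f0     # first-order component function
        return total
    return approx

c = np.array([1.0, 1.0])
surrogate = hdmr_first_order(response, c)
x = np.array([1.2, 0.8])
print(response(x), surrogate(x))  # surrogate is close to the true value
```

Each first-order term needs only one-dimensional evaluations through the cut point, which is why the surrogate is cheap to build compared with sampling the full model.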

Findings

Results of five numerical examples involving structural/solid‐mechanics/geo‐technical engineering problems indicate that the failure probability obtained using the FHDMR approximation, for limit state/performance functions that are dominantly multiplicative in nature, is highly accurate compared with the conventional Monte Carlo method, while requiring fewer original model simulations.

Originality/value

This is the first time that FHDMR concepts have been applied in the field of reliability and system safety. The present computational approach is valuable to the practical modeling and design community, where users often suffer from the curse of dimensionality.

Details

Engineering Computations, vol. 25 no. 8
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 June 1999

Hooman Estelami

Abstract

Research in marketing indicates that consumers may be sensitive to the final digits of prices. For example, despite being substantively equivalent, a price such as $199 may create more favorable price perceptions than $200. However, existing research has primarily focused on the effects of price endings in the context of uni‐dimensional prices – prices consisting of a single number. Advertised prices in the marketplace are often multi‐dimensional, consisting of numerous price dimensions. In such pricing contexts, price endings may influence consumers’ ability to conduct the arithmetic required to compute the total advertised price. Examines the effect of various price ending strategies on consumers’ computational efforts. The findings indicate that the more commonly exercised price ending strategies tend to result in prices that are the most difficult for consumers to evaluate.
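The arithmetic burden the abstract describes can be made concrete with a toy comparison. The offers and component prices below are hypothetical, chosen only to show how ".99" endings complicate mental addition of a multi-dimensional price:

```python
# Hypothetical multi-dimensional advertised prices: base price,
# shipping charge and a monthly fee, summed to a total price.
nine_ending = [199.99, 24.99, 9.99]    # common ".99" price endings
round_ending = [200.00, 25.00, 10.00]  # round endings, easier to add mentally

total_nine = round(sum(nine_ending), 2)
total_round = sum(round_ending)
print(total_nine, total_round)  # 234.97 vs 235.0
```

The round-ending total can be computed at a glance; the nine-ending total requires carrying digits, which is the computational effort the study measures.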

Details

Journal of Product & Brand Management, vol. 8 no. 3
Type: Research Article
ISSN: 1061-0421

Keywords

Book part
Publication date: 10 October 2017

Suman Seth and Sabina Alkire

Abstract

A number of multidimensional poverty measures that respect the ordinal nature of dimensions have recently been proposed within the counting approach framework. Besides ensuring a reduction in poverty, however, it is important to monitor distributional changes to ensure that poverty reduction has been inclusive in reaching the poorest. Distributional issues are typically captured by adjusting a poverty measure to be sensitive to inequality among the poor. This approach, however, has certain practical and conceptual limitations. It conflicts, for example, with some policy-relevant measurement features, such as the ability to decompose a measure into dimensions post-identification and does not create an appropriate framework for assessing disparity in poverty across population subgroups. In this chapter, we propose and justify the use of a separate decomposable inequality measure – a positive multiple of “variance” – to capture the distribution of deprivations among the poor and to assess disparity in poverty across population subgroups. We demonstrate the applicability of our approach through two contrasting inter-temporal illustrations using Demographic Health Survey data sets for Haiti and India.
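The proposed inequality measure, a positive multiple of variance over the deprivation scores of the poor, can be sketched in a few lines. The scores and the scaling constant below are illustrative, not taken from the chapter:

```python
# Illustrative deprivation scores of four poor persons (each in [0, 1]).
scores = [0.4, 0.5, 0.9, 0.7]

n = len(scores)
mean = sum(scores) / n
variance = sum((s - mean) ** 2 for s in scores) / n  # population variance
inequality = 4 * variance  # a positive multiple of variance (4 is illustrative)
print(inequality)
```

Because variance is additively decomposable, the same measure can be computed within population subgroups to assess disparity in poverty across them, which is the use the chapter argues for.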

Article
Publication date: 18 July 2008

F.H. Bellamine and A. Elkamel

Abstract

Purpose

This paper seeks to present a novel computational intelligence technique to generate concise neural network models for distributed dynamic systems.

Design/methodology/approach

The approach used in this paper is based on artificial neural network architectures that incorporate linear and nonlinear principal component analysis, combined with generalized dimensional analysis.

Findings

Neural network principal component analysis coupled with generalized dimensional analysis reduces the input variable space by about 90 percent in the modeling of oil reservoirs. Once trained, the computation time is negligible and orders of magnitude faster than traditional discretisation schemes such as fine‐mesh finite difference.
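The linear part of the reduction the abstract reports can be sketched with PCA via the SVD: correlated input variables are projected onto the few components that carry almost all of the variance. The synthetic data below stand in for reservoir input variables and are purely illustrative:

```python
import numpy as np

# Synthetic stand-in for correlated reservoir inputs: 30 observed
# variables generated from only 3 underlying degrees of freedom.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 30))
X = latent @ mixing                      # 500 samples x 30 input variables

Xc = X - X.mean(axis=0)                  # centre before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)    # variance fraction per component

# Keep enough components to explain 99% of the variance.
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
Z = Xc @ Vt[:k].T                        # reduced inputs for the surrogate model
print(k, Z.shape)                        # far fewer than 30 inputs survive
```

A nonlinear (autoencoder-style) PCA, as the paper uses, follows the same idea but learns a curved projection instead of the linear one shown here.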

Practical implications

Finding the minimum number of independent input variables needed to characterize a system helps in extracting general rules about its behavior and allows design guidelines to be set quickly, particularly when evaluating changes in the physical properties of systems.

Originality/value

The methodology can be used to simulate dynamical systems characterized by differential equations in an interactive CAD and optimization environment, providing faster on‐line solutions and speeding up the development of design guidelines.

Details

Engineering Computations, vol. 25 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 31 December 2021

Praveen Kumar Lendale and N.M. Nandhitha

Abstract

Purpose

Speckle noise removal in ultrasound images is one of the important tasks in biomedical-imaging applications. Many filtering-based despeckling methods have been discussed in existing works, and two-dimensional (2-D) transforms are also used extensively to reduce speckle noise in ultrasound medical images. In recent years, many soft computing-based intelligent techniques have been applied to noise removal and segmentation. However, the accuracy of despeckling still needs to be improved through hybrid approaches.

Design/methodology/approach

The work focuses on a double filter bank structure with a framelet transform combined with a Gaussian filter (GF), together with a fuzzy clustering approach, for despeckling ultrasound medical images. The presented transform efficiently rejects speckle noise based on grey-scale relative thresholding, while the directional filter bank (DFB) preserves edge information.

Findings

The proposed approach is evaluated with performance indicators such as the mean square error (MSE), peak signal-to-noise ratio (PSNR), speckle suppression index (SSI), mean structural similarity and edge preservation index (EPI). The proposed methodology is found to be superior in terms of all of these indicators.
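The first two of these indicators have standard definitions and can be sketched directly. The images below are synthetic stand-ins with simulated multiplicative speckle, not data from the paper:

```python
import numpy as np

# Standard MSE and PSNR between a reference image and an estimate.
def mse(ref, est):
    return np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)

def psnr(ref, est, peak=255.0):
    m = mse(ref, est)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = clean * rng.normal(1.0, 0.1, size=clean.shape)  # multiplicative speckle
print(mse(clean, noisy), psnr(clean, noisy))
```

A despeckling filter is judged better when it raises PSNR (and lowers MSE) against the reference while, per the edge preservation index, not smoothing away structural detail.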

Originality/value

Fuzzy clustering methods have proved better than conventional threshold methods for noise removal. The algorithm gives a measurable improvement over other modern speckle reduction procedures, as it preserves geometric features even after noise removal.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 1 March 1997

Paul Steinmann, Peter Betsch and Erwin Stein

Abstract

The objective of this work is to develop an element technology to recover the plane stress response without any plane stress specific modifications in the large strain regime. Therefore, the essential feature of the proposed element formulation is an interface to arbitrary three‐dimensional constitutive laws. The easily implemented and computationally cheap four‐noded element is characterized by coarse mesh accuracy and satisfaction of the plane stress constraint in a weak sense. A number of example problems involving arbitrary small and large strain constitutive models demonstrate the excellent performance of the concept pursued in this work.

Details

Engineering Computations, vol. 14 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Content available
Book part
Publication date: 15 April 2020

Abstract

Details

Essays in Honor of Cheng Hsiao
Type: Book
ISBN: 978-1-78973-958-9

Article
Publication date: 1 June 2003

Jaroslav Mackerle

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 27 July 2012

Shi‐Woei Lin and Ming‐Tsang Lu

Abstract

Purpose

Methods and techniques of aggregating preferences or priorities in the analytic hierarchy process (AHP) usually ignore variation or dispersion among experts and are vulnerable to extreme values (generated by particular viewpoints or experts trying to distort the final ranking). The purpose of this paper is to propose a modelling approach and a graphical representation to characterize inconsistency and disagreement in the group decision making in the AHP.

Design/methodology/approach

The authors apply a regression approach for estimating the decision weights of the AHP using linear mixed models (LMM). They also test the linear mixed model and the multi‐dimensional scaling graphical display using a case of strategic performance management in education.

Findings

In addition to determining the weight vectors, the model allows the authors to decompose the variation or uncertainty in experts' judgments. Well‐established statistical theory can be used to estimate and rigorously test disagreement among experts, the residual uncertainty due to rounding errors in the AHP scale, and the inconsistency within individual experts' judgments. Beyond characterizing these different sources of uncertainty, the model also allows the authors to rigorously test other factors that might significantly affect weight assessments.
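A regression view of AHP weight estimation can be sketched with the classical logarithmic least-squares (row geometric mean) estimator, which is the simple special case underlying the authors' linear mixed model extension; the comparison matrix below is illustrative:

```python
import numpy as np

# One expert's pairwise comparison matrix over three criteria
# (A[i, j] = how strongly criterion i is preferred over criterion j).
A = np.array([
    [1.0,   3.0,   5.0],
    [1 / 3., 1.0,  2.0],
    [1 / 5., 1 / 2., 1.0],
])

# Logarithmic least squares: the row geometric mean solves the
# log-linear regression log A[i, j] = log w[i] - log w[j] + error.
gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w = gm / gm.sum()  # normalised priority weights
print(w)           # weights sum to 1; the dominant criterion gets the most
```

Stacking the log-comparisons from many experts and adding random expert effects to this regression is what turns it into the linear mixed model the paper studies, where the residual terms separate inconsistency from disagreement.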

Originality/value

This study provides a model to better characterize different sources of uncertainty. This approach can improve decision quality by allowing analysts to view the aggregated judgments in a proper context and pinpoint the uncertain component that significantly affects decisions.
