Search results

1 – 10 of over 13000
Article
Publication date: 31 March 2023

Huseyin Saglik, Airong Chen and Rujin Ma

Beginners and even experienced analysts have difficulty completing structural fire analysis because of numerical problems such as convergence errors and singularities, and have…

Abstract

Purpose

Beginners and even experienced analysts have difficulty completing structural fire analysis because of numerical problems such as convergence errors and singularities, and have to spend a lot of time making repetitive changes to the model. The aim of this article is to highlight the advantages of an explicit solver, which can eliminate these difficulties in finite element analyses involving highly nonlinear contact, initial clearances between modeled parts and large deflections caused by high temperature. This article provides important information, especially for researchers and engineers who are new to structural fire analysis.

Design/methodology/approach

The finite element method is used to achieve the stated purposes. First, a comparative study between the implicit and explicit solvers is conducted in Abaqus. Then, a validation process illustrates the explicit workflow using sequentially coupled heat transfer and structural analyses.
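
The contrast between the two strategies can be illustrated on a single degree of freedom. Below is a purely conceptual sketch, not the paper's Abaqus models: a bar with an assumed snap-through-type internal force is loaded slightly beyond its limit point; load-controlled implicit Newton iteration fails to converge there, while an explicit dynamic-relaxation update simply marches to the stable equilibrium. The force law, mass, damping and time step are illustrative assumptions.

```python
import numpy as np

def f_int(u):           # assumed nonlinear internal force with a limit point near u ~ 0.48
    return u**3 - 3.0*u**2 + 2.2*u

def k_tan(u):           # tangent stiffness df_int/du
    return 3.0*u**2 - 6.0*u + 2.2

f_ext = 0.6             # applied load, slightly above the limit load (~0.48)

# Implicit: full Newton iteration under load control
u = 0.0
for it in range(25):
    r = f_ext - f_int(u)
    if abs(r) < 1e-8:
        break
    u += r / k_tan(u)   # tangent changes sign near the limit point, so the iteration oscillates
print(f"implicit Newton:     u = {u:7.3f}, residual = {f_ext - f_int(u):+.2e}")

# Explicit: central-difference dynamic relaxation with fictitious mass and damping
m, c, dt = 1.0, 1.5, 1.0e-3
u, v = 0.0, 0.0
for _ in range(20000):
    a = (f_ext - f_int(u) - c*v) / m
    v += a*dt
    u += v*dt
print(f"explicit relaxation: u = {u:7.3f}, residual = {f_ext - f_int(u):+.2e}")
```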

Findings

Explicit analysis offers an easier solution than implicit analysis for modeling multi-bolted connections at high temperature. An optimum mesh density for bolted connections is presented to reflect realistic structural behavior. The presented explicit procedure, with the recommended mesh density, is used to validate an experimental study of a multi-bolted splice connection under the ISO 834 standard fire curve, and good agreement is achieved.

Originality/value

The value of the study lies in its examination of the points to be considered in structural fire analysis, making it a guide from which future researchers can benefit, particularly for the modeling and analysis of multi-bolted connections in finite element software at high temperatures. The article can help shorten, or even eliminate, the iterative debugging phase, which is a problematic and very time-consuming process for many researchers.

Details

Journal of Structural Fire Engineering, vol. 14 no. 4
Type: Research Article
ISSN: 2040-2317

Keywords

Article
Publication date: 16 August 2013

Max A.N. Hendriks and Jan G. Rots

The purpose of this paper is to review recent advances and current issues in the realm of sequentially linear analysis.

Abstract

Purpose

The purpose of this paper is to review recent advances and current issues in the realm of sequentially linear analysis.

Design/methodology/approach

Sequentially linear analysis is an alternative to non‐linear finite element analysis of structures when bifurcation, snap‐back or divergence problems arise. The incremental‐iterative procedure, adopted in nonlinear finite element analysis, is replaced by a sequence of scaled linear finite element analyses with decreasing secant stiffness, corresponding to local damage increments. The focus is on reinforced concrete structures, where multiple cracks initiate and compete to survive.
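
As an illustration of the saw-tooth idea, the sketch below applies the procedure to three parallel bars sharing one end displacement: each cycle runs a linear analysis with the current secant stiffnesses, scales the load until the critical bar reaches its current strength, and then reduces that bar's stiffness and strength along an assumed saw-tooth law. It is a minimal sketch with illustrative numbers, not the authors' smeared-crack implementation.

```python
import numpy as np

k   = np.array([10.0, 10.0, 10.0])   # assumed secant stiffnesses of three parallel bars
f_t = np.array([1.0, 1.2, 1.5])      # assumed current strengths (saw-tooth envelope)
path = []                            # recorded (load factor, displacement) points

for step in range(30):
    K = k.sum()                      # linear analysis with the current secant stiffnesses
    if K < 1e-9:
        break
    u_unit = 1.0 / K                 # displacement under a unit reference load
    force_unit = k * u_unit          # bar forces under that unit load
    lam = np.full(k.size, np.inf)    # critical load factor of each bar
    active = force_unit > 1e-12
    lam[active] = f_t[active] / force_unit[active]
    crit = int(np.argmin(lam))       # bar that reaches its strength first
    path.append((lam[crit], lam[crit] * u_unit))
    # saw-tooth damage increment: drop the critical bar's stiffness and strength
    k[crit] *= 0.5
    f_t[crit] *= 0.7
    if k[crit] < 1e-3:               # bar considered fully cracked
        k[crit] = f_t[crit] = 0.0

for lam_c, u in path:
    print(f"load factor = {lam_c:6.3f}   displacement = {u:7.4f}")
```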

Findings

Compared to nonlinear smeared crack models in incremental‐iterative settings, the sequentially linear model is shown to be robust and effective in predicting localizations, crack spacing and crack width as well as brittle shear behavior. To date, sequentially linear analysis lacks a proper crack-closing algorithm. In addition, and of utmost importance for many practical applications, the algorithm requires improvement to deal with non‐proportional loadings.

Originality/value

This article gives an up‐to‐date research overview on the applicability of sequentially linear analysis. For the issue of non‐proportional loading, it indicates solution directions.

Details

Engineering Computations, vol. 30 no. 6
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 12 August 2022

Gabriel W. Rodrigues, Fabiano L. Oliveira, Ilmar F. Santos and Marco L. Bittencourt

This paper aims to compare different dynamical models, cavitation procedures and numerical methods to simulate hydrodynamic lubricated bearings of internal combustion engines.


Abstract

Purpose

This paper aims to compare different dynamical models, cavitation procedures and numerical methods to simulate hydrodynamic lubricated bearings of internal combustion engines.

Design/methodology/approach

Two dynamical models are considered for the main bearing of combustion engines. The first is a fluid-structure interaction multi-body dynamics model coupled with lubricated bearings, where the equilibrium and Reynolds equations are solved together. The second model finds the equilibrium position of the bearing subjected to previously calculated dynamical loads. The traditional p-θ procedure and the cavitation model of Giacopini et al. (2010) are adopted for cavitation purposes. The influence of the finite difference and finite element numerical methods is investigated.
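
For orientation, the sketch below solves the steady, isoviscous Reynolds equation for a plain journal bearing by finite differences, with the simple Gümbel condition (sub-ambient pressures discarded) standing in for cavitation. This is deliberately much cruder than the p-θ and Giacopini et al. (2010) mass-conserving models compared in the paper, and the geometry and operating values are illustrative assumptions.

```python
import numpy as np

R, L, c = 0.025, 0.02, 30e-6        # assumed journal radius, bearing length, radial clearance [m]
eps, mu, omega = 0.6, 0.01, 300.0   # eccentricity ratio, viscosity [Pa s], speed [rad/s]
U = omega * R                        # journal surface speed [m/s]

ntheta, nz = 72, 24
theta = np.linspace(0.0, 2.0*np.pi, ntheta)
z = np.linspace(-L/2, L/2, nz)
dx, dz = R*(theta[1] - theta[0]), z[1] - z[0]

h = c*(1.0 + eps*np.cos(theta))                 # film thickness h(theta)
dhdx = -c*eps*np.sin(theta) / R                 # dh/dx with x = R*theta
p = np.zeros((ntheta, nz))                      # pressure, ambient (zero) on all boundaries

# Gauss-Seidel sweeps for d/dx(h^3 dp/dx) + d/dz(h^3 dp/dz) = 6*mu*U*dh/dx
for sweep in range(2000):
    for i in range(1, ntheta - 1):
        hE = 0.5*(h[i]**3 + h[i + 1]**3)        # h^3 averaged to the cell faces
        hW = 0.5*(h[i]**3 + h[i - 1]**3)
        aE, aW = hE/dx**2, hW/dx**2
        aN = aS = h[i]**3/dz**2
        rhs = 6.0*mu*U*dhdx[i]
        for j in range(1, nz - 1):
            p[i, j] = (aE*p[i + 1, j] + aW*p[i - 1, j]
                       + aN*p[i, j + 1] + aS*p[i, j - 1] - rhs) / (aE + aW + aN + aS)
            if p[i, j] < 0.0:                   # Guembel condition: discard sub-ambient pressure
                p[i, j] = 0.0

print(f"peak film pressure ~ {p.max()/1e6:.2f} MPa")
```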

Findings

Simulations were carried out for small-, mid- and large-sized engines, and the dynamical models differed mainly in the predicted journal orbits. The finite element method with Giacopini’s cavitation model showed improved numerical stability for all three engines.

Research limitations/implications

The dynamic models do not consider the flexibility of the components of the main mechanism of combustion engines which may overestimate the oil pressure and journal orbits.

Practical implications

It can help researchers and engineers to decide which combination of methods is best suited for their needs and the implications associated with each one.

Social implications

The used methods may help engineers to design better and more efficient combustion engines.

Originality/value

This paper helps practitioners to understand the effects of different methods on the results. Additionally, depending on the engine, one approach can be more effective than the other.

Details

Industrial Lubrication and Tribology, vol. 74 no. 9
Type: Research Article
ISSN: 0036-8792

Keywords

Article
Publication date: 4 January 2013

Shamsuddin Ahmed

The purpose of this paper is to present a degenerated simplex search method to optimize a neural network error function. By repeatedly reflecting and expanding a simplex, the…

Abstract

Purpose

The purpose of this paper is to present a degenerated simplex search method to optimize a neural network error function. By repeatedly reflecting and expanding a simplex, the centroid property of the simplex changes the location of the simplex vertices. The proposed algorithm selects the location of the centroid of a simplex as the possible minimum point of an artificial neural network (ANN) error function. The algorithm continually changes the shape of the simplex to move in multiple directions in error function space. Each movement of the simplex in the search space generates a local minimum. Simulating the simplex geometry, the algorithm generates random vertices to train the ANN error function. Problems in lower dimensions are easy to solve. The algorithm is reliable and locates a minimum function value at an early stage of training. It is appropriate for classification, forecasting and optimization problems.

Design/methodology/approach

As more neurons are added to the ANN structure, the terrain of the error function becomes complex and its Hessian matrix tends to be positive semi‐definite. As a result, derivative-based training methods face convergence difficulties; the same holds if the error function contains several local minima or if the error surface is almost flat. The proposed algorithm is an alternative in such cases. This paper presents a non‐degenerate simplex training algorithm. It improves convergence by maintaining an irregular shape of the simplex geometry during the degenerated stage. A randomized simplex geometry is introduced to maintain the irregular contour of a degenerated simplex during training.
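
As a minimal sketch of the general idea (training an ANN error function with a derivative-free simplex search), the example below fits a tiny 2-3-1 network to the XOR problem using the standard Nelder-Mead simplex from SciPy. The randomized, degeneracy-avoiding modifications proposed in the paper are not reproduced; the network size, data and settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])              # XOR targets

def unpack(w):                                   # 2-3-1 network: 13 parameters in total
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    return W1, b1, W2, b2

def mse(w):                                      # the ANN error function to be minimized
    W1, b1, W2, b2 = unpack(w)
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out - y)**2)

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=13)              # random initial point for the simplex
res = minimize(mse, w0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-12})
print(f"final training MSE after {res.nit} simplex iterations: {res.fun:.2e}")
```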

Findings

Simulation results show that the new search is efficient and improves function convergence. Classification and statistical time series problems in higher dimensions are solved. Experimental results show that the new algorithm (degenerated simplex algorithm, DSA) works better than the random simplex algorithm (RSM) and the back propagation training method (BPM), and confirm the algorithm's robust performance.

Research limitations/implications

The algorithm is expected to face convergence complexity for optimization problems in higher dimensions. A good-quality suboptimal solution is available at an early stage of training, and the locally optimized function value determined by the algorithm is not far from the global optimum.

Practical implications

The traditional simplex method has difficulty converging when training an ANN error function because, during training, the simplex cannot maintain an irregular shape and so degenerates; the simplex becomes extremely small and convergence difficulties are common. Steps are taken to redefine the simplex so that the algorithm avoids such local minima. The proposed ANN training method is derivative-free: no first- or second-order derivative information is demanded, which makes training the ANN error function simple.

Originality/value

The algorithm optimizes the ANN error function when the Hessian matrix of the error function is ill-conditioned. Since no derivative information is necessary, the algorithm is appealing for instances where it is hard to obtain derivative information. It is robust and is considered a benchmark algorithm for unknown optimization problems.

Article
Publication date: 1 February 1997

Amit Dutta and Donald W. White

In the inelastic stability analysis of plated structures, incremental‐iterative finite element methods sometimes encounter prohibitive solution difficulties in the vicinity of…

Abstract

In the inelastic stability analysis of plated structures, incremental‐iterative finite element methods sometimes encounter prohibitive solution difficulties in the vicinity of sharp limit points, branch points and other regions of abrupt non‐linearity. Presents an analysis system that attempts to trace the non‐linear response associated with these types of problems at minor computational cost. Proposes a semi‐heuristic method for automatic load incrementation, termed the adaptive arc‐length procedure. This procedure is capable of detecting abrupt non‐linearities and reducing the increment size prior to encountering iterative convergence difficulties. The adaptive arc‐length method is also capable of increasing the increment size rapidly in regions of near linear response. This strategy, combined with consistent linearization to obtain the updated tangent stiffness matrix in all iterative steps, and with the use of a “minimum residual displacement” constraint on the iterations, is found to be effective in avoiding solution difficulties in many types of severe non‐linear problems. However, additional procedures are necessary to negotiate branch points within the solution path, as well as to ameliorate convergence difficulties in certain situations. Presents a special algorithm, termed the bifurcation processor, which is effective for solving many of these types of problems. Discusses several example solutions to illustrate the performance of the resulting analysis system.
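
The idea of arc-length continuation with step adaptation can be sketched on a single degree of freedom. The code below traces the equilibrium path of an assumed snap-through force law with a textbook spherical (Crisfield-type) arc-length corrector and a crude step-size rule based on the iteration count; it is not the authors' adaptive arc-length procedure or bifurcation processor, and all constants are illustrative.

```python
import numpy as np

def f_int(u):                          # assumed internal force with a sharp limit point
    return u**3 - 3.0*u**2 + 2.2*u

def k_tan(u):                          # tangent stiffness df_int/du
    return 3.0*u**2 - 6.0*u + 2.2

psi, dl, n_des = 1.0, 0.2, 4           # load-term scaling, arc length, target iterations per step
u = lam = 0.0                          # displacement and load factor (reference load = 1)
prev = np.array([1.0, 1.0])            # previous increment direction, for orientation
path = [(u, lam)]

for inc in range(50):
    # predictor: tangent direction (du, dlam) ~ (1, k_tan), kept pointing along the path
    t = np.array([1.0, k_tan(u)])
    t = t / np.sqrt(t[0]**2 + psi**2 * t[1]**2)
    if np.dot(t, prev) < 0.0:
        t = -t
    du, dlam = dl * t

    # corrector: Newton iteration on the residual plus the spherical arc-length constraint
    converged = False
    for it in range(1, 21):
        r1 = f_int(u + du) - (lam + dlam)
        r2 = du**2 + psi**2*dlam**2 - dl**2
        if abs(r1) < 1e-10 and abs(r2) < 1e-10:
            converged = True
            break
        J = np.array([[k_tan(u + du), -1.0],
                      [2.0*du, 2.0*psi**2*dlam]])
        ddu, ddlam = np.linalg.solve(J, [-r1, -r2])
        du, dlam = du + ddu, dlam + ddlam

    if not converged:                  # cut the increment size and retry
        dl *= 0.5
        continue

    u, lam = u + du, lam + dlam
    prev = np.array([du, dlam])
    path.append((u, lam))
    dl *= np.clip(np.sqrt(n_des / it), 0.5, 2.0)   # crude adaptive step-size rule

for u_i, lam_i in path[::10]:
    print(f"u = {u_i:6.3f}   load factor = {lam_i:6.3f}")
```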

Details

Engineering Computations, vol. 14 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 5 January 2010

R. Rossi and E. Oñate

The purpose of this paper is to analyse algorithms for fluid‐structure interaction (FSI) from a purely algorithmic point of view.

Abstract

Purpose

The purpose of this paper is to analyse algorithms for fluid‐structure interaction (FSI) from a purely algorithmic point of view.

Design/methodology/approach

First of all, a 1D model problem is selected, for which both the fluid and structural behavior are represented through a minimum number of parameters. Different coupling algorithms and time integration schemes are then applied to the simplified model problem and their properties are discussed depending on the values assumed by the parameters. Both exact and approximate time integration schemes are considered in the same framework so as to allow an assessment of the different sources of error.
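
A single-degree-of-freedom "added mass" toy problem (illustrative, not the paper's actual 1D model) is enough to reproduce the behavior described here: plain Gauss-Seidel coupling iterations between the two fields diverge once the fluid-to-structure mass ratio exceeds one, while Aitken dynamic relaxation, one common acceleration technique (not necessarily the one discussed in the paper), restores convergence. All values are assumed.

```python
m_s, m_f, k, u, p = 1.0, 1.8, 50.0, 0.01, 2.0    # structure mass, added (fluid) mass, stiffness, state, load
a_exact = (p - k*u) / (m_s + m_f)                # monolithic solution for the interface acceleration

def structure(f_fluid):
    # structural solver: acceleration for a given interface force from the fluid
    return (p + f_fluid - k*u) / m_s

def fluid(a):
    # fluid solver: added-mass reaction force for a given interface acceleration
    return -m_f * a

def coupled(use_aitken):
    a, omega, r_old = 0.0, 0.5, None
    for it in range(1, 51):
        a_new = structure(fluid(a))              # one Gauss-Seidel sweep: fluid, then structure
        r = a_new - a                            # interface residual
        if abs(r) < 1e-12:
            return a, it
        if use_aitken and r_old is not None:
            omega = -omega * r_old / (r - r_old) # Aitken's delta-squared relaxation factor
        a += (omega if use_aitken else 1.0) * r
        r_old = r
    return a, it

for name, flag in [("plain Gauss-Seidel", False), ("Aitken relaxation ", True)]:
    a, its = coupled(flag)
    print(f"{name}: a = {a: .4e} after {its} iterations (exact {a_exact:.4e})")
```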

Findings

The properties of staggered coupling schemes are confirmed. Insight into the convergence behavior of iterative coupling schemes is provided. A technique to improve such convergence is then discussed.

Research limitations/implications

All the results are proved for a given family of time integration schemes. The technique proposed can be applied to other families of time integration techniques, but some of the analytical results need to be reworked under this assumption.

Practical implications

The problems that are commonly encountered in FSI can be justified by simple arguments. It can also be shown that the limit at which trivial iterative schemes experience convergence difficulties is very close to that at which staggered schemes become unstable.

Originality/value

All the results shown are based on simple mathematics. The problems are presented so as to be independent of the particular choice of fluid flow solver.

Details

Engineering Computations, vol. 27 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 March 1998

Christine M. Abbott

Organisational convergence, and convergence of the technologies which underlie information delivery and management, place new demands upon librarians to extend their skills and…


Abstract

Organisational convergence, and convergence of the technologies which underlie information delivery and management, place new demands upon librarians to extend their skills and knowledge. Significant opportunities exist for staff who are able to combine their professional task skills with good process skills. Convergence poses difficulties for managers of converged services in ensuring that all groups of library staff have the same opportunities to develop hybrid skills; and in finding training products suitable for staff working in a converged environment. It is anticipated that experience of working in converged services will enhance the career prospects of staff at all levels.

Details

Librarian Career Development, vol. 6 no. 3
Type: Research Article
ISSN: 0968-0810

Keywords

Article
Publication date: 7 October 2013

M. Vaz Jr, E.L. Cardoso and J. Stahlschmidt

Parameter identification is a technique which aims at determining material or other process parameters based on a combination of experimental and numerical techniques. In recent…

Abstract

Purpose

Parameter identification is a technique which aims at determining material or other process parameters based on a combination of experimental and numerical techniques. In recent years, heuristic approaches, such as genetic algorithms (GAs), have been proposed as possible alternatives to classical identification procedures. The present work shows that particle swarm optimization (PSO), as an example of such methods, is also appropriate for the identification of inelastic parameters. The paper aims to discuss these issues.

Design/methodology/approach

PSO is a class of swarm intelligence algorithms which attempts to reproduce the social behaviour of a generic population. In parameter identification, each individual particle is associated with hyper-coordinates in the search space, corresponding to a set of material parameters, upon which velocity operators with random components are applied, leading the particles to cluster together at convergence.
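
The particle and velocity updates described above are easy to sketch. The example below uses PSO to recover the three parameters of a hypothetical Ludwik-type hardening law, sigma = sigma_y + K*eps^n, from synthetic "experimental" data; in the paper the objective is evaluated through finite element simulations, so the closed-form curve, the bounds and the swarm settings here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = np.linspace(0.0, 0.2, 25)                          # plastic strains
true = np.array([250.0, 550.0, 0.35])                    # reference parameters (sigma_y [MPa], K [MPa], n)
sigma_exp = true[0] + true[1]*eps**true[2]               # synthetic "experimental" stresses

def objective(p):                                        # squared misfit for each particle (row of p)
    sigma_num = p[:, 0:1] + p[:, 1:2]*eps**p[:, 2:3]
    return np.sum((sigma_num - sigma_exp)**2, axis=1)

lo = np.array([100.0, 100.0, 0.05])                      # assumed search-space bounds
hi = np.array([500.0, 1000.0, 0.80])

n_part, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                                # inertia and acceleration coefficients
x = lo + (hi - lo)*rng.random((n_part, 3))               # particle positions = candidate parameter sets
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_part, 3)), rng.random((n_part, 3))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)      # velocity update with random components
    x = np.clip(x + v, lo, hi)
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()             # particles cluster around this at convergence

print("identified (sigma_y, K, n):", np.round(gbest, 3), " reference:", true)
```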

Findings

PSO has proved to be a viable alternative for the identification of inelastic parameters owing to its robustness (achieving the global minimum with high tolerance for variations in population size and control parameters) and, in contrast to GAs, its higher convergence rate and small number of control variables.

Originality/value

PSO has been mostly applied to electrical and industrial engineering. This paper extends the field of application of the method to identification of inelastic material parameters.

Details

Engineering Computations, vol. 30 no. 7
Type: Research Article
ISSN: 0264-4401

Keywords

Abstract

Details

Understanding Intercultural Interaction: An Analysis of Key Concepts, 2nd Edition
Type: Book
ISBN: 978-1-83753-438-8

Article
Publication date: 10 August 2010

Shamsuddin Ahmed

The proposed algorithm successfully optimizes complex error functions, which are difficult to differentiate, ill conditioned or discontinuous. It is a benchmark to identify…

Abstract

Purpose

The proposed algorithm successfully optimizes complex error functions, which are difficult to differentiate, ill conditioned or discontinuous. It is a benchmark to identify initial solutions in artificial neural network (ANN) training.

Design/methodology/approach

A multi‐directional ANN training algorithm that needs no derivative information is introduced as a constrained one‐dimensional problem. A directional search vector examines the ANN error function in weight parameter space. The search vector moves in all possible directions to find the minimum function value. The network weights are increased or decreased depending on the shape of the error function hypersurface such that the search vector finds descent directions, and the minimum function value is thus determined. To accelerate the convergence of the algorithm, a momentum search is designed that avoids overshooting the local minimum.
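
To make the general idea concrete, the sketch below trains a tiny network with a derivative-free compass search over the weight coordinates plus a simple momentum move along recently successful directions. It only mirrors the spirit of the approach: the paper's multi-directional search vector, constrained one-dimensional search and self-adaptive learning rates are not reproduced, and the network, data and constants are illustrative assumptions.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])              # XOR targets

def error(w):                                   # ANN error function (2-3-1 network, 13 weights)
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out - y)**2)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=13)
best = error(w)
step, momentum = 0.5, np.zeros_like(w)

for sweep in range(400):
    improved = False
    for i in range(w.size):                     # probe +/- each coordinate direction
        for sgn in (+1.0, -1.0):
            trial = w.copy()
            trial[i] += sgn*step
            f = error(trial)
            if f < best:
                momentum = 0.7*momentum + (trial - w)   # remember the successful direction
                w, best, improved = trial, f, True
    if improved:
        f = error(w + momentum)                 # momentum move along recent descent directions
        if f < best:
            w, best = w + momentum, f
    else:
        step *= 0.5                             # no descent direction found: shrink the step
        if step < 1e-6:
            break

print(f"final training MSE: {best:.2e}")
```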

Findings

The training algorithm is insensitive to the initial starting weights, in comparison with gradient‐based methods, and can therefore locate a relative local minimum from anywhere on the error surface. This is an important property of the training method. The algorithm is suitable for error functions that are discontinuous or ill-conditioned, or whose derivatives are not readily available. It improves on the standard back propagation method in convergence and avoids premature termination near pseudo local minima.

Research limitations/implications

Classification problems are solved efficiently with this method, but complex time series in some instances slow convergence owing to the complexity of the error surface. Different ANN structures can be investigated further to assess the performance of the algorithm.

Practical implications

The search scheme moves along the valleys and ridges of the error function to trace the neighborhood of a minimum. The algorithm only evaluates the error function. As soon as the algorithm detects a flat region of the error function, care is taken to avoid slow convergence.

Originality/value

The algorithm is efficient owing to the incorporation of three important methodologies. The first mechanism is the momentum search. The second is the implementation of a directional search vector along coordinate directions. The third is a one‐dimensional search in a constrained region to identify self‐adaptive learning rates and improve convergence.

Details

Kybernetes, vol. 39 no. 7
Type: Research Article
ISSN: 0368-492X

Keywords
