Search results
Christian Wellmann, Claudia Lillie and Peter Wriggers
Abstract
Purpose
The paper aims to introduce an efficient contact detection algorithm for smooth convex particles.
Design/methodology/approach
The contact points of adjacent particles are defined according to the common-normal concept. The problem of contact detection is formulated as a 2D unconstrained optimization problem that is solved by a combination of Newton's method and a Levenberg-Marquardt method.
Findings
The contact detection algorithm is efficient in terms of the number of iterations required to reach a high accuracy. In the case of non‐penetrating particles, a penetration can be ruled out in the course of the iterative solution before convergence is reached.
Research limitations/implications
The algorithm is only applicable to smooth convex particles, where a bijective relation between the surface points and the surface normals exists.
Originality/value
By a new kind of formulation, the problem of contact detection between 3D particles can be reduced to a 2D unconstrained optimization problem. This formulation enables fast contact exclusions in the case of non‐penetrating particles.
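The solver combination the abstract describes, a Newton iteration with a Levenberg-Marquardt fallback for a 2D unconstrained minimization, can be sketched on a toy objective. Everything below (the objective, function names, damping schedule) is illustrative; the paper's actual objective comes from the common-normal formulation between two particle surfaces.

```python
import numpy as np

# Toy 2D objective standing in for the contact-distance function;
# gradient and Hessian are hand-coded for the Newton step.
def f(x):
    a, b = x
    return (a - 1.0)**2 + 10.0*(b - a**2)**2

def grad(x):
    a, b = x
    return np.array([2*(a - 1) - 40*a*(b - a**2), 20*(b - a**2)])

def hess(x):
    a, b = x
    return np.array([[2 - 40*b + 120*a**2, -40.0*a],
                     [-40.0*a, 20.0]])

def newton_lm(x0, tol=1e-8, max_iter=200):
    """Newton iteration with Levenberg-Marquardt damping as a fallback:
    the damping lam grows until the step decreases f, and shrinks back
    toward a pure Newton step after every accepted update."""
    x, lam = np.asarray(x0, dtype=float), 1e-3
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        while lam < 1e12:
            try:
                step = np.linalg.solve(H + lam*np.eye(2), -g)
            except np.linalg.LinAlgError:
                lam *= 10
                continue
            if f(x + step) < f(x):
                x, lam = x + step, max(lam/10, 1e-12)
                break
            lam *= 10
        else:
            break                      # no decreasing step found
    return x

x_opt = newton_lm([-1.0, 2.0])         # minimizer of the toy objective
```

In the actual contact problem the two optimization variables would parametrize the candidate contact points on the particle surfaces, and a sign test on the common-normal distance during the iteration permits the early contact exclusion mentioned in the findings.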
M.A. Gutierrez, Y.M. Ojanguren and J.J. Anza
Abstract
The numerical simulation of metal forming processes by means of finite element techniques requires large computational effort, which conflicts with the need for interactivity in industrial applications. This work analyses the computational efficiency of algorithms combining elastoplasticity with finite deformation and contact mechanics and, in particular, the optimal solution of the linear systems to be solved through the incremental-iterative schemes associated with nonlinear implicit analysis. A method based on domain decomposition techniques, especially adapted to contact problems, is presented, as well as the improved performance obtained in its application to hot rolling simulation as a consequence of bandwidth reduction and the differentiated treatment of subdomains along the nonlinear analysis.
Dimos C. Charmpis and Manolis Papadrakakis
Abstract
Balancing and dual domain decomposition methods (DDMs) comprise a family of efficient high performance solution approaches for a large number of problems in computational mechanics. Such DDMs are used in practice on parallel computing environments with the number of generated subdomains being generally larger than the number of available processors. This paper presents an effective heuristic technique for organizing the subdomains into subdomain clusters, in order to assign each cluster to a processor. This task is handled by the proposed approach as a graph partitioning optimization problem using the publicly available software METIS. The objective of the optimization process is to minimize the communication requirements of the DDMs under the constraint of producing balanced processor workloads. This constrained optimization procedure for treating the subdomain cluster generation task leads to increased computational efficiency for balancing and dual DDMs.
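The clustering objective described here, balanced processor workloads with minimal inter-cluster communication, is solved in the paper with METIS; as a minimal sketch of the same objective, a single greedy pass can be written as follows (all names, the weights and the imbalance tolerance are illustrative):

```python
from collections import defaultdict

def cluster_subdomains(weights, edges, n_procs, imbalance=1.2):
    """Greedy sketch: assign each subdomain to a processor, keeping the
    per-processor weight under a balance cap while preferring processors
    that already hold neighbouring subdomains (fewer cut edges means
    less inter-processor communication in the DDM solver)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cap = imbalance * sum(weights.values()) / n_procs
    load = [0.0]*n_procs
    assign = {}
    for s in sorted(weights, key=weights.get, reverse=True):  # heavy first
        allowed = [p for p in range(n_procs)
                   if load[p] + weights[s] <= cap] or list(range(n_procs))
        best = min(allowed, key=lambda p: (-sum(assign.get(n) == p
                                                for n in adj[s]), load[p]))
        assign[s] = best
        load[best] += weights[s]
    return assign, load

# four equal subdomains forming two coupled pairs, two processors
assign, load = cluster_subdomains({0: 1, 1: 1, 2: 1, 3: 1},
                                  [(0, 1), (2, 3)], 2)
```

A multilevel k-way partitioner such as METIS explores far better cuts than this single greedy pass, which is why the paper delegates the optimization to it.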
Andro Rak, Luka Grbčić, Ante Sikirica and Lado Kranjčević
Abstract
Purpose
The purpose of this paper is the examination of fluid flow around a NACA0012 airfoil, with the aim of numerical validation between experimental results in the wind tunnel and a Lattice Boltzmann method (LBM) analysis at a medium Reynolds number (Re = 191,000). The LBM-large eddy simulation (LES) method described in this paper opens up opportunities for faster computational fluid dynamics (CFD) analysis, because of the scalability of the LBM on high-performance computing architectures, more specifically general-purpose graphics processing units (GPGPUs), while retaining the high-resolution LES approach.
Design/methodology/approach
The process starts with data collection in an open-circuit wind tunnel experiment. The pressure coefficient is then used as the comparative variable at varying angles of attack (2°, 4°, 6° and 8°) for both the experiment and the LBM analysis. To numerically reproduce the experimental results, the LBM coupled with the LES turbulence model, a generalized wall function (GWF) and the cumulant collision operator with the D3Q27 velocity set has been used. A mesh independence study is also provided to ensure result congruence.
Findings
The proposed LBM methodology is capable of highly accurate predictions when compared with experimental data. In addition, a special significance of this work is the possibility of comparing experiment and CFD over the same domain dimensions.
Originality/value
Considering the quality of the results, the root-mean-square error (RMSE) shows good correlation for both the airfoil's upper and lower surfaces. More precisely, across all angles of attack, the maximal RMSE is 0.105 for the upper surface and 0.089 for the lower surface.
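The paper's solver (D3Q27 velocity set, cumulant collision, LES, generalized wall function) is well beyond a snippet, but the stream-and-collide structure common to every LBM code can be shown with its simplest relative, a D2Q9 BGK step on a periodic grid. This is purely illustrative and not the paper's scheme:

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities c and their weights w.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('qd,xyd->qxy', c, u)          # c_i . u per node
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collide-and-stream update on a periodic grid."""
    rho = f.sum(axis=0)                           # density moment
    u = np.einsum('qxy,qd->xyd', f, c) / rho[..., None]   # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau       # BGK relaxation
    for i, (cx, cy) in enumerate(c):              # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# a uniform fluid at rest stays at equilibrium under the update
f0 = equilibrium(np.ones((8, 8)), np.zeros((8, 8, 2)))
f1 = lbm_step(f0.copy(), tau=0.8)
```

The cumulant collision operator of the paper replaces the single-relaxation BGK line with a moment-space transform, and the LES model makes tau depend on the local strain rate; the surrounding stream-and-collide loop is unchanged, which is what makes LBM map so well onto GPGPUs.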
Muhannad Aldosary, Jinsheng Wang and Chenfeng Li
Abstract
Purpose
This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been searching for uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers and practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.
Design/methodology/approach
This confusing state of affairs significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, the research efforts in UQ are mostly focused in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.
Findings
Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, the response surface method, surrogate methods, polynomial chaos expansion, the perturbation method, the stochastic collocation method, etc. The review and comparison tests comment and conclude not only on the accuracy and efficiency of each method but also on its applicability to different types of uncertainty propagation problems.
Originality/value
The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.
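As a minimal concrete instance of the baseline method in such comparisons, crude Monte Carlo, here applied to a linear limit state whose failure probability is known in closed form. The limit state and all names are illustrative, not the paper's test cases:

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5*(1.0 + erf(x / sqrt(2.0)))

def mc_failure_prob(g, dim, n_samples=200_000, seed=0):
    """Crude Monte Carlo estimate of Pf = P[g(X) <= 0] for X ~ N(0, I):
    sample, evaluate the limit state, count failures."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))
    return float(np.mean(g(x) <= 0.0))

# Linear limit state g(x) = beta - (x1 + x2)/sqrt(2): the exact failure
# probability is Phi(-beta), and FORM would recover beta exactly here.
beta = 2.0
g = lambda x: beta - x.sum(axis=1)/np.sqrt(x.shape[1])
pf_mc = mc_failure_prob(g, dim=2)
pf_exact = phi_cdf(-beta)
```

The variance-reduction methods in the review (importance sampling, subset simulation) replace the raw sampler with one concentrated near the failure region, while FORM/SORM replace sampling altogether with an iterative search for the design point and the reliability index beta.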
Tiago Oliveira, Wilber Vélez and Artur Portela
Abstract
Purpose
This paper is concerned with new formulations of local meshfree and finite element numerical methods, for the solution of two-dimensional problems in linear elasticity.
Design/methodology/approach
In the local domain, assigned to each node of a discretization, the work theorem establishes an energy relationship between a statically admissible stress field and an independent kinematically admissible strain field. This relationship, derived as a weighted residual weak form, is expressed as an integral local form. Based on the independence of the stress and strain fields, this local form of the work theorem is kinematically formulated with a simple rigid-body displacement to be applied by local meshfree and finite element numerical methods. The main feature of this paper is the use of a linearly integrated local form that implements a quite simple algorithm with no further integration required.
Findings
The reduced integration performed by this linearly integrated formulation plays a key role in the behavior of local numerical methods, since it implies a reduction of the nodal stiffness which, in turn, leads to an increase of the solution accuracy and, most importantly, presents no instabilities, unlike nodal integration methods without stabilization. As a consequence of using such a convenient linearly integrated local form, the derived meshfree and finite element numerical methods become fast and accurate, a feature of paramount importance as far as the computational efficiency of numerical methods is concerned. Three benchmark problems were analyzed with these techniques in order to assess the accuracy and efficiency of the new integrated local formulations of meshfree and finite element numerical methods. The results obtained in this work are in perfect agreement with the available analytical solutions and, furthermore, outperform the computational efficiency of other methods. Thus, the accuracy and efficiency of the local numerical methods presented in this paper make this a very reliable and robust formulation.
Originality/value
Presentation of a new local meshfree numerical method. The method, linearly integrated along the boundary of the local domain, implements an algorithm with no further integration required. The method is absolutely reliable, with remarkably accurate results, and quite robust, with extremely fast computations.
Erwin Stein and Karin Wiechmann
Abstract
First, a synopsis of the major changes in natural science, mathematics and philosophy within the 17th century shall highlight the birth of the new age of science and technology. Based on Fermat's principle of the least-time light path and Galilei's first attempt at an approximate solution of the so-called Brachistochrone problem using a quarter of the circle, Johann Bernoulli published a competition for this problem in 1696, and six solutions were submitted by the most famous scientists of the time and published in 1697, even though the variational calculus was first published only in 1744 by Euler. The analytical solution of Jakob Bernoulli in particular already contains the main idea of Euler's variational calculus, i.e. to vary only one function value at a time using a finite difference method and proceeding to the infinitesimal limit. Leibniz' geometric solution is also very remarkable, realizing geometrically a direct discrete variational method that was invented numerically much later, in the early 20th century, by Ritz and Galerkin and generalized to the finite element method by introducing test and trial functions in finite subspaces. A new finite element solution of the non-linear Brachistochrone problem concludes the paper. It is important to recognize that, besides the roots of variational calculus, the first formulations of conservation laws in mechanics and their applications also originated in the 17th century.
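Jakob Bernoulli's device, varying one function value at a time, translates directly into a naive direct method for the Brachistochrone: discretize the curve, perturb one ordinate at a time, and keep any change that lowers the travel time. The sketch below uses illustrative parameters and is far cruder than the paper's finite element solution:

```python
import math

def travel_time(xs, ys, g=9.81):
    """Descent time along a piecewise-linear path; by energy conservation
    the bead's speed at depth y is v = sqrt(2 g y), averaged per segment."""
    t = 0.0
    for i in range(len(xs) - 1):
        ds = math.hypot(xs[i+1] - xs[i], ys[i+1] - ys[i])
        v = 0.5*(math.sqrt(2*g*ys[i]) + math.sqrt(2*g*ys[i+1]))
        t += ds / v
    return t

def brachistochrone(n=40, sweeps=400, h=0.05):
    """Direct discrete variation: perturb one interior ordinate at a time
    and keep any change that lowers the travel time from (0, 0) to (1, 1)
    (y measured downward); halve the step when a sweep brings no gain."""
    xs = [i/n for i in range(n + 1)]
    ys = [i/n for i in range(n + 1)]      # start from the straight line
    for _ in range(sweeps):
        improved = False
        for i in range(1, n):
            for d in (h, -h):
                trial = ys[:i] + [ys[i] + d] + ys[i+1:]
                if trial[i] > 0 and travel_time(xs, trial) < travel_time(xs, ys):
                    ys, improved = trial, True
        if not improved:
            h /= 2                         # refine the variation
    return xs, ys, travel_time(xs, ys)

xs, ys, t_opt = brachistochrone()
t_straight = travel_time([i/40 for i in range(41)],
                         [i/40 for i in range(41)])
```

The optimized polyline bows below the straight chord toward the classical cycloid; refining the mesh and the variation step recovers the analytic solution to any accuracy, which is exactly the limit process Euler formalized.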
Ge Gao, Yaobin Li, Hui Pan, Limin Chen and Zhenyu Liu
Abstract
Purpose
The purpose of this paper is to provide an effective members-adding method for truss topology optimization in plastic design.
Design/methodology/approach
With the help of the distribution of principal stress trajectories, obtained by finite element analysis of the design domain, ineffective zones for force transmission paths can be found, namely areas whose nodes may exhibit ersatz nodal displacements. Members connected to these nodes are eliminated, and the reduced ground structure is used for optimization. By adding members in short-to-long order and properly limiting the number of members, with the most strained ones added first, large-scale truss problems under a single load case and under multiple load cases are optimized.
Findings
Inefficient members (i.e. bars that fulfil the adding criterion but make no contribution to the optimal structure) added to the ground structure in each iterative step are reduced. Fewer members are used for optimization than before; therefore, faster solution convergence and less computation time are achieved with the optimized result unchanged.
Originality/value
The proposed members-adding method can alleviate the phenomenon of ersatz nodal displacements, enhance computational efficiency and save computing resources effectively.
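The short-to-long member ordering that the scheme relies on can be sketched for a plain ground structure. This is illustrative only: the paper additionally removes nodes lying in the ineffective zones identified from the principal stress trajectories and prioritizes the most strained candidates within each batch.

```python
import itertools, math

def ground_structure_batches(nx, ny, batch=50):
    """Enumerate all potential members between the nodes of an nx-by-ny
    unit grid and yield them in short-to-long batches, the order in
    which a members-adding scheme grows the reduced ground structure."""
    nodes = [(i, j) for i in range(nx) for j in range(ny)]
    members = list(itertools.combinations(nodes, 2))
    members.sort(key=lambda m: math.dist(m[0], m[1]))  # short first
    for k in range(0, len(members), batch):
        yield members[k:k + batch]

batches = list(ground_structure_batches(3, 3, batch=10))
```

Starting from the shortest members and adding longer ones only as needed keeps the optimization problem small in early iterations, which is where the reported savings in convergence and computation time come from.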