Search results

1–10 of over 27,000
Open Access
Article
Publication date: 7 May 2019

Yanan Wang, Jianqiang Li, Sun Hongbo, Yuan Li, Faheem Akhtar and Azhar Imran

Abstract

Purpose

Simulation is a well-known technique that uses computers to imitate the operations of various kinds of real-world facilities or processes. The facility or process of interest is usually called a system, and to study it scientifically, we often have to make a set of assumptions about how it works. These assumptions, which usually take the form of mathematical or logical relationships, constitute a model that is used to gain some understanding of how the corresponding system behaves, and the quality of that understanding depends essentially on the credibility of the assumptions or models; establishing this credibility is the subject of VV&A (verification, validation and accreditation). The main purpose of this paper is to present an in-depth theoretical review and analysis of the application of VV&A in large-scale simulations.
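
The validation step of VV&A can be pictured as a quantitative comparison between simulation output and reference observations. The minimal sketch below is not from the paper; the data, metric and tolerance are illustrative assumptions only.

```python
import statistics

def validate(sim_outputs, reference, rel_tol=0.05):
    """Hypothetical validation check: accept the model only if the mean
    simulated quantity is within rel_tol of the observed reference mean."""
    sim_mean = statistics.mean(sim_outputs)
    ref_mean = statistics.mean(reference)
    discrepancy = abs(sim_mean - ref_mean) / abs(ref_mean)
    return discrepancy <= rel_tol, discrepancy

# Example: simulated queue lengths vs. measured data from the real facility.
accepted, err = validate([12.1, 11.8, 12.5], [12.0, 12.3, 11.9], rel_tol=0.05)
print(f"accepted={accepted}, relative error={err:.3f}")
```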

Design/methodology/approach

After summarizing related VV&A research, the relevant standards, frameworks, techniques, methods and tools are discussed with respect to the characteristics of large-scale simulations (such as crowd network simulations).

Findings

The contributions of this paper will be useful to both academics and practitioners formulating VV&A in large-scale simulations (such as crowd network simulations).

Originality/value

This paper helps researchers by providing recommendations for formulating VV&A in large-scale simulations (such as crowd network simulations).

Details

International Journal of Crowd Science, vol. 3 no. 1
Type: Research Article
ISSN: 2398-7294

Article
Publication date: 16 April 2018

Beichuan Yan and Richard Regueiro

Abstract

Purpose

The purpose of this paper is to extend complex-shaped discrete element method simulations from a few thousand particles to millions of particles by using parallel computing on Department of Defense (DoD) supercomputers and to study the mechanical response of particle assemblies composed of a large number of particles in engineering practice and laboratory tests.

Design/methodology/approach

A parallel algorithm is designed and implemented with advanced features such as link-block, border layer and migration layer, an adaptive compute gridding technique and message passing interface (MPI) transmission of C++ objects and pointers for high-performance optimization; performance analyses are conducted across five orders of magnitude of simulation scale on multiple DoD supercomputers; and three full-scale simulations of sand pluviation, constrained collapse and particle shape effect are carried out to study the mechanical response of particle assemblies.
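
The abstract names link-block, border-layer and migration-layer features but does not spell out their implementation. The sketch below only illustrates the generic idea behind a border (halo) layer exchange in a 1D domain decomposition using mpi4py; the decomposition, names and data layout are assumptions, not the authors' code.

```python
# Minimal 1D domain-decomposition sketch with mpi4py: each rank owns a slab of
# the domain and exchanges its "border layer" particles with both neighbours,
# a much reduced analogue of the border/migration layers named in the abstract.
# Run with, e.g.: mpiexec -n 4 python border_exchange.py
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

domain_len = 100.0
slab = domain_len / size
lo, hi = rank * slab, (rank + 1) * slab
cutoff = 1.0  # interaction range; it sets the border-layer width

# Locally owned particles: (x position, particle id), drawn inside this slab.
particles = [(random.uniform(lo, hi), 1000 * rank + i) for i in range(50)]

# Border layers destined for the left and right neighbours.
left_border = [p for p in particles if p[0] < lo + cutoff]
right_border = [p for p in particles if p[0] > hi - cutoff]

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# mpi4py pickles Python objects, so whole particle lists can be exchanged.
ghosts_from_right = comm.sendrecv(left_border, dest=left, source=right)
ghosts_from_left = comm.sendrecv(right_border, dest=right, source=left)
ghosts = (ghosts_from_right or []) + (ghosts_from_left or [])

print(f"rank {rank}: {len(particles)} owned particles, {len(ghosts)} ghosts")
```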

Findings

The parallel algorithm and implementation exhibit high speedup and excellent scalability; communication time is a decreasing function of the number of compute nodes, and the optimal computational granularity for each simulation scale is given. Nearly 50 per cent of wall-clock time is spent on the rebound phenomenon at the top of the particle assembly in the dynamic simulation of sand gravitational pluviation. Numerous particles are necessary to capture the pattern and shape of the particle assembly in collapse tests; a preliminary comparison between sphere and ellipsoid assemblies indicates a significant influence of particle shape on the kinematic, kinetic and static behavior of particle assemblies.
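
Speedup and parallel efficiency figures of this kind are conventionally defined as S_p = T_1 / T_p and E_p = S_p / p. A small sketch with made-up timings (not the paper's measurements):

```python
def speedup_and_efficiency(t_serial, timings):
    """Compute parallel speedup S_p = T_1 / T_p and efficiency E_p = S_p / p
    from wall-clock timings measured at different node counts."""
    rows = []
    for p, t_p in sorted(timings.items()):
        s_p = t_serial / t_p
        rows.append((p, t_p, s_p, s_p / p))
    return rows

# Illustrative (made-up) timings in seconds for a fixed-size problem.
for p, t, s, e in speedup_and_efficiency(3600.0, {8: 520.0, 64: 75.0, 512: 11.5}):
    print(f"nodes={p:4d}  time={t:7.1f}s  speedup={s:6.1f}  efficiency={e:.2f}")
```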

Originality/value

The high-performance parallel code enables the simulation of a wide range of dynamic and static laboratory and field tests in engineering applications that involve a large number of granular and geotechnical material grains, such as the sand pluviation process, buried explosions in various soils, earth penetrator interaction with soil, the influence of grain size, shape and gradation on packing density and shear strength, and mechanical behavior under different gravity environments such as on the Moon and Mars.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 5 October 2015

Ming Xia

Abstract

Purpose

The purpose of this paper is to present an upscale theory of thermal-mechanical coupling particle simulation for non-isothermal problems in a two-dimensional quasi-static system, under which a small length-scale particle model can exactly reproduce the same mechanical and thermal results as a large length-scale one.

Design/methodology/approach

The objective is achieved by extending the upscale theory of particle simulation for two-dimensional quasi-static problems from an isothermal system to a non-isothermal one.

Findings

Five similarity criteria, namely geometry, material (mechanical and thermal) properties, gravity acceleration, (mechanical and thermal) time steps, and thermal initial and boundary conditions (Dirichlet/Neumann), are proposed, under which a small length-scale particle model can exactly reproduce both the mechanical and thermal behavior of a large length-scale model for non-isothermal problems in a two-dimensional quasi-static system. Furthermore, to test the proposed upscale theory, two typical examples subjected to different thermal boundary conditions are simulated using two particle models of different length scales.
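
The abstract lists the five similarity criteria but not their quantitative form. The sketch below only illustrates the kind of parameter scaling such an upscale theory implies when mapping a metre-scale particle model to a kilometre-scale one; the quantity names and scaling exponents are placeholders, not the criteria derived in the paper.

```python
# Hypothetical upscaling check between a small (metre-scale) and a large
# (kilometre-scale) particle model. The exponents are illustrative placeholders;
# the actual similarity criteria (geometry, mechanical/thermal properties,
# gravity, time steps, thermal initial/boundary conditions) are derived in the paper.
def upscale(model, length_ratio, exponents):
    """Scale every quantity of `model` by length_ratio**exponent."""
    return {name: value * length_ratio ** exponents[name]
            for name, value in model.items()}

small_model = {"particle_radius": 0.01, "time_step": 1e-4, "gravity": 9.8}
exponents   = {"particle_radius": 1,    "time_step": 1,    "gravity": 0}  # placeholders

large_model = upscale(small_model, length_ratio=1000.0, exponents=exponents)
print(large_model)  # kilometre-scale counterpart under these (assumed) rules
```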

Originality/value

The paper provides important theoretical guidance for modeling thermal-mechanical coupled problems at both the engineering length scale (i.e. the meter scale) and the geological length scale (i.e. the kilometer scale) directly with the particle simulation method. The related simulation results from two typical examples of significantly different length scales (i.e. a meter scale and a kilometer scale) demonstrate the usefulness and correctness of the proposed upscale theory for simulating non-isothermal problems in a two-dimensional quasi-static system.

Details

Engineering Computations, vol. 32 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 June 2003

Jaroslav Mackerle

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.
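
As a small illustration of two of the surveyed topics, domain partitioning and load balancing, the sketch below cuts a 1D sequence of element weights into contiguous chunks of roughly equal cost; it is a generic textbook scheme, not a method taken from any particular reference in the bibliography.

```python
def partition_by_weight(weights, n_parts):
    """Greedy 1D partitioning: split a sequence of element weights into
    n_parts contiguous chunks of roughly equal total weight."""
    total = sum(weights)
    target = total / n_parts
    parts, current, acc = [], [], 0.0
    for i, w in enumerate(weights):
        current.append(i)
        acc += w
        if acc >= target and len(parts) < n_parts - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

# Elements near a refined region are more expensive (higher weight).
weights = [1, 1, 1, 4, 4, 4, 4, 1, 1, 1, 1, 1]
print(partition_by_weight(weights, 3))  # three chunks of comparable total cost
```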

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 April 2014

Weiwei Zhang, Xianlong Jin and Zhihao Yang

Abstract

Purpose

The great difference in scale between the integral tunnel and its structural details makes it impossible to numerically model and analyze the global and local seismic behavior of large-scale shield tunnels using a unified spatial scale, even with the help of supercomputers. The paper aims to present a combined equivalent & multi-scale simulation method, by which the tunnel's major mechanical properties under seismic loads can be represented by the equivalent model, while the seismic responses of the details of interest can be studied efficiently by the coupled multi-scale model.

Design/methodology/approach

The nominal orthotropic material constants of the equivalent tunnel model are inversely determined by fitting the modal characteristics of the equivalent model to those of the corresponding segmental lining model. The critical sections are selected by comprehensively analyzing the integral compression/extension and bending loads in the equivalent lining under seismic shaking, and the coupled multi-scale model containing the details of interest is solved by the mixed time explicit integration algorithm.
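
The inverse determination of equivalent material constants can be pictured as an optimization that matches modal characteristics. The toy sketch below (a two-degree-of-freedom system and a least-squares fit, not the authors' procedure or tunnel model) only illustrates the idea.

```python
import numpy as np
from scipy.optimize import least_squares

def natural_frequencies(k1, k2, m1=1.0, m2=1.0):
    """Natural frequencies (rad/s) of a 2-DOF spring-mass chain:
    wall--k1--m1--k2--m2 (free end)."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag([m1, m2])
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))  # eigenvalues are omega^2
    return np.sort(np.sqrt(np.real(eigvals)))

# "Measured" modal frequencies standing in for the detailed segmental model.
target = natural_frequencies(2.0e6, 1.5e6)

# Fit the equivalent model's stiffnesses so its modes match the target modes.
res = least_squares(lambda p: natural_frequencies(*p) - target,
                    x0=[1.0e6, 1.0e6], bounds=(1.0, np.inf))
print("fitted equivalent stiffnesses:", res.x)
```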

Findings

The combined equivalent & multi-scale simulation method is an effective and efficient way to perform seismic analyses of large-scale tunnels. The response of each flexible joint is related to its polar location on the lining ring, and the mixed time integration method can speed up the calculation of hybrid FE models with great differences in element sizes.

Originality/value

The orthotropic equivalent assumption is, to the best of the authors' knowledge, used for the first time in the 3D simulation of the shield tunnel lining, representing the rigidity discrepancies caused by its structural properties.

Details

Engineering Computations, vol. 31 no. 3
Type: Research Article
ISSN: 0264-4401

Open Access
Article
Publication date: 6 June 2018

Sun Hongbo and Mi Zhang

Abstract

Purpose

As a main mode of the modern service industry and the future economic society, research on crowd networks can greatly facilitate the governance of the economy and society, making it more efficient, humane and sustainable while avoiding disorder. However, because most results cannot be observed in the real world, research on crowd networks cannot follow a traditional path, and simulation is the main means of carrying out related studies. Compared with other large-scale interactive simulations, simulation of crowd networks faces the challenges of dynamics, diversification and massive numbers of participants. The high level architecture (HLA), the most widely accepted standard, has been used extensively in large-scale simulations, but when it comes to crowd networks, HLA has shortcomings such as fixed federations, limited scale and agreements that sit outside the software system.

Design/methodology/approach

This paper proposes a novel reflective memory-based framework for crowd network simulations. The proposed framework adopts a two-level federation-based architecture, which separates simulation-related environments into physical and logical aspects to enhance the flexibility of simulations. A simulation definition is introduced in this architecture to resolve the problem of outside agreements, and a shared resource pool (built on reflective memory) is used to address the problems of systemic emergence and scale.
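
Reflective memory is a hardware technology, but its role here is a pool of state visible to every federate. The sketch below mimics that role in software with Python's multiprocessing.shared_memory; it is an analogy for illustration, not the framework's implementation.

```python
# Software analogue of the reflective-memory resource pool: a block of shared
# state that every federate (process) can read and update without message
# passing. Ordinary shared memory stands in for the reflective memory hardware.
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np

def federate(pool_name, index, value):
    """An application-level federate attaches to the pool and publishes its state."""
    shm = SharedMemory(name=pool_name)
    pool = np.ndarray((8,), dtype=np.float64, buffer=shm.buf)
    pool[index] = value          # immediately visible to all other federates
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=8 * np.dtype(np.float64).itemsize)
    pool = np.ndarray((8,), dtype=np.float64, buffer=shm.buf)
    pool[:] = 0.0

    procs = [Process(target=federate, args=(shm.name, i, 1.5 * i)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print("shared pool after updates:", pool[:4])
    shm.close()
    shm.unlink()
```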

Findings

With reference to HLA, this paper proposes a novel reflective memory-based framework for crowd network simulations. The proposed framework adopts a two-level federation-based architecture, with a system-level simulation (system federation) and application-level simulations (application federations), which separates simulation-related environments into physical and logical aspects to enhance the flexibility of simulations. A simulation definition is introduced in this architecture to resolve the problem of outside agreements, and a shared resource pool (built on reflective memory) is used to address the problems of systemic emergence and scale.

Originality/value

Simulation syntax and semantics are settled under this framework by templates, especially interface templates. Because simulations are separated into two-level federations, the physical and logical simulation environments are considered separately and the definition of a simulation execution is flexible: when developing new simulations, recompilation is not necessary, which yields much greater reusability. Because reflective memory is adopted as the shared memory within a given simulation execution, the population can be perceived by all federates, which greatly enhances the scalability of this kind of simulation, and communication efficiency and capacity are greatly improved by the shared memory-based framework.

Details

International Journal of Crowd Science, vol. 2 no. 1
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 28 April 2020

Jialin Zou, Kun Wang and Hongbo Sun

Abstract

Purpose

Crowd network systems have been deemed a promising mode of the modern service industry and the future economic society, and taking the crowd network as the research object and exploring its operating mechanisms and laws is of great significance for realizing effective government governance and rapid economic development while avoiding social chaos and mutation. Because a crowd network is a large-scale, dynamic and diversified online deep interconnection, most of its results cannot be observed in the real world and research cannot be carried out in the traditional way, so simulation is of great importance for advancing related studies. To solve the above problems, this paper aims to propose a simulation architecture based on the characteristics of crowd networks and to verify the feasibility of this architecture through a simulation example.

Design/methodology/approach

By deeply analyzing existing large-scale simulation architectures, this paper adopts a data-driven approach and proposes a novel reflective memory-based architecture for crowd network simulations. The architecture is analyzed from three aspects: the implementation framework, the functional architecture and the implementation architecture. The proposed architecture adopts a general structure to decouple related work in a harmonious way and obtains support for reflective storage by connecting different devices via reflective memory cards. Several toolkits for system implementation are designed and connected by data-driven files (DDF), and these XML files constitute a persistent storage layer. To improve the credibility of simulations, VV&A (verification, validation and accreditation) is introduced into the architecture to verify the accuracy of simulation system executions.
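
The schema of the data-driven files is not given in the abstract. The sketch below assumes a hypothetical minimal DDF layout to show how plain XML can drive a simulation run and remain readable without special software; the element and attribute names are illustrative only.

```python
# Hypothetical data-driven file (DDF): the element and attribute names below
# are illustrative assumptions, not the schema used in the paper.
import xml.etree.ElementTree as ET

DDF = """
<simulation name="crowd-demo" steps="1000">
  <federate id="market"    model="agents/market.py"/>
  <federate id="logistics" model="agents/logistics.py"/>
  <vva check="state-consistency" tolerance="0.01"/>
</simulation>
"""

root = ET.fromstring(DDF)
print("run:", root.get("name"), "for", root.get("steps"), "steps")
for fed in root.findall("federate"):
    print("load federate", fed.get("id"), "from", fed.get("model"))
vva = root.find("vva")
print("VV&A check:", vva.get("check"), "tolerance:", vva.get("tolerance"))
```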

Findings

The implementation framework introduces the scenes, methods and toolkits involved in constructing the whole simulation architecture. The functional architecture adopts a general structure to decouple related work in a harmonious way. In the implementation architecture, several toolkits for system implementation are designed and connected by DDF, and these XML files constitute a persistent storage layer. Crowd network simulations obtain the support of reflective memory by connecting the reflective memory cards on different devices and connect the interfaces of the relevant simulation software to complete the corresponding function calls. Meanwhile, to improve the credibility of simulations, VV&A is introduced into the architecture to verify the accuracy of simulation system executions.

Originality/value

This paper proposes a novel reflective memory-based architecture for crowd network simulations. Reflective memory is adopted as the shared memory within a given simulation execution, and communication efficiency and capacity are greatly improved by this shared memory-based architecture. The paper also adopts a data-driven approach: the architecture relies mainly on XML files to drive the entire simulation process, and these XML files are highly readable and do not require special software to read.

Details

International Journal of Crowd Science, vol. 4 no. 2
Type: Research Article
ISSN: 2398-7294

Book part
Publication date: 8 November 2010

Marcel A.L.M. van Assen

Abstract

The present study increases our understanding of strong power in exchange networks by examining its incidence in complex networks for the first time and relating this incidence to characteristics of these networks. A theoretical analysis based on network exchange theory (e.g., Willer, 1999) suggests two network characteristics that predict strong power: actors with only one potential exchange partner, and the absence of triangles, that is, situations in which one's potential exchange partners are not each other's partners. Different large-scale structures such as trees, small worlds, buyer–seller, uniform, and scale-free networks are shown to differ in these two characteristics and are therefore predicted to differ with respect to the incidence of strong power. The theoretical results and those obtained by simulating networks up to size 144 show that the incidence of strong power mainly depends on the density of the network. At high density, strong power is observed only in buyer–seller networks, whereas at low density strong power is frequent but depends on the large-scale structure and the two aforementioned network characteristics.
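
Both predictors can be read directly off a network: the number of actors with exactly one potential exchange partner and whether any triangles are present. A small illustrative sketch with networkx (not the study's simulation code):

```python
import networkx as nx

def strong_power_indicators(G):
    """Return the two characteristics the analysis links to strong power:
    actors with a single potential exchange partner, and presence of triangles."""
    degree_one = [n for n, d in G.degree() if d == 1]
    has_triangles = sum(nx.triangles(G).values()) > 0
    return degree_one, has_triangles

# A line (tree) network: low density, degree-one actors, no triangles.
line = nx.path_graph(5)
print(strong_power_indicators(line))    # ([0, 4], False)

# A dense network: no degree-one actors and many triangles.
dense = nx.complete_graph(5)
print(strong_power_indicators(dense))   # ([], True)
```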

Details

Advances in Group Processes
Type: Book
ISBN: 978-0-85724-329-4

Article
Publication date: 1 February 1996

Jaroslav Mackerle

Abstract

Presents a review of implementing finite element methods on supercomputers, workstations and PCs and outlines the main trends in hardware and software development. An appendix at the end of the paper presents a bibliography on these subjects going back to 1985; approximately 1,100 references are listed.

Details

Engineering Computations, vol. 13 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 7 November 2016

Diogo Tenório Cintra, Ramiro Brito Willmersdorf, Paulo Roberto Maciel Lyra and William Wagner Matos Lira

Abstract

Purpose

The purpose of this paper is to present a methodology for parallel simulation that employs the discrete element method (DEM) and improves the cache performance using Hilbert space filling curves (HSFC).

Design/methodology/approach

The methodology is well suited to large-scale engineering simulations and considers modelling restrictions due to memory limitations related to the problem size. An algorithm based on mapping indexes, which does not use excessive additional memory, is adopted to enable the contact search procedure for highly scattered domains. The parallel solution strategy uses the recursive coordinate bisection method in the dynamic load-balancing procedure. The proposed memory access control aims to improve the data locality of a dynamic set of particles. The numerical simulations presented here contain up to 7.8 million particles and consider a visco-elastic contact model and a rolling friction assumption.
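
The abstract does not detail the HSFC scheme, so the sketch below only shows the underlying idea in 2D (the DEM code itself is three-dimensional): particles sorted by their Hilbert index become neighbours in memory when they are neighbours in space, which is what improves cache reuse. The grid size and cell width are illustrative assumptions.

```python
def hilbert_index(n, x, y):
    """Map cell coordinates (x, y) on an n-by-n grid (n a power of two) to the
    1D index along the Hilbert space-filling curve (classic iterative form)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Reorder particles so that spatial neighbours become memory neighbours.
n, cell = 1024, 1.0                      # illustrative 1024 x 1024 background grid
particles = [(3.2, 40.5), (900.0, 12.0), (3.9, 41.1), (4.1, 40.0)]
particles.sort(key=lambda p: hilbert_index(n, int(p[0] / cell), int(p[1] / cell)))
print(particles)                         # the three clustered particles are contiguous
```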

Findings

A real landslide is adopted as a reference to evaluate the numerical approach. Three-dimensional simulations are compared in terms of the deposition pattern of the Shum Wan Road landslide. The results show that the methodology permits the simulation of such models with good control of load balancing and memory access. The improvement in cache performance significantly reduces the processing time for large-scale models.

Originality/value

The proposed approach allows the application of DEM in several practical engineering problems of large scale. It also introduces the use of HSFC in the optimization of memory access for DEM simulations.

Details

Engineering Computations, vol. 33 no. 8
Type: Research Article
ISSN: 0264-4401
