Search results: 1 – 10 of 715
Presents a review on implementing finite element methods on supercomputers, workstations and PCs and gives main trends in hardware and software developments. An appendix included at the end of the paper presents a bibliography on the subjects retrospectively to 1985 and approximately 1,100 references are listed.
What is a supercomputer? The term ‘supercomputer’ came into use around 1970 to describe systems significantly more powerful than the other machines of their generation. The difference in power has generally been achieved by pushing the limits of cost or practicality to the point where only a minority of users are able to justify their acquisition.
“During the past few years—roughly the first half of the 1980s—a growing number of scientists, engineers, and businessmen discovered that conventional computers are too limited, or too slow, to meet their needs…This led to increasing demand for access to supercomputers, the most powerful computers in existence…”
UNTIL recently supercomputers were regarded as very specialised and expensive computing resources. But thanks to the influence of the UNIX operating system and decreasing cost of hardware, supercomputers are becoming more affordable. Availability of low cost entry level models and greater adherence to Open Systems is allowing full interoperability within mixed vendor environments.
In response to provisions in Public Law 99–383, which was passed 21 June 1986 by the 99th Congress, an inter‐agency group under the auspices of the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET) for Computer Research and Applications was formed to study the following issues: (1) the networking needs of the nation's academic and federal research computer programs, including supercomputer programs, over the next 15 years, addressing requirements in terms of volume of data, reliability of transmission, software compatibility, graphics capabilities, and transmission security; (2) the benefits and opportunities that an improved computer network would offer for electronic mail, file transfer, and remote access and communications; and (3) the networking options available for linking academic and research computers, including supercomputers, with a particular emphasis on fiber optics. Bell reports on the process and recommendations associated with the committee's work, and suggests a means for accomplishing the network objectives addressed by its report.
The purpose of this paper is to extend complex-shaped discrete element method simulations from a few thousand particles to millions of particles by using parallel computing on Department of Defense (DoD) supercomputers, and to study the mechanical response of particle assemblies composed of a large number of particles in engineering practice and laboratory tests.
A parallel algorithm is designed and implemented with advanced features such as link-blocks, border layers and migration layers, an adaptive compute-gridding technique, and message passing interface (MPI) transmission of C++ objects and pointers, for high-performance optimization; performance analyses are conducted across five orders of magnitude of simulation scale on multiple DoD supercomputers; and three full-scale simulations of sand pluviation, constrained collapse and particle-shape effect are carried out to study the mechanical response of particle assemblies.
The parallel algorithm and implementation exhibit high speedup and excellent scalability; communication time is a decreasing function of the number of compute nodes, and the optimal computational granularity for each simulation scale is given. Nearly 50 per cent of wall-clock time is spent on the rebound phenomenon at the top of the particle assembly in the dynamic simulation of sand gravitational pluviation. Numerous particles are necessary to capture the pattern and shape of the particle assembly in collapse tests; a preliminary comparison between a sphere assembly and an ellipsoid assembly indicates a significant influence of particle shape on the kinematic, kinetic and static behavior of particle assemblies.
The high-performance parallel code enables the simulation of a wide range of dynamic and static laboratory and field tests in engineering applications that involve a large number of granular and geotechnical material grains, such as the sand pluviation process, buried explosions in various soils, earth-penetrator interaction with soil, the influence of grain size, shape and gradation on packing density and shear strength, and mechanical behavior under different gravity environments such as on the Moon and Mars.