Search results

1 – 10 of over 25000
Article
Publication date: 1 August 1997

A. MacFarlane, S.E. Robertson and J.A. McCann

The progress of parallel computing in Information Retrieval (IR) is reviewed. In particular we stress the importance of the motivation for using parallel computing for text…

Abstract

The progress of parallel computing in Information Retrieval (IR) is reviewed. In particular, we stress the importance of the motivation for using parallel computing for text retrieval. We analyse parallel IR systems using a classification defined by Rasmussen and describe some parallel IR systems. We give a description of the retrieval models used in parallel information processing. We also describe areas where we believe further research is needed.

Details

Journal of Documentation, vol. 53 no. 3
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 2 March 2015

Yiwen Bian, Miao Hu and Hao Xu

The purpose of this paper is to measure the efficiencies of parallel subsystems with shared inputs/outputs. Each subsystem has not only a set of common inputs and outputs, but…

Abstract

Purpose

The purpose of this paper is to measure the efficiencies of parallel subsystems with shared inputs/outputs. Each subsystem has not only a set of common inputs and outputs, but also some dedicated inputs and outputs as well as some shared inputs and outputs. A more general data envelopment analysis (DEA) approach is proposed to deal with this efficiency evaluation issue. Based on the proposed approach, mechanisms for shared inputs/outputs distribution and efficiency decomposition among sub-units are presented.

Design/methodology/approach

To evaluate the efficiency of the parallel systems, this paper proposes a centralized DEA approach by assuming that the same input/output factor in a decision-making unit (DMU) has the same multiplier for all its sub-units. Furthermore, different proportions of shared inputs/outputs are imposed on sub-units within different DMUs in evaluating each DMU’s efficiency. The proposed approach is applied to evaluate the operational efficiencies of 18 railway firms in China.
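
To make the efficiency-evaluation idea concrete, the following is a minimal sketch of a conventional input-oriented CCR DEA model solved with scipy.optimize.linprog. It illustrates only the basic DEA building block, not the paper's centralized model with common multipliers and shared-input/output proportions across sub-units; the data and function name are hypothetical.

```python
# Minimal sketch of an input-oriented CCR DEA model (envelopment form) solved as a
# linear program. Purely illustrative: the paper's centralized model with shared
# inputs/outputs and common multipliers across sub-units is more elaborate, and
# the data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU `o`, given inputs X (n_dmu x m) and outputs Y (n_dmu x s)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)            # decision variables: [theta, lambda_1..lambda_n]
    c[0] = 1.0                     # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):             # sum_j lambda_j * x_ji <= theta * x_oi
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):             # sum_j lambda_j * y_jr >= y_or
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]                # theta* = 1 means technically efficient

# Hypothetical data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 4.0]])
Y = np.ones((4, 1))
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```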

Findings

By using the proposed DEA approach, the efficiencies of the whole DMU and its sub-units can be measured at the same time, and the optimal allocation strategy for shared inputs/outputs can also be obtained. The proposed model is more reasonable and robust than existing models for measuring the operational performance of parallel systems with shared inputs and outputs. The efficiency of the railway system in China is relatively low, and its inefficiency is largely caused by poor freight transportation performance. Great disparities among firms can be found in passenger transportation efficiency and freight transportation efficiency.

Research limitations/implications

This study develops the DEA model under the assumption of constant returns to scale, which can be directly extended to a situation with variable returns to scale.

Practical implications

In this paper, the proposed approach is a more effective way to evaluate the efficiencies of parallel systems with shared inputs/outputs. With respect to the application, to improve the overall efficiency of China’s railway system, more effort should be devoted to improving the operational performance of freight transportation. Furthermore, disparities among firms should also be considered when formulating related policies.

Originality/value

The proposed approach can evaluate the whole DMU and its sub-units at the same time. Considering simultaneously the common/dedicated/shared inputs/outputs, the proposed approach is more general than the existing approaches in the literature. In the described approach, the same type of input or output is assumed to have the same weight for all sub-units within one DMU. More importantly, the proposed model imposes different proportions of shared inputs/outputs on different DMUs’ sub-units when measuring the efficiency for each DMU.

Details

Kybernetes, vol. 44 no. 3
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 14 October 2020

Zhijian Duan and Gongnan Xie

The discontinuous Galerkin finite element method (DGFEM) is well suited to achieving high-order approximations on unstructured grids for calculating hyperbolic…

Abstract

Purpose

The discontinuous Galerkin finite element method (DGFEM) is well suited to achieving high-order approximations on unstructured grids for calculating hyperbolic conservation laws. However, it requires a significant amount of computing resources. Therefore, this paper aims to investigate how to solve the Euler equations on parallel systems and improve parallel performance.

Design/methodology/approach

Discontinuous Galerkin discretization is used for the compressible inviscid Euler equations. A multi-level domain decomposition strategy is used to partition the computational grids and ensure load balancing. The total variation diminishing (TVD) Runge–Kutta (RK) scheme coupled with a multigrid strategy is employed to further improve parallel efficiency. Moreover, the Newton Block Gauss–Seidel (GS) method is adopted to accelerate convergence and improve iteration efficiency.
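
For readers unfamiliar with the time integrator named above, here is a minimal sketch of the three-stage TVD (strong-stability-preserving) Runge–Kutta scheme of Shu and Osher, the form of TVD RK commonly paired with DG spatial discretizations. The DG residual is left abstract; the upwind advection operator shown is an illustrative stand-in, not the paper's Euler solver.

```python
# Minimal sketch of the three-stage TVD (strong-stability-preserving) Runge-Kutta
# scheme of Shu and Osher for du/dt = R(u). The residual R would normally come
# from the DG spatial discretization; the upwind advection operator below is an
# illustrative stand-in, not the paper's Euler solver.
import numpy as np

def tvd_rk3_step(u, dt, R):
    """Advance the solution u by one time step dt."""
    u1 = u + dt * R(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * R(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * R(u2))

# Illustrative residual: periodic first-order upwinding of du/dt + a du/dx = 0.
a, dx = 1.0, 0.01
R = lambda u: -a * (u - np.roll(u, 1)) / dx

x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.5) ** 2)
for _ in range(50):
    u = tvd_rk3_step(u, dt=0.5 * dx / a, R=R)
print(u.max())   # pulse has advected and diffused slightly (first-order upwinding)
```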

Findings

Numerical experiments were carried out for compressible inviscid flow problems around the NACA0012 airfoil, over the M6 wing and over the DLR-F6 configuration. The parallel speedup is close to linear. The results indicate that the present parallel algorithm can reduce computational time significantly and allocate memory reasonably, achieving high parallel efficiency and speedup, and that it is well suited to large-scale scientific computing problems on the multiple-instruction, multiple-data stream model.

Originality/value

The parallel DGFEM coupled with the TVD RK and Newton Block GS methods is presented for hyperbolic conservation laws on unstructured meshes.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 31 no. 5
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 5 April 2024

Abhishek Kumar Singh and Krishna Mohan Singh

In the present work, we focus on developing an in-house parallel meshless local Petrov-Galerkin (MLPG) code for the analysis of heat conduction in two-dimensional and…

Abstract

Purpose

In the present work, we focus on developing an in-house parallel meshless local Petrov-Galerkin (MLPG) code for the analysis of heat conduction in two-dimensional and three-dimensional regular as well as complex geometries.

Design/methodology/approach

The parallel MLPG code has been implemented using the open multi-processing (OpenMP) application programming interface (API) on shared memory multicore CPU architectures. Numerical simulations have been performed to identify the critical regions of the serial code, and an OpenMP-based parallel MLPG code has been developed by parallelizing those critical regions.

Findings

Based on performance parameters such as speed-up and parallel efficiency, the credibility of the parallelization procedure has been established. The maximum speed-up and parallel efficiency are 10.94 and 0.92, respectively, for a regular three-dimensional geometry (343,000 nodes). The results demonstrate that parallelization is most beneficial for larger problems, as parallel efficiency and speed-up increase with the number of nodes.
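
For reference, the figures quoted above follow the standard definitions of speed-up and parallel efficiency; the sketch below states them and notes that the quoted pair implies roughly 12 threads, which is an inference from the figures rather than something stated in the abstract.

```python
# Standard definitions behind the figures quoted above: speed-up S_p = T_1 / T_p
# and parallel efficiency E_p = S_p / p for p threads. The thread count computed
# here is an inference from the reported pair (10.94, 0.92), not a value stated
# in the abstract.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_threads):
    return speedup(t_serial, t_parallel) / n_threads

print(round(10.94 / 0.92))   # implied thread count: ~12
```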

Originality/value

Few attempts have been made at parallel implementation of the MLPG method for solving large-scale industrial problems. Although message-passing interface (MPI) based parallel MLPG codes have been reported in the literature, the OpenMP model has rarely been explored. To the authors’ knowledge, this work is the first development of an OpenMP-based parallel MLPG code.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 7 October 2014

Michael J. Brown, Arun Subramanian, Timothy B. Curry, Daryl J. Kor, Steven L. Moran and Thomas R. Rohleder

Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to…


Abstract

Purpose

Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity.

Design/methodology/approach

Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes in which regional anesthesia was performed either within the OR or outside the OR in parallel. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, the predicted end-of-day overtime.
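
A minimal sketch of this kind of day-level simulation follows: it compares how many cases fit into a 555-minute day when the regional block is performed inside the OR (sequential) versus outside the OR in parallel with room turnover. The duration distributions and turnover time are hypothetical placeholders, not the paper's fitted data.

```python
# Minimal day-level simulation sketch: count how many upper-extremity cases fit
# into a 555-minute OR day when the regional block is done inside the OR
# (sequential) versus outside the OR in parallel with room turnover. All duration
# distributions and the turnover time are hypothetical, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
DAY_MINUTES = 555

def mean_cases_per_day(parallel_block, n_days=5000):
    totals = []
    for _ in range(n_days):
        elapsed, cases = 0.0, 0
        while True:
            block = rng.lognormal(mean=3.0, sigma=0.3)     # ~20 min regional block
            surgery = rng.lognormal(mean=4.5, sigma=0.3)   # ~90 min surgery
            turnover = 25.0
            # In parallel mode the block overlaps turnover, so only its excess counts.
            block_in_room = max(block - turnover, 0.0) if parallel_block else block
            case_time = block_in_room + surgery + turnover
            if elapsed + case_time > DAY_MINUTES:
                break
            elapsed += case_time
            cases += 1
        totals.append(cases)
    return float(np.mean(totals))

seq = mean_cases_per_day(parallel_block=False)
par = mean_cases_per_day(parallel_block=True)
print(f"sequential {seq:.2f}, parallel {par:.2f}, gain {par - seq:.2f} cases/day")
```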

Findings

Modeling results show that parallel processing of regional anesthesia increases the average number of cases per day for all surgeons included in the study. The average increase was 0.42 surgeries per day. When it was assumed that all surgeons would perform three cases per day, the number of days going into overtime was reduced by 43 percent with parallel block placement. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon.

Research limitations/implications

Key limitations include the assumption that all cases used regional anesthesia in the comparisons. Many days may have both regional and general anesthesia. Also, as a case study, single-center research may limit generalizability.

Practical implications

Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources to do parallel anesthesia processing, efficiency and productivity can be significantly improved.

Originality/value

Simulation modeling can be an effective tool to show practice change effects at a system-wide level.

Details

International Journal of Health Care Quality Assurance, vol. 27 no. 8
Type: Research Article
ISSN: 0952-6862

Keywords

Article
Publication date: 1 August 2004

A. MacFarlane, S.E. Robertson and J.A. McCann

In this paper methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in…

Abstract

In this paper methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
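
As an illustration of the general idea of distributing passage processing across processors, the sketch below scores passages in parallel with a process pool. The scoring function is a generic BM25-style weight with invented data; it is not the exact Okapi passage algorithm evaluated in the paper.

```python
# Sketch of distributing passage scoring across worker processes. The score is a
# generic BM25-style term weight with invented data, not the exact Okapi passage
# algorithm used in the experiments.
from multiprocessing import Pool

def passage_score(args):
    """BM25-style weight for one (tf, passage_len, avg_len, idf) tuple."""
    tf, passage_len, avg_len, idf = args
    k1, b = 1.2, 0.75
    return idf * tf * (k1 + 1.0) / (tf + k1 * (1.0 - b + b * passage_len / avg_len))

if __name__ == "__main__":
    # Hypothetical per-passage statistics for one query term in one document.
    passages = [(3, 120, 100, 2.1), (1, 80, 100, 2.1), (5, 150, 100, 2.1), (0, 90, 100, 2.1)]
    with Pool(processes=4) as pool:
        scores = pool.map(passage_score, passages)
    print(max(scores))   # the best-passage score can then feed document ranking
```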

Details

Aslib Proceedings, vol. 56 no. 4
Type: Research Article
ISSN: 0001-253X

Keywords

Article
Publication date: 1 March 1998

E. Stein and M. Kreienmeyer

The boundary element method (BEM) and the finite element method (FEM) may be computationally expensive if complex problems are to be solved; thus there is the need of implementing…

Abstract

The boundary element method (BEM) and the finite element method (FEM) may be computationally expensive if complex problems are to be solved; thus there is a need to implement them on fast computer architectures, especially parallel computers. Because these methods are complementary to each other, the coupling of FEM and BEM is widely used. In this paper, the coupling of displacement-based FEM and collocation BEM and its implementation on a distributed memory system (Parsytec MultiCluster2) are described. The parallelization is performed by data partitioning, which leads to very high efficiency. As model problems, we assume linear elasticity for the boundary element method and elastoplasticity for the finite element method. The efficiency of our implementation is demonstrated by various test examples, and numerical examples show that a multiplicative Schwarz method for coupling BEM with FEM is very well suited to parallel implementation.
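
To illustrate the multiplicative Schwarz idea named above, the following minimal sketch alternates Dirichlet solves on two overlapping subdomains of a 1-D finite-difference Poisson problem. It demonstrates only the domain-decomposition iteration, not the paper's FEM/BEM coupling for elastoplasticity; the grid and subdomain split are arbitrary choices.

```python
# Minimal sketch of a multiplicative (alternating) Schwarz iteration on a 1-D
# Poisson problem, -u'' = 1 on (0, 1) with u(0) = u(1) = 0, split into two
# overlapping finite-difference subdomains. Each subdomain solve uses the newest
# values of the other subdomain on its artificial boundary. This only illustrates
# the Schwarz idea, not the paper's FEM/BEM coupling.
import numpy as np

n = 101                               # grid points on [0, 1]
h = 1.0 / (n - 1)
f = np.ones(n)                        # right-hand side; exact solution u = x(1-x)/2
u = np.zeros(n)

def solve_subdomain(u, lo, hi):
    """Dirichlet solve of -u'' = f on interior points lo..hi-1 of the global grid."""
    m = hi - lo
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = f[lo:hi].copy()
    b[0] += u[lo - 1] / h**2          # boundary data from the current global iterate
    b[-1] += u[hi] / h**2
    u[lo:hi] = np.linalg.solve(A, b)

for _ in range(20):                   # multiplicative: solves alternate and reuse new data
    solve_subdomain(u, 1, 65)         # subdomain 1: points 1..64
    solve_subdomain(u, 40, n - 1)     # subdomain 2: points 40..99 (overlap 40..64)

x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))   # error vs exact solution (tiny)
```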

Details

Engineering Computations, vol. 15 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 13 June 2016

Zahur Ullah, Will Coombs and C. Augarde

A variety of meshless methods have been developed in the last 20 years with the intention of solving practical engineering problems, but they are limited to small academic problems due to…

Abstract

Purpose

A variety of meshless methods have been developed in the last 20 years with the intention of solving practical engineering problems, but they are limited to small academic problems due to their high computational cost compared with the standard finite element method (FEM). The purpose of this paper is to develop efficient and accurate algorithms based on meshless methods for the solution of problems involving both material and geometric nonlinearities.

Design/methodology/approach

A parallel two-dimensional linear elastic code is presented for a meshless method based on maximum entropy basis functions. The two-dimensional algorithm is subsequently extended to three-dimensional adaptive nonlinear and three-dimensional parallel nonlinear adaptively coupled finite element-meshless cases. The Prandtl-Reuss constitutive model is used to model elasto-plasticity, and total Lagrangian formulations are used to model finite deformation. Furthermore, the Zienkiewicz-Zhu and Chung-Belytschko error estimation procedures are used in the FE and meshless regions of the problem domain, respectively. The message passing interface (MPI) library and the open-source packages METIS and MUMPS (MUltifrontal Massively Parallel Solver) are used for high-performance computation.
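
As a pointer to the approximation named above, the sketch below computes first-order maximum entropy basis functions in 1-D with a short Newton iteration on the Lagrange multiplier. The Gaussian prior width and node layout are illustrative assumptions; the paper works in 2-D/3-D with MPI, METIS and MUMPS handling the parallel machinery.

```python
# Sketch of first-order maximum entropy basis functions in 1-D, computed by a
# short Newton iteration on the Lagrange multiplier enforcing linear consistency.
# The Gaussian prior width (beta) and node layout are illustrative assumptions.
import numpy as np

def maxent_shape_functions(x, nodes, beta=50.0, tol=1e-12, max_iter=50):
    """Return phi_a(x): partition of unity and exact reproduction of x."""
    shifted = nodes - x
    prior = np.exp(-beta * shifted**2)           # Gaussian prior weights
    lam = 0.0
    for _ in range(max_iter):
        phi = prior * np.exp(lam * shifted)
        phi /= phi.sum()                         # partition of unity
        r = phi @ shifted                        # first-order consistency residual
        if abs(r) < tol:
            break
        jac = phi @ shifted**2 - r**2            # dr/dlam
        lam -= r / jac                           # Newton update
    return phi

nodes = np.linspace(0.0, 1.0, 11)
phi = maxent_shape_functions(0.37, nodes)
print(phi.sum(), float(phi @ nodes))             # ~1.0 and ~0.37
```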

Findings

Numerical examples are given to demonstrate the correct implementation and performance of the parallel algorithms. The agreement between the numerical and analytical results in the linear elastic example is excellent. For the nonlinear problems, load-displacement curves are compared with the reference FEM results and found to be in very good agreement. Unlike the FEM, no volumetric locking was observed with the meshless method. Furthermore, it is shown that increasing the number of processors, up to a point, improves the performance of the parallel algorithms in terms of simulation time, speedup and efficiency.

Originality/value

Problems involving both material and geometric nonlinearities are of practical importance in many engineering applications, e.g. geomechanics, metal forming and biomechanics. A family of parallel algorithms has been developed in this paper for these problems using an adaptively coupled finite element-meshless method (based on maximum entropy basis functions) for distributed memory computer architectures.

Article
Publication date: 1 November 2001

J.G. Marakis, J. Chamiço, G. Brenner and F. Durst

Notes that, in a full‐scale application of the Monte Carlo method for combined heat transfer analysis, problems usually arise from the large computing requirements. Here the…

Abstract

Notes that, in a full-scale application of the Monte Carlo method for combined heat transfer analysis, problems usually arise from the large computing requirements. Here the method used to overcome this difficulty is the parallel execution of the Monte Carlo method in a distributed computing environment. Addresses the problem of determining the temperature field formed under the assumption of radiative equilibrium in an enclosure idealizing an industrial furnace. The medium contained in this enclosure absorbs, emits and scatters thermal radiation anisotropically. Discusses two topics in detail: first, the efficiency of the parallelization of the developed code, and second, the influence of the scattering behavior of the medium. The parallelization method adopted for the first topic is the decomposition of the statistical sample and its subsequent distribution among the available processors. The measured high efficiencies showed that this method is particularly suited to the target architecture of this study, which is a dedicated network of workstations supporting the message passing paradigm. For the second topic, the results showed that taking isotropic scattering into account, as opposed to neglecting scattering, has a pronounced impact on the temperature distribution inside the enclosure. In contrast, considering the sharply forward scattering that is characteristic of real combustion particles leaves the predicted temperature field almost indistinguishable from the absorbing/emitting case.
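
The sample-decomposition parallelization described above can be sketched as follows: the total number of Monte Carlo bundles is split evenly across worker processes and the partial tallies are combined at the end. The physics here is a trivial purely absorbing slab with a known answer, not the paper's anisotropically scattering furnace problem; slab thickness and bundle counts are arbitrary.

```python
# Sketch of sample decomposition: the total number of Monte Carlo photon bundles
# is split evenly across processes and the partial tallies are summed afterwards.
# The physics is a trivial purely absorbing slab (absorbed fraction = 1 - exp(-tau)),
# far simpler than the paper's anisotropically scattering furnace enclosure.
import numpy as np
from multiprocessing import Pool

TAU = 2.0                                         # optical thickness of the slab

def absorbed_count(args):
    n_bundles, seed = args
    rng = np.random.default_rng(seed)
    path = -np.log(1.0 - rng.random(n_bundles))   # sampled optical free paths
    return int(np.sum(path < TAU))                # bundles absorbed before escaping

if __name__ == "__main__":
    n_total, n_procs = 4_000_000, 4
    chunks = [(n_total // n_procs, seed) for seed in range(n_procs)]
    with Pool(n_procs) as pool:
        absorbed = sum(pool.map(absorbed_count, chunks))
    print(absorbed / n_total, 1.0 - np.exp(-TAU))   # Monte Carlo vs exact answer
```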

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 11 no. 7
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 18 April 2017

Lin Deng, Junjie Liang, Yun Zhang, Huamin Zhou and Zhigao Huang

The lattice Boltzmann method (LBM) has achieved great success in computational fluid dynamics, and this paper aims to establish an efficient simulation model for the polymer injection…


Abstract

Purpose

The lattice Boltzmann method (LBM) has achieved great success in computational fluid dynamics, and this paper aims to establish an efficient simulation model for the polymer injection molding process using the LBM. The study aims to validate the capacity of the model to accurately predict the injection molding process and to demonstrate its superior numerical efficiency in comparison with the current model based on the finite volume method (FVM).

Design/methodology/approach

The study adopts the stable multi-relaxation-time scheme of the LBM to model the non-Newtonian polymer flow during the filling process. The volume of fluid method is naturally integrated to track the movement of the melt front. Additionally, a novel fractional-step thermal LBM is used to solve the convection-diffusion equation governing the temperature field evolution, which is of high Peclet number. Through various simulation cases, the accuracy and stability of the present model are validated, and its higher numerical efficiency is verified in comparison with the current FVM-based model.
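
For orientation, the sketch below shows the basic collide-and-stream structure of a lattice Boltzmann solver, using a single-relaxation-time (BGK) D1Q3 scheme for 1-D advection-diffusion of a temperature field. It is only a structural illustration under assumed lattice parameters; the paper's model uses a multi-relaxation-time scheme with volume-of-fluid front tracking and a fractional-step thermal LBM for non-Newtonian flow.

```python
# Structural sketch of a lattice Boltzmann solver: a single-relaxation-time (BGK)
# D1Q3 scheme for 1-D advection-diffusion of a temperature field, showing the
# collide-and-stream loop. The paper's model is a multi-relaxation-time scheme
# with VOF front tracking and a fractional-step thermal LBM; this is only an
# illustration of the method's basic structure.
import numpy as np

nx, tau, u = 200, 0.8, 0.05               # lattice size, relaxation time, advection speed
w = np.array([2/3, 1/6, 1/6])             # D1Q3 weights for velocities {0, +1, -1}
c = np.array([0, 1, -1])
cs2 = 1.0 / 3.0                           # lattice sound speed squared
alpha = cs2 * (tau - 0.5)                 # resulting diffusivity (lattice units)

x = np.arange(nx)
T = np.exp(-0.01 * (x - 50.0) ** 2)       # initial temperature pulse
f = w[:, None] * T[None, :] * (1.0 + c[:, None] * u / cs2)   # equilibrium start

for _ in range(500):
    T = f.sum(axis=0)                     # zeroth moment = temperature
    feq = w[:, None] * T[None, :] * (1.0 + c[:, None] * u / cs2)
    f += (feq - f) / tau                  # BGK collision
    for i in range(3):                    # periodic streaming along each velocity
        f[i] = np.roll(f[i], c[i])

T = f.sum(axis=0)
print(T.sum(), int(x[np.argmax(T)]))      # total "heat" conserved; pulse advected downstream
```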

Findings

The paper provides an efficient alternative to the current models for the simulation of polymer injection molding. Through the test cases, the model presented in this paper accurately predicts the filling process and successfully reproduces several characteristic phenomena of injection molding. Moreover, compared with the popular FVM-based models, the present model shows superior numerical efficiency and is better suited to the future trend of parallel computing.

Research limitations/implications

Limited by the authors’ hardware resources, the programs implementing the present model and the FVM-based model were run in parallel on up to 12 threads, which is adequate for most simulations of polymer injection molding. Through these tests, the present model demonstrated better numerical efficiency, and researchers are encouraged to investigate its parallel performance in larger-scale parallel computing with more threads.

Originality/value

To the authors’ knowledge, this is the first time the lattice Boltzmann method has been applied to the simulation of injection molding, and the proposed model is clearly more numerically efficient than the current popular FVM-based models.

Details

Engineering Computations, vol. 34 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords
