Search results

11 – 20 of over 9000
Article
Publication date: 1 April 1980

John Whitehead

Abstract

The ‘Office of the Future’, ‘Office Technology’, ‘Word Processing’, ‘Electronic Mail’, ‘Electronic Communications’, ‘Convergence’, ‘Information Management’. These are all terms included in the current list of buzz words used to describe current activities in the office technology area. Open the pages of almost any journal or periodical today and you will probably find an article or some reference to one or more of the above subjects. Long, detailed and highly technical theses are appearing on new techniques to automate and revolutionize the office environment. Facts and figures are quoted ad nauseam on the high current cost of writing a letter, filing letters, memos, reports and documents, trying to communicate with someone by telephone or other telecommunication means and, most significant of all, the high cost of people undertaking these never‐ending tasks. The high level of investment in factories and plants and the ever‐increasing fight to improve productivity by automating the dull, routine jobs are usually quoted and compared with the extremely low investment in improving and automating the equally tedious routine jobs in the office environment; the investment in the factory is quoted as being ten times greater per employee than in the office. This, however, is changing rapidly and investment on a large scale is already taking place in many areas as present‐day inflation bites hard, forcing many companies and organizations to take a much closer look at their office operations.

Details

Journal of Documentation, vol. 36 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 11 September 2009

Petar Ivanov and Kostadin Brandisky

Abstract

Purpose

The purpose of this paper is to present a parallel implementation of an evolution strategy (ES) algorithm for optimization of electromagnetic devices. It is intended for multi‐core processors and for optimization problems that have objective function representing a numerical simulation of electromagnetic devices. The speed‐up of the optimization is evaluated as a function of the number of processor cores used.

Design/methodology/approach

Two parallelization approaches are implemented in the developed program: multithreaded programming and OpenMP. Their advantages and drawbacks are discussed. The program is tested on two examples of electromagnetic device optimization.
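
The core of both approaches described above is that offspring fitness evaluations are independent, so they can run concurrently, one per core. A minimal Python sketch of that idea follows (illustrative only; the paper's program uses C-level threads and OpenMP, and all names below are hypothetical). In CPython, real speed-up requires the objective to release the GIL, e.g. a native numerical simulation, which matches the paper's setting of one expensive device simulation per evaluation.

```python
# Sketch: parallel fitness evaluation inside a (mu + lambda) evolution strategy.
# objective() stands in for an electromagnetic device simulation.
from concurrent.futures import ThreadPoolExecutor
import random

def objective(x):
    # Stand-in for an expensive simulation (shifted sphere function).
    return sum((xi - 1.0) ** 2 for xi in x)

def es_generation(parents, lam=8, sigma=0.1, workers=4):
    """One (mu + lambda) ES generation with concurrent fitness evaluation."""
    mu = len(parents)
    offspring = [[xi + random.gauss(0.0, sigma) for xi in random.choice(parents)]
                 for _ in range(lam)]
    pool = parents + offspring
    with ThreadPoolExecutor(max_workers=workers) as ex:
        fitness = list(ex.map(objective, pool))       # the parallel part
    ranked = sorted(zip(fitness, pool), key=lambda t: t[0])
    return [x for _, x in ranked[:mu]]                # elitist selection

random.seed(0)
population = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(4)]
start_best = min(objective(x) for x in population)
for _ in range(40):
    population = es_generation(population)
final_best = min(objective(x) for x in population)
```

Because selection is elitist over parents plus offspring, the best fitness never worsens from one generation to the next.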

Findings

Using the developed parallel ES algorithm on a quad‐core processor, the optimization time can be reduced by a factor of 2.4‐3, rather than the expected factor of four. The shortfall is due to system processes and programs that occupy some of the cores.

Originality/value

A new parallel ES optimization algorithm has been developed and investigated. The paper could be useful for researchers aiming to diminish the optimization time by using parallel evolution optimization on multi‐core processors.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 28 no. 5
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 February 2006

Ian M. Smith and Lee Margetts

Abstract

Purpose

To investigate the cause of a well‐known phenomenon associated with a range of parallel iterative solvers – the variability in the number of iterations required to achieve convergence.

Design/methodology/approach

The conclusions are based on extensive experiments undertaken using parallel computers. Recently published works are also used to provide additional examples of variability in iteration count.

Findings

The variability of iteration counts experienced by parallelised, element‐by‐element iterative solvers is caused by numerical precision and roundoff.
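
The mechanism behind this finding is that floating-point addition is not associative: distributing a dot product across processors changes the order of the additions, so the reduced value, and hence the solver's convergence test, differs slightly between runs and partitionings. A minimal illustration (a sequential simulation of the parallel reduction, not the authors' code):

```python
# Simulate how a parallel reduction changes summation order: each "processor"
# forms a partial sum over its chunk, then the partials are combined.
import random

random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

def chunked_sum(values, nchunks):
    """Sum as an nchunks-way parallel reduction would."""
    n = len(values)
    bounds = [n * i // nchunks for i in range(nchunks + 1)]
    partials = [sum(values[b:e]) for b, e in zip(bounds, bounds[1:])]
    return sum(partials)

serial = chunked_sum(x, 1)
for p in (2, 4, 8, 16):
    # Differences are tiny but typically nonzero - enough to shift an
    # iterative solver's stopping test by an iteration or two.
    print(p, chunked_sum(x, p) - serial)
```

The discrepancies are on the order of machine roundoff, which is why the paper finds them harmless in practice yet sufficient to make iteration counts vary.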

Research limitations/implications

A theoretical examination of the phenomenon may bring to light a methodology in which the iteration count could be limited to the lower end of the variable range – thus reducing solution times.

Practical implications

The authors believe that the variability in iteration count described for element‐by‐element methods presents no real difficulty to the engineering analyst.

Originality/value

The paper gives a detailed account of the phenomenon and is useful both to developers of parallel iterative solvers and to the analysts who use them in practice.

Details

Engineering Computations, vol. 23 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 September 2007

Michael Georg Grasser

Abstract

Purpose

Embedded technologies are one of the fastest‐growing sectors in information technology today, and they remain an open field with many business opportunities. Hardly any new product reaches the market without embedded‐system components any more. However, the main technical challenges include design and integration, as well as providing the necessary degree of security in an embedded system. This paper aims to focus on a new processor architecture introduced to address security issues.

Design/methodology/approach

In the short term, the main idea of this paper is the implementation of a method that improves code security through measures in hardware which can be transparent to software developers. A processor core extension was developed that provides improved protection against software vulnerabilities and passively improves the security of target systems. The architecture executes bound checking directly in hardware without performance loss, whereas checking in software would make any application intolerably slow.

Findings

Simulation results demonstrated that the proposed design offers higher performance and security when compared with other solutions. For the implementation of the Secure CPU, the SPARC V8‐based LEON 2 processor from Gaisler Research was used. The processor core was adapted, extended, and finally synthesised for a GR‐XC3S‐1500 board.

Originality/value

Numerically, most systems run on dedicated hardware rather than on high‐performance general‐purpose processors, so a market certainly exists even for new hardware to be used in real applications. Thus, the experience from the related project work can lead to valuable and marketable results for businesses and academics.

Details

International Journal of Web Information Systems, vol. 3 no. 1/2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 February 1987

Alex Shekhel and Eva Freeman

Abstract

A parallel‐processor computer contains multiple CPUs that share such system resources as memory and disk storage. A parallel‐processor computer is expanded not by adding another computer, but by plugging another CPU into the computer. This technology offers expandability, compact size, high performance, high reliability, and moderate cost. The Sequent Balance Parallel‐Processor Computer is described in some detail. A fully configured Balance 21000 can execute 21 MIPS (million instructions per second). It implements the UNIX operating system, which has been widely adopted. As a result, many software packages for word processing and other applications are available from third‐party vendors. Performance tests conducted by CLSI, Inc. indicate that twenty concurrent users on a parallel‐processor system can perform CPU‐intensive functions up to seven times faster than on a single‐processor system.

Details

Library Hi Tech, vol. 5 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 March 1980

J.B. Whitehead

Abstract

Many years ago, when I was a young student, I remember listening to an eminent librarian of the day relating the story of a visit he had received from a work study team. After studiously checking all the library systems and processes they very seriously suggested that the librarian would save time and money if he stopped the typing of catalogue cards and hand wrote each one. That team, needless to say, was never invited back.

Details

Aslib Proceedings, vol. 32 no. 3
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 1 March 1985

Ray Denenberg, Bob Rader, Thomas P. Brown, Wayne Davison and Fred Lauber

Abstract

The Linked Systems project (LSP) is directed towards implementing computer‐to‐computer communications among its participants. The original three participants are the Library of Congress (LC), the Research Libraries Group (RLG), and the Western Library Network (WLN, formerly the Washington Library Network). The project now has a fourth participant, the Online Computer Library Center (OCLC). LSP consists of two major components. The first component, Authorities Implementation, is described in Library Hi Tech issue 10 (page 61). The second component, the Standard Network Interconnection (SNI), is the specification of the LSP protocols, and the implementation of these protocols on the participant systems. Protocol specification was a joint effort of the original three participants (LC, RLG, and WLN) and was described in Library Hi Tech issue 10 (page 71). Implementation, however, has consisted of individual efforts of the (now) four participants. This four‐part report focuses on these individual implementation efforts.

Details

Library Hi Tech, vol. 3 no. 3
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 April 1994

O. Klaas, M. Kreienmeyer and E. Stein

Abstract

This paper presents the development of a parallel finite element algorithm for a MIMD parallel computer. The elements are distributed onto the processors in such a way that neighbouring elements are placed onto neighbouring processors. This guarantees good load‐balancing even in physically non‐linear computations. The distribution of the columns of the global system of equations is done by an election algorithm. To solve the global system of equations, we use a parallel preconditioned conjugate gradient solver. Tests were done with an elastoplastic material model to prove the efficiency of assembling and solving the system of equations.
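
For readers unfamiliar with the solver named above, here is a serial Python sketch of the preconditioned conjugate gradient (PCG) iteration that such codes parallelise (in the paper, the matrix–vector products are assembled element by element across processors). The Jacobi (diagonal) preconditioner used here is purely illustrative, not necessarily the paper's choice.

```python
# Serial PCG with a Jacobi (diagonal) preconditioner, for a symmetric
# positive-definite system A x = b stored as dense lists for clarity.
def pcg(A, b, tol=1e-10, maxit=1000):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    Minv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner M^-1
    x = [0.0] * n
    r = b[:]                                   # r = b - A*0
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for it in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, it + 1
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x, maxit

# Small SPD example system.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, iters = pcg(A, b)
```

In the parallel setting, the two global reductions per iteration (the dot products) are exactly where interprocessor communication occurs, which is why distributing both the elements and the columns of the system matters for efficiency.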

Details

Engineering Computations, vol. 11 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 July 2004

Ingrid Hepner, Anne Wilcock and May Aung

Abstract

The objective of this study was to explore the use of auditing as a tool for continual improvement in the meat industry of Southwestern Ontario, Canada. Participants in the study represented the supply chain and included federal slaughterhouses, federal processors of ready‐to‐eat meat products, government agencies involved in auditing and inspection, and the retail sector involved in the auditing of meat facilities. Using in‐depth interviews, the extent of auditing and its influence on the continual improvement process were explored. Auditing activities were conducted as required for government recognition, retailer approval, and the facility's maintenance of its Hazard Analysis Critical Control Point (HACCP) programme. Correction of deviations identified during audits led to continual improvement activities. However, only two of the participants described secondary quality management schemes that linked auditing with continual improvement.

Details

British Food Journal, vol. 106 no. 7
Type: Research Article
ISSN: 0007-070X

Article
Publication date: 30 April 2020

Hongbin Liu, Hu Ren, Hanfeng Gu, Fei Gao and Guangwen Yang

Abstract

Purpose

The purpose of this paper is to provide an automatic parallelization toolkit for unstructured mesh-based computation. Among all mesh types, unstructured meshes are dominant in engineering simulation scenarios and play an essential role in scientific computation owing to their geometrical flexibility. However, high-fidelity applications based on unstructured grids remain time-consuming, both to program and to run.

Design/methodology/approach

This study develops an efficient UNstructured Acceleration Toolkit (UNAT), which provides friendly high-level programming interfaces and elaborates the lower-level implementation on the target hardware to achieve nearly hand-optimized performance. At present, two efficient strategies, a multi-level blocks method and a row-subsections method, are designed and implemented on the Sunway architecture. The random memory access and write–write conflict issues of unstructured meshes are handled by partitioning, coloring and other hardware-specific techniques. Moreover, a data-reuse mechanism is developed to increase the computational intensity and alleviate the memory bandwidth bottleneck.
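
The coloring technique mentioned above can be sketched briefly: two mesh edges that update the same cell must not run concurrently, so edges are greedily coloured such that no colour contains two edges sharing a cell; each colour group can then be processed in parallel without atomics. This is an illustrative sketch of the general idea, not UNAT's implementation, and all names are hypothetical.

```python
# Greedy edge coloring for conflict-free parallel per-cell accumulation.
def color_edges(edges, ncells):
    """edges: list of (owner_cell, neighbour_cell) index pairs."""
    colors = []
    cell_colors = [set() for _ in range(ncells)]  # colours already touching cell
    for a, b in edges:
        used = cell_colors[a] | cell_colors[b]
        c = 0
        while c in used:                          # smallest free colour
            c += 1
        colors.append(c)
        cell_colors[a].add(c)
        cell_colors[b].add(c)
    return colors

# Tiny example mesh: 4 cells, 5 internal faces/edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cols = color_edges(edges, 4)
# Within any one colour, no cell index appears twice, so per-cell updates
# (e.g. accumulating face fluxes) for that colour can run concurrently.
```

Hardware-specific variants (e.g. for Sunway's scratchpad memories) add partitioning on top of this, but the write–write conflict-freedom invariant is the same.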

Findings

The authors select sparse matrix-vector multiplication as a performance benchmark of UNAT across different data layouts and different matrix formats. Experimental results show that the speed-ups reach up to 26× compared to a single management processing element, and the utilization-ratio tests indicate the capability of achieving nearly hand-optimized performance. Finally, the authors adopt UNAT to accelerate a well-tuned unstructured solver and obtain speed-ups of 19× and 10× on average for the main kernels and the overall solver, respectively.
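
For reference, the benchmark kernel named above, sparse matrix–vector multiplication (SpMV), has this shape in the common CSR (compressed sparse row) layout; the example below is a generic textbook version, not UNAT's tuned code.

```python
# CSR SpMV: y[i] = sum over the nonzeros k in row i of values[k] * x[col_idx[k]].
def spmv_csr(values, col_idx, row_ptr, x):
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The irregular, x-indexed reads in the inner loop are precisely the random-access pattern that makes unstructured-mesh kernels hard to optimise by hand, which is what motivates a toolkit like UNAT.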

Originality/value

The authors design an unstructured mesh toolkit, UNAT, to bridge the hardware and the numerical algorithms, so that engineers can focus on the algorithms and solvers rather than on the parallel implementation. For the many-core SW26010 processor of the fastest supercomputer in China, UNAT yields up to 26× speed-ups and achieves nearly hand-optimized performance.

Details

Engineering Computations, vol. 37 no. 9
Type: Research Article
ISSN: 0264-4401
