Search results

1–10 of over 81,000
Article
Publication date: 1 June 1976

B.M. Doouss and G.L. Collins


Downloads
61

Abstract

This monograph defines distributed intelligence and discusses the relationship of distributed intelligence to the data base, justifications for using the technique, and the approach to successful implementation of the technique. The approach is then illustrated by reference to a case study of experience in Birds Eye Foods. The planning process by which computing strategy for the company was decided is described, and the planning conclusions reached to date are given. The current state of development in the company is outlined and the very real savings so far achieved are specified. Finally, the main conclusions of the monograph are brought together. In essence these conclusions are that major savings are achievable using distributed intelligence, and that the implementation of a company data processing plan can be made quicker and simpler by its use. However, careful central control must be maintained to avoid fragmentation of machines, language skills, and applications.

Details

Management Decision, vol. 14 no. 6
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 15 May 2019

Usha Manasi Mohapatra, Babita Majhi and Alok Kumar Jagadev


Abstract

Purpose

The purpose of this paper is to propose three distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms are evaluated on problems for which input data are available at different geographic locations. In addition, the models are tested on nonlinear systems under different noise conditions. In a nutshell, the suggested models aim to handle voluminous data with low communication overhead compared to traditional centralized processing methodologies.

Design/methodology/approach

Population-based evolutionary algorithms such as genetic algorithm (GA), particle swarm optimization (PSO) and cat swarm optimization (CSO) are implemented in a distributed form to address the system identification problem with distributed input data. Of the distributed approaches mentioned in the literature, the study considers incremental and diffusion strategies.
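The incremental strategy described above can be sketched for a simple parameter-identification task. This is an illustrative reconstruction, not the paper's actual setup: the ring of four nodes, the linear test system, and the PSO coefficients are all assumptions, and the swarm state (not raw data) is what circulates between nodes.

```python
# Sketch of incremental distributed PSO for parameter identification.
# Node data, the model, and all tuning constants are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3])          # unknown system parameters

# Each "node" holds its own slice of input/output measurements.
def make_node_data(n=50):
    x = rng.normal(size=(n, 2))
    y = x @ true_w + 0.01 * rng.normal(size=n)
    return x, y

nodes = [make_node_data() for _ in range(4)]

def local_mse(w, node):
    x, y = node
    return np.mean((x @ w - y) ** 2)

# Incremental strategy: the swarm visits the nodes in a ring, refining
# the particles against each node's local data in turn, so raw
# measurements never leave their node -- only particle states circulate.
def incremental_pso(nodes, n_particles=20, laps=30):
    pos = rng.uniform(-1, 1, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_f = pos[0].copy(), np.inf
    for _ in range(laps):
        for node in nodes:                      # ring traversal
            f = np.array([local_mse(p, node) for p in pos])
            improved = f < pbest_f
            pbest[improved] = pos[improved]
            pbest_f[improved] = f[improved]
            k = np.argmin(pbest_f)
            if pbest_f[k] < gbest_f:
                gbest_f, gbest = pbest_f[k], pbest[k].copy()
            r1, r2 = rng.random((2, n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
    return gbest

w_hat = incremental_pso(nodes)
print(w_hat)   # should land close to true_w = [0.5, -0.3]
```

A diffusion strategy would differ only in the communication step: instead of passing the swarm around a ring, each node would combine estimates from all of its neighbours at every iteration.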

Findings

Performances of the proposed distributed learning-based algorithms are compared under different noise conditions. The experimental results indicate that CSO performs better than GA and PSO at all noise strengths with respect to accuracy and error convergence rate, with incremental CSO slightly superior to diffusion CSO.

Originality/value

This paper employs evolutionary algorithms with distributed learning strategies and applies them to the identification of unknown systems. Very few existing studies have applied these distributed learning strategies to the parameter estimation task.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 2
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 1 December 1999

Michael J. Frasciello and John Richardson


Downloads
803

Abstract

Library consortia require automation systems that adequately address the following questions: Can the system support centralized and decentralized server configurations? Does the software’s architecture accommodate changing requirements? Does the system provide seamless behavior? Contends that the evolution of distributed enterprise computing technology has brought the library automation industry to a new realization that automation systems engineered with an n‐tiered client/server architecture will best meet the needs of library consortia. Standards‐based distributed processing is the key to the n‐tier client/server paradigm. While some technologies (i.e. UNIX) provide for a single standard on which to define distributed processing, only Microsoft’s Windows NT supports multiple standards. From Microsoft’s perspective, the Windows NT operating system is the middle tier of the n‐tier client/server environment. To truly exploit the middle tier, an application must utilize Microsoft Transaction Server (MTS). Native Windows NT automation systems utilizing MTS are best positioned for the future because MTS assumes an n‐tier architecture with the middle tier (or tiers) deployed on Windows NT Server. “Native” NT applications are built in and for Microsoft Windows NT. Library consortia considering a native Windows NT automation system should evaluate the system’s distributed processing capabilities to determine its applicability to their needs. Library consortia can test a vendor’s claim to scalable distributed processing by asking three questions: Is the software dependent on the type of data being used? Does the software support logical and physical separation (distribution)? Does the software require a system shutdown to perform database or application updates?

Details

Library Consortium Management: An International Journal, vol. 1 no. 3/4
Type: Research Article
ISSN: 1466-2760


Book part
Publication date: 8 November 2019

Peter Simon Sapaty

Abstract

Details

Complexity in International Security
Type: Book
ISBN: 978-1-78973-716-5

Article
Publication date: 11 September 2007

Ruey‐Kei Chiu, S.C. Lenny Koh and Chi‐Ming Chang


Abstract

Purpose

The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.

Design/methodology/approach

It is based on a case study of enterprise distributed database aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication aggregated the distributed databases into the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme, but update transactions were allowed only on the distributed databases.

Findings

It was found that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardization of message exchange between the distributed and central databases.

Research limitations/implications

The transaction records are stored and transferred in standardized XML format. This is more time-consuming in record transformation and interpretation, but it offers higher transportability and compatibility across different platforms in data refreshment with equal performance. The distributed database designer should manage these trade-offs as well as assure data quality.
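The lazy refresh scheme with standardized XML messages can be sketched as follows. The element names, site identifiers, and in-memory "databases" are hypothetical; the NIIS message schema is not specified in the abstract.

```python
# Sketch of lazy replication with XML transaction messages: updates
# commit at the distributed sites, are serialized as XML, and are later
# applied to the central copy. Element and field names are hypothetical.
import xml.etree.ElementTree as ET

def transaction_to_xml(site, record_id, fields):
    txn = ET.Element("transaction", site=site, record=str(record_id))
    for name, value in fields.items():
        ET.SubElement(txn, "field", name=name).text = str(value)
    return ET.tostring(txn, encoding="unicode")

def refresh_central(central, xml_messages):
    # Lazy scheme: messages arrive after local commit, and the central
    # copy is brought up to date in arrival order. Updates are only ever
    # initiated at the distributed sites, never at the centre.
    for msg in xml_messages:
        txn = ET.fromstring(msg)
        key = (txn.get("site"), txn.get("record"))
        central.setdefault(key, {}).update(
            {f.get("name"): f.text for f in txn.findall("field")}
        )
    return central

queue = [
    transaction_to_xml("clinic-A", 101, {"vaccine": "MMR", "dose": 1}),
    transaction_to_xml("clinic-A", 101, {"dose": 2}),
]
central = refresh_central({}, queue)
print(central[("clinic-A", "101")])   # {'vaccine': 'MMR', 'dose': '2'}
```

The XML step is what the abstract identifies as the performance cost: each record is transformed to text and parsed back, in exchange for transportability across heterogeneous platforms.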

Originality/value

The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML message for transaction exchange.

Details

Journal of Manufacturing Technology Management, vol. 18 no. 7
Type: Research Article
ISSN: 1741-038X


Abstract

Details

Integrated Land-Use and Transportation Models
Type: Book
ISBN: 978-0-080-44669-1

Article
Publication date: 1 March 1994

William J. Caelli


Downloads
2207

Abstract

Distributed computing systems impose new requirements on the security of the operating systems and hardware structures of the computers participating in a distributed data network environment. It is proposed that multiple level (greater than two) security hardware, with associated full support for that hardware at the operating system level, is required to meet the needs of this emerging environment. The normal two layer (supervisor/user) structure is probably insufficient to enforce and protect security functions consistently and reliably in a distributed environment. Such two‐layer designs are seen as part of earlier single computer/processor system structures while a minimum three/four‐layer security architecture appears necessary to meet the needs of the distributed computing environment. Such multi‐level hardware security architecture requirements are derived from earlier work in the area, particularly the Multics project of the mid‐1960s, as well as the design criteria for the DEC VAX 11/780 and Intel iAPX‐286 processor and its successors, as two later examples of machine structures. The security functions of individual nodes participating in a distributed computing environment, and their associated evaluation level, appear critical to the development of overall security architectures for the protection of distributed computing systems.
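The argument for more than two privilege levels can be illustrated with a toy ring model in the style of Multics and the iAPX-286 mentioned above. The ring assignments below are hypothetical examples, not the architecture of any specific system.

```python
# Toy model of ring-style multi-level protection. A two-ring
# supervisor/user split forces OS services and security functions into
# one privileged level; four rings let them be separated and protected
# from each other. Ring assignments here are hypothetical.

RINGS = {
    0: "security kernel",
    1: "operating system",
    2: "network services",
    3: "user programs",
}

def may_access(caller_ring, target_ring):
    # A caller may touch objects at its own ring or any less-privileged
    # (numerically higher) ring, but never a more-privileged one.
    return caller_ring <= target_ring

# With only two levels, code needing network services would require full
# supervisor privilege; with four, privilege is granted more finely.
print(may_access(3, 2))   # False: user code cannot touch ring-2 objects
print(may_access(1, 2))   # True: the OS may manage network services
```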

Details

Information Management & Computer Security, vol. 2 no. 1
Type: Research Article
ISSN: 0968-5227


Article
Publication date: 1 June 2003

Jaroslav Mackerle


Downloads
1012

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 1 July 2006

Nkgatho Sylvester Tlale


Downloads
1502

Abstract

Purpose

In this paper, two omni‐directional mobile vehicles are designed and controlled using distributed mechatronics controllers. Omni‐directionality is the ability of a mobile vehicle to move instantaneously in any direction. It is achieved by implementing Mecanum wheels in one vehicle and conventional wheels in the other. The control requirements for omni‐directionality using the two above‐mentioned methods are that each wheel must be independently driven and that all four wheels must be synchronized in order to achieve the desired motion of each vehicle.

Design/methodology/approach

Distributed mechatronics controllers implementing Controller Area Network (CAN) modules are used to satisfy the control requirements of the vehicles. In distributed control architectures, failures in one part of the control system can be compensated for by other parts of the system. A three‐layered control architecture is implemented for time‐critical tasks, event‐based tasks, and task planning. Global variables and broadcast communication are used on the CAN bus. Messages are accepted in individual distributed controller modules by subscription.
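The broadcast-with-subscription pattern described above can be sketched as a minimal model of identifier-based acceptance on a CAN-style bus. The identifier values, module names, and payloads are hypothetical, and real CAN controllers implement the filter in hardware.

```python
# Sketch of subscription-based message acceptance on a broadcast bus:
# every frame reaches every module, and each module's acceptance filter
# decides which identifiers it consumes. IDs and names are hypothetical.

class Module:
    def __init__(self, name, accepted_ids):
        self.name = name
        self.accepted = set(accepted_ids)   # the module's subscriptions
        self.inbox = []

    def receive(self, can_id, data):
        if can_id in self.accepted:         # acceptance filtering
            self.inbox.append((can_id, data))

class Bus:
    def __init__(self):
        self.modules = []

    def broadcast(self, can_id, data):
        # No point-to-point addressing: every module sees every frame.
        for m in self.modules:
            m.receive(can_id, data)

bus = Bus()
wheel_fl = Module("front-left wheel", accepted_ids={0x100, 0x1F0})
planner  = Module("task planner",     accepted_ids={0x1F0})
bus.modules += [wheel_fl, planner]

bus.broadcast(0x100, b"\x10\x00")   # wheel speed setpoint
bus.broadcast(0x1F0, b"\x01")       # synchronization tick
print(len(wheel_fl.inbox), len(planner.inbox))   # 2 1
```

The finding below follows directly from this pattern: every added module multiplies the frames each node must filter, which is why the paper calls for a higher layer to manage bus traffic.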

Findings

Increasing the number of distributed modules increases the number of CAN bus messages required for smooth operation of the vehicles. This requires the development of a higher layer to manage the messages on the CAN bus.

Research limitations/implications

One limitation of the research is that analysis of the distributed controllers that were developed is complex, and no universally accepted tool exists for conducting the analysis. The other limitation is that the mathematical models of the mobile robots that have been developed need to be verified.

Practical implications

In the design of omni‐directional vehicles, vehicle reliability can be improved by modular design of the mechanical and electronic systems of the wheel modules and the sensor modules.

Originality/value

The paper shows the advantages of distributed controllers for omni‐directional vehicles. To the author's knowledge, this is a new concept.

Details

Industrial Robot: An International Journal, vol. 33 no. 4
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 1 March 2006

Yanjun Zuo and Brajendra Panda


Abstract

Purpose

Damage assessment and recovery play key roles in the development of secure and reliable computer systems. Post‐attack assessment in a distributed database system is rather complicated due to the indirect dependencies among sub‐transactions executed at different sites. Hence, the damage assessment procedure in these systems must be carried out in a collaborative way among all the participating sites in order to accurately detect all affected data items. This paper seeks to propose two approaches for achieving this, namely, centralized and peer‐to‐peer damage assessment models.

Design/methodology/approach

Each of the two proposed methods should be applied immediately after an intrusion on a distributed database system is reported. Within the centralized model, three sub‐models are further discussed, each best suited to a certain type of situation in a distributed database system.

Findings

Advantages and disadvantages of the models are analyzed on a comparative basis and the most suitable situations to which each model should apply are presented. A set of algorithms is developed to formally describe the damage assessment procedure for each model (sub‐model). Synchronization is essential in any system where multiple processes run concurrently. User‐level synchronization mechanisms have been presented to ensure that the damage assessment operations are conducted in a correct order.
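The core of any such damage assessment is propagating "damaged" status through read-from dependencies, including the indirect ones created by sub-transactions at other sites. The sketch below illustrates that idea in the centralized setting, where a coordinator scans the merged site logs; the log contents and the flat log format are illustrative assumptions, not the paper's algorithms.

```python
# Sketch of centralized damage assessment: starting from the reported
# malicious transaction, mark every transaction that read a damaged
# item, and every item such a transaction wrote, as affected. The log
# entries below (from two hypothetical sites, merged in serialization
# order) are invented for illustration.

# Each entry: (transaction id, items read, items written)
log = [
    ("T1", set(),  {"x"}),   # T1 is the reported malicious transaction
    ("T2", {"x"},  {"y"}),   # site A: read damaged x, so y is damaged
    ("T3", {"y"},  {"z"}),   # site B: indirect dependency via y
    ("T4", {"u"},  {"v"}),   # unaffected
]

def assess(log, malicious):
    damaged_items = set()
    affected_txns = set(malicious)
    for txn, reads, writes in log:      # scan in serialization order
        if txn in affected_txns or reads & damaged_items:
            affected_txns.add(txn)
            damaged_items |= writes     # its writes are now suspect
    return affected_txns, damaged_items

txns, items = assess(log, {"T1"})
print(sorted(txns), sorted(items))   # ['T1', 'T2', 'T3'] ['x', 'y', 'z']
```

In the peer-to-peer variant, no single coordinator holds the merged log: each site would run the same propagation over its local log and exchange the growing damaged-item set with its peers, which is where the synchronization mechanisms mentioned above become essential.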

Originality/value

The paper proposes two approaches to damage assessment in distributed database systems.

Details

Information Management & Computer Security, vol. 14 no. 2
Type: Research Article
ISSN: 0968-5227

