Search results

1 – 10 of over 85,000
Article
Publication date: 1 June 1976

B.M. Doouss and G.L. Collins

This monograph defines distributed intelligence and discusses the relationship of distributed intelligence to data base, justifications for using the technique, and the approach…


Abstract

This monograph defines distributed intelligence and discusses the relationship of distributed intelligence to data base, justifications for using the technique, and the approach to successful implementation of the technique. The approach is then illustrated by reference to a case study of experience in Birds Eye Foods. The planning process by which computing strategy for the company was decided is described, and the planning conclusions reached to date are given. The current state of development in the company is outlined and the very real savings so far achieved are specified. Finally, the main conclusions of the monograph are brought together. In essence these conclusions are that major savings are achievable using distributed intelligence, and that the implementation of a company data processing plan can be made quicker and simpler by its use. However, careful central control must be maintained to avoid fragmentation of machines, language skills, and applications.

Details

Management Decision, vol. 14 no. 6
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 15 May 2019

Usha Manasi Mohapatra, Babita Majhi and Alok Kumar Jagadev

The purpose of this paper is to propose three different distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms…

Abstract

Purpose

The purpose of this paper is to propose three different distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms are evaluated in this study on problems for which input data are available at different geographic locations. In addition, the models are tested for nonlinear systems with different noise conditions. In a nutshell, the suggested model aims to handle voluminous data with low communication overhead compared to traditional centralized processing methodologies.

Design/methodology/approach

Population-based evolutionary algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO) and cat swarm optimization (CSO) are implemented in a distributed form to address the system identification problem in which the input data are distributed. Of the different distributed approaches mentioned in the literature, the study considers the incremental and diffusion strategies.
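
To make the incremental strategy concrete, the sketch below (not taken from the paper; the linear plant model, node data and PSO constants are illustrative assumptions) passes a single particle swarm around a ring of data-holding nodes, each of which updates the swarm using only its own local measurements.

    # Hypothetical sketch: incremental distributed PSO for identifying the
    # parameters w of a linear-in-parameters plant y = X @ w + noise, where each
    # "node" only holds its local slice of the input/output data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated distributed data: 4 nodes, each with its own local measurements.
    true_w = np.array([0.3, -0.8, 0.5])
    nodes = []
    for _ in range(4):
        X = rng.normal(size=(50, 3))
        y = X @ true_w + 0.05 * rng.normal(size=50)   # local noisy observations
        nodes.append((X, y))

    def local_mse(w, X, y):
        """Fitness of a candidate parameter vector on one node's local data."""
        e = y - X @ w
        return float(np.mean(e ** 2))

    # One swarm is handed around the ring of nodes (incremental strategy): each
    # node updates the particles using only its own data, then passes the swarm on.
    n_particles, dim = 20, 3
    pos = rng.uniform(-1, 1, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_val = pos[0].copy(), np.inf

    for sweep in range(30):                 # passes around the ring
        for X, y in nodes:                  # visit nodes in ring order
            vals = np.array([local_mse(p, X, y) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            if pbest_val.min() < gbest_val:
                gbest_val = pbest_val.min()
                gbest = pbest[pbest_val.argmin()].copy()
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel

    print("estimated parameters:", np.round(gbest, 3))   # should be close to true_w

A diffusion variant would instead let every node keep its own swarm and periodically combine estimates with its neighbours rather than passing one swarm around the ring.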

Findings

The performance of the proposed distributed learning-based algorithms is compared under different noise conditions. The experimental results indicate that CSO performs better than GA and PSO at all noise strengths with respect to accuracy and error convergence rate, and that incremental CSO is slightly superior to diffusion CSO.

Originality/value

This paper employs evolutionary algorithms using distributed learning strategies and applies these algorithms to the identification of unknown systems. Very few existing studies apply these distributed learning strategies to the parameter estimation task.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 21 December 2021

Laouni Djafri

This work can be used as a building block in other settings such as GPU, Map-Reduce or Spark. Also, DDPML can be deployed on other distributed systems such as P2P…


Abstract

Purpose

This work can be used as a building block in other settings such as GPU, Map-Reduce or Spark. DDPML can also be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.

Design/methodology/approach

In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can be used for prediction later. Thus, this knowledge becomes a great asset in companies' hands. This is precisely the objective of data mining. But with data and knowledge being produced at an ever faster pace, the focus has shifted to Big Data mining. For this reason, the authors' proposed work mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is therefore how to make machine learning algorithms work in a distributed and parallel way at the same time without losing classification accuracy. To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). To build it, the authors divided the work into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm, which in turn depends on a random sampling technique. The distributed architecture is specially designed to handle big data processing and operates in a coherent and efficient manner with the sampling strategy proposed in this work. This architecture also helps the authors verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2). The experimental results show the efficiency of the proposed solution without significant loss of classification accuracy. Thus, in practical terms, the DDPML system is generally dedicated to big data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
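
A minimal sketch of the two-level stratified sampling idea follows (it is not the DDPML implementation; the partitions, class labels and sampling fractions are invented for illustration): each partition is first reduced with class-preserving random sampling, and the collected samples are then stratified again into a small representative learning base.

    # Illustrative sketch only: two-level stratified random sampling in the spirit
    # described in the abstract. Level 1 samples within each data partition; level 2
    # preserves the class proportions (the strata) when reducing the partition
    # samples into a small representative learning base (RLB).
    import random
    from collections import defaultdict

    random.seed(42)

    def stratified_sample(records, label_of, fraction):
        """Keep roughly `fraction` of the records from each class (stratum)."""
        strata = defaultdict(list)
        for r in records:
            strata[label_of(r)].append(r)
        sample = []
        for label, rows in strata.items():
            k = max(1, round(fraction * len(rows)))
            sample.extend(random.sample(rows, k))
        return sample

    # Fake distributed partitions: (features, class_label) pairs held on 3 workers.
    partitions = [
        [((random.random(), random.random()), random.choice("AB")) for _ in range(1000)]
        for _ in range(3)
    ]
    label = lambda rec: rec[1]

    # Level 1: each worker reduces its own partition, preserving class balance
    # (the "map" side of a Map-Reduce style job).
    level1 = [stratified_sample(p, label, fraction=0.10) for p in partitions]

    # Level 2: the collected level-1 samples are stratified again into the final
    # representative learning base used to train the distributed classifiers.
    rlb = stratified_sample([r for part in level1 for r in part], label, fraction=0.20)

    print("original records:", sum(len(p) for p in partitions), "-> RLB size:", len(rlb))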

Findings

The authors got very satisfactory classification results.

Originality/value

DDPML system is specially designed to smoothly handle big data mining classification.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 1 December 1999

Michael J. Frasciello and John Richardson

Library consortia require automation systems that adequately address the following questions: Can the system support centralized and decentralized server configurations? Does the…


Abstract

Library consortia require automation systems that adequately address the following questions: Can the system support centralized and decentralized server configurations? Does the software’s architecture accommodate changing requirements? Does the system provide seamless behavior? Contends that the evolution of distributed enterprise computing technology has brought the library automation industry to a new realization that automation systems engineered with an n‐tiered client/server architecture will best meet the needs of library consortia. Standards‐based distributed processing is the key to the n‐tier client/server paradigm. While some technologies (i.e. UNIX) provide for a single standard on which to define distributed processing, only Microsoft’s Windows NT supports multiple standards. From Microsoft’s perspective, the Windows NT operating system is the middle tier of the n‐tier client/server environment. To truly exploit the middle tier, an application must utilize Microsoft Transaction Server (MTS). Native Windows NT automation systems utilizing MTS are best positioned for the future because MTS assumes an n‐tier architecture with the middle tier (or tiers) deployed on Windows NT Server. “Native” NT applications are built in and for Microsoft Windows NT. Library consortia considering a native Windows NT automation system should evaluate the system’s distributed processing capabilities to determine its applicability to their needs. Library consortia can test a vendor’s claim to scalable distributed processing by asking three questions: Is the software dependent on the type of data being used? Does the software support logical and physical separation (distribution)? Does the software require a system shutdown to perform database or application updates?

Details

Library Consortium Management: An International Journal, vol. 1 no. 3/4
Type: Research Article
ISSN: 1466-2760

Keywords

Article
Publication date: 11 September 2007

Ruey‐Kei Chiu, S.C. Lenny Koh and Chi‐Ming Chang

The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an…

Abstract

Purpose

The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.

Design/methodology/approach

It is based on a case study of enterprise distributed databases aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication aggregated the distributed databases to the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme but update transactions were only allowed on the distributed databases.

Findings

It was found that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardization of message exchange between the distributed and central databases.

Research limitations/implications

The transaction records are stored and transferred in standardized XML format. Record transformation and interpretation are more time‐consuming, but the format offers higher transportability and compatibility across different platforms in data refreshment with equal performance. The distributed database designer should manage these issues as well as assure quality.
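
As a rough illustration of this exchange (the element names, the queue and the in-memory central store below are invented, not taken from the NIIS implementation), a local update transaction can be serialized as a standardized XML record and later replayed against the central database by a lazy refresh pass:

    # Hedged sketch: update transactions committed on a distributed site are
    # serialized as standardized XML records, queued, and applied to the central
    # database by a lazy refresh pass. The schema and data here are illustrative.
    import xml.etree.ElementTree as ET

    def transaction_to_xml(site_id, seq, table, key, values):
        """Serialize one local update transaction as a standardized XML message."""
        tx = ET.Element("transaction", site=site_id, seq=str(seq), table=table, key=key)
        for column, value in values.items():
            ET.SubElement(tx, "column", name=column).text = str(value)
        return ET.tostring(tx, encoding="unicode")

    def refresh(central_db, xml_messages):
        """Lazy refresh: replay queued XML transaction records against the central copy."""
        for msg in sorted(xml_messages, key=lambda m: int(ET.fromstring(m).get("seq"))):
            tx = ET.fromstring(msg)
            row = central_db.setdefault((tx.get("table"), tx.get("key")), {})
            for col in tx.findall("column"):
                row[col.get("name")] = col.text

    # Example: two immunization-record updates committed at a hypothetical site.
    queue = [
        transaction_to_xml("clinic-07", 1, "immunization", "P1001", {"vaccine": "MMR", "dose": 1}),
        transaction_to_xml("clinic-07", 2, "immunization", "P1001", {"dose": 2}),
    ]
    central = {}
    refresh(central, queue)
    print(central)   # central copy now holds the latest values for record P1001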

Originality/value

The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML message for transaction exchange.

Details

Journal of Manufacturing Technology Management, vol. 18 no. 7
Type: Research Article
ISSN: 1741-038X

Keywords

Article
Publication date: 1 March 1994

William J. Caelli

Distributed computing systems impose new requirements on the security of the operating systems and hardware structures of the computers participating in a distributed data network…


Abstract

Distributed computing systems impose new requirements on the security of the operating systems and hardware structures of the computers participating in a distributed data network environment. It is proposed that multiple level (greater than two) security hardware, with associated full support for that hardware at the operating system level, is required to meet the needs of this emerging environment. The normal two‐layer (supervisor/user) structure is probably insufficient to enforce and protect security functions consistently and reliably in a distributed environment. Such two‐layer designs are seen as part of earlier single computer/processor system structures, while a minimum three/four‐layer security architecture appears necessary to meet the needs of the distributed computing environment. Such multi‐level hardware security architecture requirements are derived from earlier work in the area, particularly the Multics project of the mid‐1960s, as well as the design criteria for the DEC VAX 11/780 and the Intel iAPX‐286 processor and its successors, as two later examples of machine structures. The security functions of individual nodes participating in a distributed computing environment, and their associated evaluation level, appear critical to the development of overall security architectures for the protection of distributed computing systems.
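
The distinction being argued for can be shown with a toy ring-style privilege check (purely illustrative; the objects and ring assignments are invented and this is not from the article): with four rings, a simple numeric comparison separates a ring-0 security kernel from ring-1 drivers, whereas a two-level supervisor/user split cannot.

    # Toy model of ring-based (multi-level) protection: lower ring number = more
    # privileged. Access is allowed only if the caller's ring is numerically no
    # greater than the ring required by the object.
    def can_access(caller_ring, required_ring):
        return caller_ring <= required_ring

    # Four-ring machine (Multics / iAPX-286 style): a ring-1 device driver cannot
    # touch ring-0 security-kernel data, even though both run "privileged" code.
    print(can_access(caller_ring=1, required_ring=0))   # False
    print(can_access(caller_ring=1, required_ring=1))   # True

    # Two-level machine (supervisor/user only): the driver and the security kernel
    # both sit in the supervisor level, so the same check can no longer separate them.
    SUPERVISOR, USER = 0, 1
    print(can_access(caller_ring=SUPERVISOR, required_ring=SUPERVISOR))  # True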

Details

Information Management & Computer Security, vol. 2 no. 1
Type: Research Article
ISSN: 0968-5227

Keywords

Article
Publication date: 1 July 2006

Nkgatho Sylvester Tlale

In this paper, two omni‐directional mobile vehicles are designed and controlled implementing distributed mechatronics controllers. Omni‐directionality is the ability of mobile…


Abstract

Purpose

In this paper, two omni‐directional mobile vehicles are designed and controlled implementing distributed mechatronics controllers. Omni‐directionality is the ability of a mobile vehicle to move instantaneously in any direction. It is achieved by implementing Mecanum wheels in one vehicle and conventional wheels in another vehicle. The control requirements for omni‐directionality using the two above‐mentioned methods are that each wheel must be independently driven, and that all four wheels must be synchronized in order to achieve the desired motion of each vehicle.
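
To show why the four wheel drives must be coordinated, here is a small sketch of the standard inverse kinematics of a four-Mecanum-wheel layout (the geometry values are made up and this is not the authors' controller code): one chassis command (vx, vy, omega) maps to four different wheel speeds.

    # Illustrative sketch of standard Mecanum inverse kinematics; geometry invented.
    def mecanum_wheel_speeds(vx, vy, omega, r=0.05, lx=0.20, ly=0.15):
        """Wheel angular speeds [rad/s] for a standard 4-Mecanum-wheel layout.

        vx, vy : desired chassis velocity [m/s]; omega : yaw rate [rad/s]
        r      : wheel radius [m]; lx, ly : half wheelbase / half track width [m]
        Returns speeds for (front-left, front-right, rear-left, rear-right).
        """
        k = lx + ly
        fl = (vx - vy - k * omega) / r
        fr = (vx + vy + k * omega) / r
        rl = (vx + vy - k * omega) / r
        rr = (vx - vy + k * omega) / r
        return fl, fr, rl, rr

    # Pure sideways translation: wheels on each side spin in opposite directions.
    print(mecanum_wheel_speeds(vx=0.0, vy=0.3, omega=0.0))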

Design/methodology/approach

Distributed mechatronics controllers implementing Controller Area Network (CAN) modules are used to satisfy the control requirements of the vehicles. In distributed control architectures, failures in one part of the control system can be compensated by other parts of the system. A three‐layered control architecture is implemented for time‐critical tasks, event‐based tasks and task planning. Global variables and broadcast communication are used on the CAN bus. Messages are accepted in individual distributed controller modules by subscription.
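
A minimal, plain-Python stand-in for this broadcast-with-subscription pattern follows (the message identifiers, module names and payload are invented; a real implementation would sit on actual CAN hardware): every module sees every broadcast frame and acts only on the identifiers it has subscribed to.

    # Sketch of broadcast communication with acceptance by subscription.
    class Bus:
        def __init__(self):
            self.subscribers = []                      # (accepted_ids, handler) pairs

        def subscribe(self, accepted_ids, handler):
            self.subscribers.append((set(accepted_ids), handler))

        def broadcast(self, msg_id, data):
            # Every module sees every message; each accepts only its subscribed IDs.
            for accepted, handler in self.subscribers:
                if msg_id in accepted:
                    handler(msg_id, data)

    bus = Bus()

    # Wheel controller modules (time-critical layer) subscribe to the setpoint message.
    for wheel in ("FL", "FR", "RL", "RR"):
        bus.subscribe({0x100}, lambda mid, d, w=wheel: print(f"wheel {w}: speed {d[w]:.1f} rad/s"))

    # Task-planning layer broadcasts one synchronized setpoint for all four wheels.
    bus.broadcast(0x100, {"FL": 6.0, "FR": -6.0, "RL": 6.0, "RR": -6.0})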

Findings

Increasing the number of distributed modules increases the number of CAN bus messages required for smooth operation of the vehicles. This requires the development of a higher layer to manage the messages on the CAN bus.

Research limitations/implications

The limitation of the research is that analysis of the distributed controllers that were developed is complex, and that there is no universally accepted tool for conducting the analysis. The other limitation is that the mathematical models of the mobile robot that have been developed need to be verified.

Practical implications

In the design of omni‐directional vehicles, the reliability of the vehicle can be improved by modular design of the mechanical and electronic systems of the wheel modules and the sensor modules.

Originality/value

The paper tries to show the advantages of distributed controllers for omni‐directional vehicles. To the author's knowledge, this is a new concept.

Details

Industrial Robot: An International Journal, vol. 33 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 June 2003

Jaroslav Mackerle

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics…


Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 March 2006

Yanjun Zuo and Brajendra Panda

Damage assessment and recovery play key roles in the process of secure and reliable computer systems development. Post‐attack assessment in a distributed database system is rather…

Abstract

Purpose

Damage assessment and recovery play key roles in the process of secure and reliable computer systems development. Post‐attack assessment in a distributed database system is rather complicated due to the indirect dependencies among sub‐transactions executed at different sites. Hence, the damage assessment procedure in these systems must be carried out in a collaborative way among all the participating sites in order to accurately detect all affected data items. This paper seeks to propose two approaches for achieving this, namely, centralized and peer‐to‐peer damage assessment models.

Design/methodology/approach

Each of the two proposed methods should be applied immediately after an intrusion on a distributed database system is reported. Within the centralized model, three sub‐models are further discussed, each of which is best suited to a certain type of situation in a distributed database system.
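
As a purely hypothetical sketch of the dependency-tracing idea behind such assessment (the log format and transactions are invented; this is not the authors' algorithm), a central assessor can scan the merged, commit-ordered log and mark as damaged every item written by a transaction that read an already-damaged item, across site boundaries:

    # Each sub-transaction is recorded with the site it ran at, the items it read
    # and the items it wrote; a transaction that read a damaged item spreads the
    # damage to everything it wrote.
    log = [  # in commit order, collected from all sites by the central assessor
        {"tx": "T1@siteA", "reads": set(),       "writes": {"x"}},   # malicious
        {"tx": "T2@siteB", "reads": {"x"},       "writes": {"y"}},
        {"tx": "T3@siteC", "reads": {"z"},       "writes": {"w"}},
        {"tx": "T4@siteA", "reads": {"y", "w"},  "writes": {"v"}},
    ]

    def assess(log, malicious_tx):
        damaged, affected_txs = set(), set()
        for entry in log:
            if entry["tx"] == malicious_tx or entry["reads"] & damaged:
                affected_txs.add(entry["tx"])
                damaged |= entry["writes"]       # damage propagates through writes
        return damaged, affected_txs

    # Damaged items: x, y, v; affected transactions: T1, T2 and T4 (T3 is clean).
    print(assess(log, "T1@siteA"))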

Findings

Advantages and disadvantages of the models are analyzed on a comparative basis and the most suitable situations to which each model should apply are presented. A set of algorithms is developed to formally describe the damage assessment procedure for each model (sub‐model). Synchronization is essential in any system where multiple processes run concurrently. User‐level synchronization mechanisms have been presented to ensure that the damage assessment operations are conducted in a correct order.

Originality/value

The paper proposes two means for damage assessment.

Details

Information Management & Computer Security, vol. 14 no. 2
Type: Research Article
ISSN: 0968-5227

Keywords

Article
Publication date: 1 June 2010

Korina Katsaliaki and Navonil Mustafee

The purpose of this paper is to investigate the viability of using distributed simulation to execute large and complex healthcare simulation models which help government take…


Abstract

Purpose

The purpose of this paper is to investigate the viability of using distributed simulation to execute large and complex healthcare simulation models which help governments take informed decisions.

Design/methodology/approach

The paper compares the execution time of a standalone healthcare supply chain simulation with its distributed counterpart. Both the standalone and the distributed models are built using a commercial simulation package (CSP).

Findings

The results show that the execution time of the standalone healthcare supply chain simulation increases exponentially as the size and complexity of the system being modelled increases. On the other hand, using a distributed simulation approach decreases the run time for large and complex models.

Research limitations/implications

The distributed approach of executing different parts of a single simulation model over different computers is only viable when the model can be divided into logical parts whose exchange of information occurs at constant simulated time intervals, and when the model is sufficiently large and complicated that executing it over a single processor is very time consuming.
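
A toy sketch of that constraint follows (the supplier/hospital roles, quantities and interval are invented; it is not the National Blood Service model): two logical parts of one simulation run in separate processes and exchange state only at constant simulated-time synchronization points.

    # Two logical simulation parts in separate OS processes, synchronizing over a
    # pipe at a fixed simulated-time interval. All model numbers are illustrative.
    from multiprocessing import Process, Pipe

    SYNC_INTERVAL = 10          # simulated time units between exchanges
    END_TIME = 50

    def supplier(conn):
        for t in range(0, END_TIME, SYNC_INTERVAL):
            produced = 40                         # local events during the interval
            conn.send((t + SYNC_INTERVAL, produced))
        conn.send(None)                           # end-of-simulation sentinel

    def hospital(conn):
        inventory = 0
        while True:
            msg = conn.recv()                     # block until the synchronization point
            if msg is None:
                break
            t, delivery = msg
            inventory += delivery - 35            # local usage during the interval
            print(f"simulated t={t}: hospital inventory {inventory}")

    if __name__ == "__main__":
        parent, child = Pipe()
        p = Process(target=hospital, args=(child,))
        p.start()
        supplier(parent)
        p.join()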

Practical implications

Based on a feasibility study of the UK National Blood Service, we demonstrate the effectiveness of distributed simulation and argue that it is a vital technique in healthcare informatics with respect to supporting decision making in large healthcare systems.

Originality/value

To the best of the authors' knowledge, this is the first feasibility study in healthcare which shows the outcome of modelling and executing a distributed simulation using unmodified CSPs and software/middleware for distributed simulation.

Details

Transforming Government: People, Process and Policy, vol. 4 no. 2
Type: Research Article
ISSN: 1750-6166

Keywords

1 – 10 of over 85,000