Search results
1 – 10 of over 103,000
Abstract
This monograph defines distributed intelligence and discusses its relationship to the data base, the justifications for using the technique, and the approach to implementing it successfully. The approach is then illustrated by reference to a case study of experience in Birds Eye Foods. The planning process by which the company's computing strategy was decided is described, and the planning conclusions reached to date are given. The current state of development in the company is outlined and the very real savings achieved so far are specified. Finally, the main conclusions of the monograph are brought together. In essence, these conclusions are that major savings are achievable using distributed intelligence, and that the implementation of a company data processing plan can be made quicker and simpler by its use. However, careful central control must be maintained to avoid fragmentation of machines, language skills, and applications.
Usha Manasi Mohapatra, Babita Majhi and Alok Kumar Jagadev
Abstract
Purpose
The purpose of this paper is to propose three distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms are tested in this study on problems for which input data are available at different geographic locations. In addition, the models are tested on nonlinear systems with different noise conditions. In a nutshell, the suggested model aims to handle voluminous data with low communication overhead compared to traditional centralized processing methodologies.
Design/methodology/approach
Population-based evolutionary algorithms such as genetic algorithm (GA), particle swarm optimization (PSO) and cat swarm optimization (CSO) are implemented in a distributed form to address the system identification problem having distributed input data. Out of different distributed approaches mentioned in the literature, the study has considered incremental and diffusion strategies.
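The incremental strategy mentioned above can be illustrated with a deliberately simplified sketch: instead of the paper's GA/PSO/CSO, a plain least-mean-squares update circulates a single parameter estimate around a ring of nodes, each holding local input data. All function names, the ring order, and the step size are illustrative assumptions, not the authors' algorithm.

```python
import random

def make_node_data(w_true, n_samples, seed):
    """Generate local (input, output) pairs for one node of a linear system."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        x = [rng.uniform(-1, 1) for _ in w_true]
        d = sum(wi * xi for wi, xi in zip(w_true, x))  # noiseless for clarity
        data.append((x, d))
    return data

def incremental_identify(nodes, n_params, epochs=200, mu=0.1):
    """Incremental strategy: a single estimate circulates around the ring of
    nodes; each node refines it with its own local data (LMS update here)."""
    w = [0.0] * n_params
    for _ in range(epochs):
        for node_data in nodes:          # visit nodes in a fixed ring order
            for x, d in node_data:
                err = d - sum(wi * xi for wi, xi in zip(w, x))
                w = [wi + mu * err * xi for wi, xi in zip(w, x)]
    return w

w_true = [0.5, -0.3, 0.8]
nodes = [make_node_data(w_true, 20, seed) for seed in range(4)]  # 4 sites
w_hat = incremental_identify(nodes, len(w_true))
print([round(wi, 3) for wi in w_hat])
```

Only the current estimate travels between nodes, never the raw data, which is the source of the low communication overhead claimed for these schemes.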
Findings
Performances of the proposed distributed learning-based algorithms are compared for different noise conditions. The experimental results indicate that CSO performs better than GA and PSO at all noise strengths with respect to accuracy and error convergence rate, while incremental CSO is slightly superior to diffusion CSO.
Originality/value
This paper employs evolutionary algorithms using distributed learning strategies and applies them to the identification of unknown systems. Very few existing studies have experimented with these distributed learning strategies for the parameter estimation task.
Abstract
Purpose
This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other. Also, DDPML can be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.
Design/methodology/approach
In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can later be used for prediction. This knowledge thus becomes a great asset in companies' hands, and extracting it is precisely the objective of data mining. With data and knowledge now produced in large volumes at a faster pace, the field has moved to Big Data mining. The proposed work therefore aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is how to make machine learning algorithms work in a distributed and parallel way at the same time without losing the accuracy of the classification results. To solve it, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm which in turn depends on a random sampling technique; this architecture is designed to handle big data processing coherently and efficiently with the sampling strategy proposed in this work, and also lets the authors verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first and second levels (PLBL1 and PLBL2).
The experimental results show the efficiency of the proposed solution, with no significant loss in classification results. Thus, in practical terms, the DDPML system is generally dedicated to big data mining processing, and works effectively in distributed systems with a simple structure, such as client-server networks.
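The two-level stratified sampling step described above can be sketched as follows. This is a generic illustration of stratified random sampling, not the DDPML implementation; the data set, class labels, and sampling fractions are invented for the example.

```python
import random
from collections import defaultdict

def stratified_sample(records, key, fraction, rng):
    """Draw a stratified random sample: the same fraction from each stratum,
    so class proportions in the sample mirror those of the full data set."""
    strata = defaultdict(list)
    for rec in records:
        strata[key(rec)].append(rec)
    sample = []
    for stratum in strata.values():
        k = max(1, round(fraction * len(stratum)))
        sample.extend(rng.sample(stratum, k))
    return sample

rng = random.Random(0)
# Hypothetical data set: (feature, class_label) pairs with 90/10 imbalance.
data = [(i, "a") for i in range(900)] + [(i, "b") for i in range(100)]

# Level 1: a partition draws its partial learning base (a PLBL1-style step).
plb = stratified_sample(data, key=lambda r: r[1], fraction=0.2, rng=rng)
# Level 2: a representative learning base is drawn from the level-1 output.
rlb = stratified_sample(plb, key=lambda r: r[1], fraction=0.5, rng=rng)

counts = {label: sum(1 for _, l in rlb if l == label) for label in ("a", "b")}
print(counts)
```

Because each stratum is sampled at the same fraction, the 90/10 class ratio of the full data set survives both sampling levels, which is what makes the reduced base "representative".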
Findings
The authors obtained very satisfactory classification results.
Originality/value
The DDPML system is specially designed to handle big data mining classification smoothly.
Michael J. Frasciello and John Richardson
Abstract
Library consortia require automation systems that adequately address the following questions: Can the system support centralized and decentralized server configurations? Does the software’s architecture accommodate changing requirements? Does the system provide seamless behavior? Contends that the evolution of distributed enterprise computing technology has brought the library automation industry to a new realization that automation systems engineered with an n‐tiered client/server architecture will best meet the needs of library consortia. Standards‐based distributed processing is the key to the n‐tier client/server paradigm. While some technologies (e.g. UNIX) provide for a single standard on which to define distributed processing, only Microsoft’s Windows NT supports multiple standards. From Microsoft’s perspective, the Windows NT operating system is the middle tier of the n‐tier client/server environment. To truly exploit the middle tier, an application must utilize Microsoft Transaction Server (MTS). Native Windows NT automation systems utilizing MTS are best positioned for the future because MTS assumes an n‐tier architecture with the middle tier (or tiers) deployed on Windows NT Server. “Native” NT applications are built in and for Microsoft Windows NT. Library consortia considering a native Windows NT automation system should evaluate the system’s distributed processing capabilities to determine its applicability to their needs. Library consortia can test a vendor’s claim to scalable distributed processing by asking three questions: Is the software dependent on the type of data being used? Does the software support logical and physical separation (distribution)? Does the software require a system shutdown to perform database or application updates?
Ruey‐Kei Chiu, S.C. Lenny Koh and Chi‐Ming Chang
Abstract
Purpose
The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.
Design/methodology/approach
It is based on a case study of enterprise distributed databases aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication aggregated the distributed databases to the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme but update transactions were only allowed on the distributed databases.
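The lazy replication scheme described above, with update transactions allowed only on the distributed databases, can be sketched as follows. The class names, the timestamp source, and the data values are all hypothetical; this is a toy model of the refresh idea, not the NIIS implementation.

```python
import itertools

class DistributedNode:
    """A site that accepts update transactions locally and logs them for
    later propagation (lazy replication: the central copy lags behind)."""
    _clock = itertools.count()  # stand-in for a global timestamp source

    def __init__(self, name):
        self.name = name
        self.data = {}
        self.log = []   # pending transactions not yet shipped centrally

    def update(self, key, value):
        ts = next(DistributedNode._clock)
        self.data[key] = value
        self.log.append((ts, key, value))

class CentralDatabase:
    """Aggregated central copy; it is refreshed from node logs and never
    updated directly, matching the update-at-the-edges rule of the model."""
    def __init__(self):
        self.data = {}

    def refresh(self, nodes):
        # Merge all pending logs, apply in timestamp order, then clear them.
        pending = sorted(tx for node in nodes for tx in node.log)
        for _, key, value in pending:
            self.data[key] = value
        for node in nodes:
            node.log.clear()

north, south = DistributedNode("north"), DistributedNode("south")
north.update("patient:1", "dose-1")
south.update("patient:2", "dose-1")
north.update("patient:1", "dose-2")   # later write wins at refresh time

central = CentralDatabase()
central.refresh([north, south])
print(central.data)
```

Applying the merged logs in timestamp order is what keeps the central copy consistent even though each site updates independently between refreshes.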
Findings
It was found that the approach to implement the data refreshment for the aggregation of heterogeneous distributed databases can be more effectively achieved through the design of a refresh algorithm and standardization of message exchange between distributed and central databases.
Research limitations/implications
The transaction records are stored and transferred in standardized XML format. Record transformation and interpretation are more time-consuming, but XML offers higher transportability and compatibility across different platforms in data refreshment with equal performance. The distributed database designer should manage these issues as well as assure the quality.
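A standardized XML transaction message of the kind described above might look like the following round-trip sketch. The element and attribute names are illustrative only, not the actual NIIS message schema.

```python
import xml.etree.ElementTree as ET

def transaction_to_xml(ts, table, key, value):
    """Serialize one update transaction as a standardized XML message."""
    tx = ET.Element("transaction", attrib={"timestamp": str(ts)})
    ET.SubElement(tx, "table").text = table
    ET.SubElement(tx, "key").text = key
    ET.SubElement(tx, "value").text = value
    return ET.tostring(tx, encoding="unicode")

def xml_to_transaction(xml_text):
    """Parse the message back on the central side."""
    tx = ET.fromstring(xml_text)
    return (int(tx.get("timestamp")), tx.findtext("table"),
            tx.findtext("key"), tx.findtext("value"))

msg = transaction_to_xml(42, "immunization", "patient:1", "dose-2")
print(xml_to_transaction(msg))
```

The transform/parse steps are the extra cost the abstract mentions; the payoff is that any platform with an XML parser can participate in the exchange.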
Originality/value
The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML message for transaction exchange.
Abstract
Distributed computing systems impose new requirements on the security of the operating systems and hardware structures of the computers participating in a distributed data network environment. It is proposed that multiple-level (greater than two) security hardware, with associated full support for that hardware at the operating system level, is required to meet the needs of this emerging environment. The normal two-layer (supervisor/user) structure is likely to be insufficient to enforce and protect security functions consistently and reliably in a distributed environment. Such two‐layer designs are seen as part of earlier single computer/processor system structures, while a minimum three/four‐layer security architecture appears necessary to meet the needs of the distributed computing environment. Such multi‐level hardware security architecture requirements are derived from earlier work in the area, particularly the Multics project of the mid‐1960s, as well as the design criteria for the DEC VAX 11/780 and the Intel iAPX‐286 processor and its successors, as two later examples of machine structures. The security functions of individual nodes participating in a distributed computing environment, and their associated evaluation level, appear critical to the development of overall security architectures for the protection of distributed computing systems.
Abstract
Purpose
In this paper, two omni‐directional mobile vehicles are designed and controlled using distributed mechatronics controllers. Omni‐directionality is the ability of a mobile vehicle to move instantaneously in any direction. It is achieved by implementing Mecanum wheels in one vehicle and conventional wheels in the other. The control requirements for omni‐directionality using the two above‐mentioned methods are that each wheel must be independently driven, and that all four wheels must be synchronized in order to achieve the desired motion of each vehicle.
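The per-wheel speeds that the four independent drives must track can be sketched with the standard Mecanum inverse-kinematics relations. This is textbook kinematics under one common sign convention, not the paper's controller; the wheel radius and chassis dimensions are made-up parameters.

```python
def mecanum_wheel_speeds(vx, vy, omega, r=0.05, lx=0.2, ly=0.15):
    """Map a chassis velocity (vx forward, vy left, omega yaw rate) to the
    four wheel angular speeds [FL, FR, RL, RR] for 45-degree Mecanum rollers.
    r: wheel radius; lx, ly: half wheelbase and half track width (metres)."""
    k = lx + ly
    return [
        (vx - vy - k * omega) / r,   # front-left
        (vx + vy + k * omega) / r,   # front-right
        (vx + vy - k * omega) / r,   # rear-left
        (vx - vy + k * omega) / r,   # rear-right
    ]

forward = mecanum_wheel_speeds(0.5, 0.0, 0.0)  # all wheels equal: straight
strafe = mecanum_wheel_speeds(0.0, 0.5, 0.0)   # alternating signs: sideways
print(forward, strafe)
```

The alternating signs in the strafe case are exactly why all four wheels must be driven independently yet kept synchronized: any mismatch turns pure sideways motion into unwanted rotation.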
Design/methodology/approach
Distributed mechatronics controllers implementing Controller Area Network (CAN) modules are used to satisfy the control requirements of the vehicles. In distributed control architectures, failures in one part of the control system can be compensated for by other parts of the system. A three‐layered control architecture is implemented for time‐critical tasks, event‐based tasks, and task planning. Global variables and broadcast communication are used on the CAN bus. Messages are accepted in individual distributed controller modules by subscription.
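The broadcast-plus-subscription pattern described above can be sketched as a toy message bus: every frame reaches every module, and each module keeps only the message IDs it has subscribed to (a simplified picture of CAN acceptance filtering). Class names, message IDs, and values are all invented for the example.

```python
class CanBus:
    """Minimal broadcast bus: every frame is delivered to every attached
    module; filtering happens at the receiver, as on a real CAN bus."""
    def __init__(self):
        self.modules = []

    def attach(self, module):
        self.modules.append(module)

    def broadcast(self, msg_id, data):
        for module in self.modules:
            module.receive(msg_id, data)

class WheelController:
    """A distributed module driving one wheel; it subscribes to a global
    speed-setpoint message so all four wheels stay synchronized."""
    def __init__(self, name, subscriptions):
        self.name = name
        self.subscriptions = subscriptions
        self.setpoint = 0.0

    def receive(self, msg_id, data):
        if msg_id in self.subscriptions:   # acceptance by subscription
            self.setpoint = data

bus = CanBus()
wheels = [WheelController(f"wheel{i}", {"SPEED_SET"}) for i in range(4)]
for w in wheels:
    bus.attach(w)

bus.broadcast("SPEED_SET", 1.5)     # time-critical layer sets all wheels
bus.broadcast("PLAN_HINT", "turn")  # ignored: no wheel subscribes to it
print([w.setpoint for w in wheels])
```

Because one broadcast reaches all four wheel modules at once, synchronization comes for free, but every extra module adds traffic that all receivers must filter, which is the scaling pressure noted in the findings.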
Findings
Increasing the number of distributed modules increases the number of CAN bus messages required for smooth operation of the vehicles. This requires the development of a higher layer to manage the messages on the CAN bus.
Research limitations/implications
The limitation of the research is that analysis of the distributed controllers that were developed is complex, and that there is no universally accepted tool for conducting the analysis. The other limitation is that the mathematical models of the mobile robot that have been developed need to be verified.
Practical implications
In the design of omni‐directional vehicles, reliability of the vehicle can be improved by modular design of mechanical system and electronic system of the wheel modules and the sensor modules.
Originality/value
The paper tries to show the advantages of distributed controllers for omni‐directional vehicles. To the authors' knowledge, this is a new concept.
Abstract
This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.