Search results

1 – 10 of over 19000
Article
Publication date: 21 December 2021

Laouni Djafri

This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other. Also, DDPML can be deployed on other distributed systems such as P2P…

Abstract

Purpose

This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other framework. Also, DDPML can be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.

Design/methodology/approach

In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can be used for prediction later. Thus, this knowledge becomes a great asset in companies' hands. This is precisely the objective of data mining. But with the production of a large amount of data and knowledge at a faster pace, the authors are now talking about Big Data mining. For this reason, the authors' proposed work mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. So, the problem that the authors raise in this work is how machine learning algorithms can be made to work in a distributed and parallel way at the same time without losing the accuracy of classification results. To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML) algorithms. To build it, the authors divided their work into two parts. In the first, the authors propose a distributed architecture that is controlled by a Map-Reduce algorithm, which in turn depends on a random sampling technique. The distributed architecture the authors designed is specially directed at big data processing and operates coherently and efficiently with the sampling strategy proposed in this work. This architecture also helps the authors verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB), the partial learning base for the first level (PLBL1) and the partial learning base for the second level (PLBL2). The experimental results show the efficiency of the proposed solution without significant loss of classification accuracy. Thus, in practical terms, the DDPML system is generally dedicated to big data mining and works effectively in distributed systems with a simple structure, such as client-server networks.
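To make the two-level sampling concrete, the sketch below applies stratified random sampling first per data partition and then over the pooled result. It is only an illustration under assumptions: the function names, the label accessor and the sampling fractions are invented here and are not the authors' DDPML implementation.

    import random
    from collections import defaultdict

    def stratified_sample(records, label_of, fraction, seed=0):
        """Stratified random sampling: draw the same fraction from every class."""
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for r in records:
            by_class[label_of(r)].append(r)
        sample = []
        for items in by_class.values():
            k = max(1, int(len(items) * fraction))
            sample.extend(rng.sample(items, k))
        return sample

    def build_rlb(partitions, label_of, f1=0.10, f2=0.20):
        # Level 1: each partition yields a partial learning base (PLBL1-like sample).
        level1 = [stratified_sample(p, label_of, f1, seed=i)
                  for i, p in enumerate(partitions)]
        # Level 2: a smaller representative learning base (RLB-like sample)
        # is drawn from the pooled level-1 samples.
        pooled = [r for part in level1 for r in part]
        return stratified_sample(pooled, label_of, f2, seed=42)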

Findings

The authors obtained very satisfactory classification results.

Originality/value

The DDPML system is specially designed to handle big data mining classification smoothly.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 1 June 2003

Jaroslav Mackerle

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics…

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords

Book part
Publication date: 8 November 2019

Peter Simon Sapaty

The chapter describes the basics of developed high-level spatial grasp technology (SGT) and its spatial grasp language (SGL) allowing us to create and manage very large distributed…

Abstract

The chapter describes the basics of the developed high-level spatial grasp technology (SGT) and its spatial grasp language (SGL), which allow us to create and manage very large distributed systems in physical, virtual and executive domains in a highly parallel manner and without any centralized resources. The main features of SGT, with its self-evolving and self-spreading spatial intelligence, the recursive nature of SGL and the organization of its networked interpreter will be briefly presented. Numerous interpreter copies can be installed worldwide and integrated with other systems or operate autonomously and collectively in critical situations. The relation of SGT, with its capability for holistic solutions in distributed systems, to gestalt psychology and theory, which show the unique ability of the human mind and brain to directly grasp the whole of different phenomena, will also be explained, with SGT serving as an attempt to implement the notion of gestalt for distributed applications.

Details

Complexity in International Security
Type: Book
ISBN: 978-1-78973-716-5

Book part
Publication date: 8 November 2019

Peter Simon Sapaty

The chapter offers complete details of the latest SGL version particularly suitable for dealing with large security systems and emerging crisis situations. It describes main types…

Abstract

The chapter offers complete details of the latest SGL version, which is particularly suitable for dealing with large security systems and emerging crisis situations. It describes the main types of constants, representing information, physical matter or both, and five very different and specific types of variables operating in fully distributed spaces and even being mobile themselves when serving spreading algorithms. It also gives the full repertoire of the language operations, called rules, which can be arbitrarily nested and carry different navigation, creation, processing, assignment, control, verification, context, exchange, transference, echoing, timing and other loads. The rules operate equally with local and remote values, process both matter and distributed networked knowledge, and can express active graph-based patterns that navigate, match, conquer and change distributed environments. Elementary programming examples in SGL are also provided.

Details

Complexity in International Security
Type: Book
ISBN: 978-1-78973-716-5

Article
Publication date: 4 March 2014

Yuji Sato and Mikiko Sato

The purpose of this paper is to propose a fault-tolerant technology for increasing the durability of application programs when evolutionary computation is performed by fast…

Abstract

Purpose

The purpose of this paper is to propose a fault-tolerant technology for increasing the durability of application programs when evolutionary computation is performed by fast parallel processing on many-core processors such as graphics processing units (GPUs) and multi-core processors (MCPs).

Design/methodology/approach

For distributed genetic algorithm (GA) models, the paper proposes a method where an island's ID number is added to the header of data transferred by this island for use in fault detection.
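As a rough illustration of the header idea (not the authors' implementation), the sketch below tags each migrating individual with the ID of the island that sent it and flags islands whose migrants never arrive. The message layout, names and detection rule are assumptions made for this example.

    from dataclasses import dataclass, field

    @dataclass
    class Migrant:
        island_id: int                  # header field: which island sent this individual
        genome: list = field(default_factory=list)

    def check_migrants(expected_sources, inbox):
        """Fault-detection sketch: verify that every expected neighbour island
        actually delivered migrants this generation, using the ID in the header."""
        seen = {m.island_id for m in inbox}
        missing = set(expected_sources) - seen
        if missing:
            # A stuck-at or transient fault is suspected on the silent islands;
            # the receiver can keep evolving its own population or re-request migrants.
            print("suspected faulty islands:", sorted(missing))
        return [m for m in inbox if m.island_id in expected_sources]

    # Example: island 0 expects migrants from islands 1 and 2, but island 2 is silent.
    survivors = check_migrants({1, 2}, [Migrant(island_id=1, genome=[0, 1, 1, 0])])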

Findings

The paper shows that the processing time of the proposed idea is practically negligible in applications, that an optimal solution can be obtained even with a single stuck-at fault or a transient fault, and that increasing the number of parallel threads makes the system less susceptible to faults.

Originality/value

The study described in this paper is a new approach to increasing the sustainability of application programs using distributed GAs on GPUs and MCPs.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 1 March 1990

Kees de Leeuw

The major developments and trends in computer hardware and software are discussed, and those developments that will influence the use of computers in logistics in the coming…

Abstract

The major developments and trends in computer hardware and software are discussed, and those developments that will influence the use of computers in logistics in the coming decade are highlighted. The main areas in the logistics operation where changes will take place are identified. The human element is examined, particularly the type of people needed to operate future logistical computer systems.

Details

Logistics Information Management, vol. 3 no. 3
Type: Research Article
ISSN: 0957-6053

Keywords

Article
Publication date: 1 May 2005

Tore Fjellheim, Stephen Milliner and Marlon Dumas

Mobile devices have received much research interest in recent years. Mobility raises new issues such as more dynamic context, limited computing resources, and frequent…

Abstract

Mobile devices have received much research interest in recent years. Mobility raises new issues such as more dynamic context, limited computing resources and frequent disconnections. A middleware infrastructure for mobile computing must handle all of these issues properly. In this project we propose a middleware, called 3DMA, to support mobile computing. We introduce three requirements (distribution, decoupling and decomposition) as central issues for mobile middleware. 3DMA uses a space-based middleware, which facilitates the implementation of decoupled behavior and supports disconnected operation and context awareness. This is done by defining a set of “workers” that are able to act on the user's behalf, either to reduce the load on the mobile device, to support disconnected behavior, or both. To demonstrate aspects of the middleware architecture, we then consider the development of a commonly used mobile application.
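A toy sketch of the decoupled, space-based worker pattern described above follows; it is not the 3DMA API, and the queue-backed "space", the task format and the worker name are assumptions made for illustration.

    import queue
    import threading

    space = queue.Queue()    # stands in for the shared tuple space

    def worker(name):
        # A worker takes tasks from the space on the user's behalf, so the
        # mobile device can disconnect while the work is carried out elsewhere.
        while True:
            task = space.get()
            if task is None:              # shutdown signal
                break
            user, job = task
            print(f"{name} handling '{job}' for {user}")
            space.task_done()

    threading.Thread(target=worker, args=("worker-1",), daemon=True).start()

    # The mobile client simply writes a task into the space and may go offline.
    space.put(("alice", "sync-mail"))
    space.join()              # the task completes even if the client has disconnected
    space.put(None)           # stop the worker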

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Content available
Book part
Publication date: 30 January 2023

Peter Simon Sapaty

Details

The Spatial Grasp Model
Type: Book
ISBN: 978-1-80455-574-3

Article
Publication date: 1 March 2006

Dimitris Kehagias, Michael Grivas, Basilis Mamalis and Grammati Pantziou

The purpose of this paper is to evaluate the use of a non‐expensive dynamic computing resource, consisting of a Beowulf class cluster and a NoW, as an educational and research…

Abstract

Purpose

The purpose of this paper is to evaluate the use of a non‐expensive dynamic computing resource, consisting of a Beowulf class cluster and a NoW, as an educational and research infrastructure.

Design/methodology/approach

Clusters, built using commodity‐off‐the‐shelf (COTS) hardware components and free, or commonly used, software, provide an inexpensive computing resource to educational institutions. The Department of Informatics of TEI, Athens, has built a dynamic clustering system, called DYNER (DYNamic clustER), consisting of a Beowulf‐class cluster and a network of workstations (NoW). This paper evaluates the use of the DYNER system as a platform for running the laboratory work of various courses (parallel computing, operating systems, distributed computing), as well as various parallel applications developed within on‐going research projects. Three distinct groups from the academic community of the TEI of Athens can benefit directly from the DYNER platform: the students of the Department of Informatics, the faculty members and researchers of the department, and researchers from other departments of the institution.

Findings

The results obtained were positive and satisfactory. The use of the dynamic cluster offers students new abilities regarding high-performance computing, which will improve their potential for professional excellence.

Research limitations/implications

The implications of this research study are that the students clarified issues, such as “doubling the number of processors does not mean doubling execution speed”, and learned how to build and configure a cluster without going deeply into the complexity of the software set‐up.
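The observation that doubling the processors does not double the speed can be made concrete with Amdahl's law; the sketch below is only an illustration, and the 90% parallel fraction is an arbitrary assumed figure, not a measurement from the DYNER laboratories.

    def amdahl_speedup(parallel_fraction, processors):
        # Amdahl's law: speedup is bounded by the serial part of the program.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

    for p in (1, 2, 4, 8, 16):
        print(p, "processors ->", round(amdahl_speedup(0.90, p), 2), "x")
    # With a 90% parallel program, 2 processors give ~1.82x (not 2x) and 16 give only ~6.4x.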

Practical implications

This research gives students the ability to gain hands‐on experience on a platform that is useful but not very familiar to them, and gives faculty members from a variety of disciplines more computing power for their research.

Originality/value

This paper presents a dynamic clustering system whose versatility and flexibility with respect to configuration and functionality, together with its dynamic, strong computational power, render it a very helpful tool for educational and research purposes.

Details

Campus-Wide Information Systems, vol. 23 no. 2
Type: Research Article
ISSN: 1065-0741

Keywords

Article
Publication date: 25 June 2020

Abedalmuhdi Almomany, Ahmad M. Al-Omari, Amin Jarrah and Mohammad Tawalbeh

The problem of motif discovery has become a significant challenge in the era of big data where there are hundreds of genomes requiring annotations. The importance of motifs has…

Abstract

Purpose

The problem of motif discovery has become a significant challenge in the era of big data, where there are hundreds of genomes requiring annotation. The importance of motifs has led many researchers to develop different tools and algorithms for finding them. The purpose of this paper is to propose a new algorithm to increase the speed and accuracy of the motif discovery process, the lack of which is the main drawback of current motif discovery algorithms.

Design/methodology/approach

All motifs are sorted in a tree-based indexing structure, where each motif is created from a combination of the nucleotides ‘A’, ‘C’, ‘T’ and ‘G’. The full motif can be discovered by extending the search around 4-mer nucleotides in both directions, left and right. The resulting motifs may be identical or degenerate, with various lengths.
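A simplified sketch of the 4-mer indexing and left/right extension idea follows; the dictionary index, the fixed flank size and the example sequences are stand-ins assumed for illustration, not the paper's tree-based structure or the authors' algorithm.

    from collections import defaultdict

    def index_4mers(sequences):
        """Map every 4-mer (a combination of 'A', 'C', 'G', 'T') to its occurrences."""
        index = defaultdict(list)
        for sid, seq in enumerate(sequences):
            for i in range(len(seq) - 3):
                index[seq[i:i + 4]].append((sid, i))
        return index

    def extend(sequences, occurrences, flank=3):
        """Grow a candidate motif around a shared 4-mer seed in both directions,
        keeping whatever flanking context each sequence actually contains."""
        motifs = []
        for sid, i in occurrences:
            seq = sequences[sid]
            left = max(0, i - flank)
            right = min(len(seq), i + 4 + flank)
            motifs.append(seq[left:right])
        return motifs

    seqs = ["ACGTACGTGA", "TTACGTACGA"]
    idx = index_4mers(seqs)
    print(extend(seqs, idx["ACGT"]))    # candidate motifs of various lengths around the seed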

Findings

The developed implementation discovers conserved string motifs in DNA without having prior information about the motifs. Even for a large data set that contains millions of nucleotides and thousands of very long sequences, the entire process is completed in a few seconds.

Originality/value

Experimental results demonstrate the efficiency of the proposed implementation: for a real sequence of 1,270,000 nucleotides spread over 2,000 samples, it takes 5.9 s to complete the overall discovery process when the code runs on an Intel Core i7-6700 @ 3.4 GHz machine and 26.7 s when running on an Intel Xeon x5670 @ 2.93 GHz machine. In addition, the authors have improved computational performance by parallelizing the implementation to run on multi-core machines using the OpenMP framework. The speedup achieved by parallelizing the implementation is scalable and proportional to the number of processors, with a high efficiency that is close to 100%.

Details

Engineering Computations, vol. 38 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords
