Search results

1 – 10 of 169
Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. For data parallelism, the paper adopted stream‐based partitioning for XML to partition XML data as the basis of parallelism on multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to establish data independence and balance loads among CPU cores; each bucket was assigned to a CPU core. Within each bucket, the second level of XML data partitioning created finer partitions to provide finer‐grained parallelism. Each CPU core performed the holistic twig join algorithm on its own finer partitions in parallel with the other CPU cores. In task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique, and its outputs were transferred to the second task periodically. Since data transfers incurred overheads, the size of each data transfer needed to be estimated carefully to achieve optimal performance.
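The two-level partitioning and pipelining described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the greedy least-loaded bucket heuristic and the toy (node, cost) stream are assumptions.

```python
def make_buckets(stream, n_cores):
    """Level 1: assign each (node, cost) item from the XML stream to the
    currently least-loaded bucket, so buckets are data-independent and
    roughly load-balanced; each bucket maps onto one CPU core."""
    buckets = [[] for _ in range(n_cores)]
    loads = [0] * n_cores
    for node, cost in stream:
        i = loads.index(min(loads))  # least-loaded bucket so far
        buckets[i].append(node)
        loads[i] += cost
    return buckets

def sub_partition(bucket, chunk_size):
    """Level 2: cut a bucket into finer partitions; a core runs the twig
    join on one chunk at a time and can pipeline outputs to the next task."""
    return [bucket[i:i + chunk_size] for i in range(0, len(bucket), chunk_size)]

stream = [(f"n{i}", 1 + i % 3) for i in range(12)]  # toy (node, cost) items
buckets = make_buckets(stream, n_cores=4)
chunks = [sub_partition(b, chunk_size=2) for b in buckets]
```

The chunk size plays the role of the transfer-size parameter the paper tunes: smaller chunks give finer pipelining but more transfer overhead.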

Findings

The data and task parallelism techniques contribute to good performance, especially for queries having complex structures and/or higher query selectivity. The performance of data parallelism can be further improved by task parallelism. Queries with higher selectivity attain significant performance improvement because more of the output computation in the second task is performed in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long‐running query for intra‐query parallelism, partitioning XML data on‐the‐fly, and allocating partitions to CPU cores statically. Parallel execution assumes that no dynamic XML data updates occur.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques to increase parallelism by combining data and task parallelism for high performance. To the best of the authors' knowledge, this is the first paper to parallelize holistic twig join algorithms on a multi‐core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 11 September 2009

Petar Ivanov and Kostadin Brandisky

Abstract

Purpose

The purpose of this paper is to present a parallel implementation of an evolution strategy (ES) algorithm for optimization of electromagnetic devices. It is intended for multi‐core processors and for optimization problems that have objective function representing a numerical simulation of electromagnetic devices. The speed‐up of the optimization is evaluated as a function of the number of processor cores used.

Design/methodology/approach

Two parallelization approaches are implemented in the program developed – using multithreaded programming and using OpenMP. Their advantages and drawbacks are discussed. The program is tested on two examples for optimization of electromagnetic devices.

Findings

Using the developed parallel ES algorithm on a quad‐core processor, the optimization time can be reduced 2.4‐3 times, instead of the expected four times. This is because a number of system processes and programs run on some of the cores.
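The reported quad-core numbers can be put in perspective with a small worked example: a 2.4-3x speedup on four cores corresponds to 60-75% parallel efficiency, and inverting Amdahl's law gives the serial fraction that would account for it. One assumption to flag: Amdahl's model attributes the loss to serial work, whereas the paper attributes it to background system processes occupying some cores.

```python
def efficiency(speedup, cores):
    """Parallel efficiency: observed speedup divided by core count."""
    return speedup / cores

def serial_fraction(speedup, cores):
    """Invert Amdahl's law  S = 1 / (f + (1 - f) / p)  for the serial
    fraction f, given observed speedup S on p cores."""
    return (cores / speedup - 1) / (cores - 1)

for s in (2.4, 3.0):
    print(f"speedup {s}: efficiency {efficiency(s, 4):.0%}, "
          f"implied serial fraction {serial_fraction(s, 4):.1%}")
```

For the reported range, the implied serial fraction is roughly 11-22% of the work.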

Originality/value

A new parallel ES optimization algorithm has been developed and investigated. The paper could be useful for researchers aiming to diminish the optimization time by using parallel evolution optimization on multi‐core processors.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 28 no. 5
Type: Research Article
ISSN: 0332-1649

Keywords

Book part
Publication date: 19 November 2014

Martin Burda

Abstract

The BEKK GARCH class of models presents a popular set of tools for applied analysis of dynamic conditional covariances. Within this class the analyst faces a range of model choices that trade off flexibility with parameter parsimony. In the most flexible unrestricted BEKK the parameter dimensionality increases quickly with the number of variables. Covariance targeting decreases model dimensionality but induces a set of nonlinear constraints on the underlying parameter space that are difficult to implement. Recently, the rotated BEKK (RBEKK) has been proposed whereby a targeted BEKK model is applied after the spectral decomposition of the conditional covariance matrix. An easily estimable RBEKK implies a full albeit constrained BEKK for the unrotated returns. However, the degree of the implied restrictiveness is currently unknown. In this paper, we suggest a Bayesian approach to estimation of the BEKK model with targeting based on Constrained Hamiltonian Monte Carlo (CHMC). We take advantage of suitable parallelization of the problem within CHMC utilizing the newly available computing power of multi-core CPUs and Graphical Processing Units (GPUs) that enables us to deal effectively with the inherent nonlinear constraints posed by covariance targeting in relatively high dimensions. Using parallel CHMC we perform a model comparison in terms of predictive ability of the targeted BEKK with the RBEKK in the context of an application concerning a multivariate dynamic volatility analysis of a Dow Jones Industrial returns portfolio. Although the RBEKK does improve over a diagonal BEKK restriction, it is clearly dominated by the full targeted BEKK model.
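The dimensionality growth mentioned above is easy to make concrete. For a full BEKK(1,1) with n series, the standard count is n(n+1)/2 free entries in the lower-triangular intercept plus n² each for the ARCH and GARCH matrices; the sketch below tabulates this (the choice n = 30 as a stand-in for a Dow Jones Industrial portfolio is an assumption):

```python
def bekk_param_count(n):
    """Parameter count of a full (unrestricted) BEKK(1,1) with n series:
    n(n+1)/2 for the lower-triangular intercept C, plus n*n each for the
    ARCH matrix A and the GARCH matrix B."""
    return n * (n + 1) // 2 + 2 * n * n

for n in (2, 5, 30):
    print(f"n = {n:>2}: {bekk_param_count(n)} parameters")
```

Already at n = 30 the full model has over two thousand parameters, which is why targeting and the parallel CHMC machinery matter in higher dimensions.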

Details

Bayesian Model Comparison
Type: Book
ISBN: 978-1-78441-185-5

Keywords

Article
Publication date: 17 October 2018

Sura Nawfal and Fakhrulddin Ali

Abstract

Purpose

The purpose of this paper is to achieve the acceleration of 3D object transformation using parallel techniques such as a multi-core central processing unit (MC CPU), a graphics processing unit (GPU) or both. Generating 3D animation scenes in computer graphics requires applying 3D transformations to the vertices of the objects, and these transformations consume most of the execution time. Hence, for high-speed graphics systems, accelerating the vertex transform is highly desirable: many matrix operations need to be performed in real time, so execution time is critical for such processing.

Design/methodology/approach

In this paper, the acceleration of 3D object transformation is achieved using parallel techniques such as MC CPU or GPU or even both. Multiple geometric transformations are concatenated together at a time in any order in an interactive manner.
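Concatenating the transformations into a single matrix, then applying that one matrix to every vertex, is the core idea; below is a minimal pure-Python sketch (illustrative only, not the paper's MC CPU/GPU code — the specific rotate-then-translate composition is an assumption):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(m, v):
    """Apply a 4x4 matrix to a 3D vertex in homogeneous coordinates."""
    x, y, z = v
    h = [x, y, z, 1]
    return tuple(sum(m[i][j] * h[j] for j in range(4)) for i in range(3))

# Concatenate once, then reuse the combined matrix for every vertex:
# rotate 90 degrees about z, then translate by (1, 0, 0).
combined = mat_mul(translate(1, 0, 0), rotate_z(math.pi / 2))
moved = apply(combined, (1.0, 0.0, 0.0))
```

Because the combined matrix is built once, the per-vertex cost is a single matrix-vector product — the operation that GPU or MC CPU parallelism then spreads across millions of vertices.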

Findings

The performance results are presented for a number of 3D objects with paralleled implementations of the affine transform on the NVIDIA GPU series. The maximum execution time was about 0.508 s to transform 100 million vertices using LabVIEW and 0.096 s using Visual Studio. Other results also showed the significant speed-up compared to CPU, MC CPU and other previous work computations for the same object complexity.

Originality/value

The high-speed execution of 3D models is essential in many applications such as medical imaging, 3D games and robotics.

Details

Journal of Engineering, Design and Technology, vol. 16 no. 6
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 25 June 2020

Abedalmuhdi Almomany, Ahmad M. Al-Omari, Amin Jarrah and Mohammad Tawalbeh

Abstract

Purpose

The problem of motif discovery has become a significant challenge in the era of big data where there are hundreds of genomes requiring annotations. The importance of motifs has led many researchers to develop different tools and algorithms for finding them. The purpose of this paper is to propose a new algorithm to increase the speed and accuracy of the motif discovering process, which is the main drawback of motif discovery algorithms.

Design/methodology/approach

All motifs are sorted in a tree-based indexing structure where each motif is created from a combination of nucleotides: ‘A’, ‘C’, ‘T’ and ‘G’. The full motif can be discovered by extending the search around 4-mer nucleotides in both directions, left and right. Resultant motifs may be identical or degenerate, with various lengths.
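The seed-and-extend idea around 4-mers can be sketched as follows; the toy sequences and the two-sequence extension rule are illustrative assumptions, not the authors' exact procedure:

```python
def seed_positions(seq, seed):
    """All start positions of a k-mer seed in one sequence."""
    k = len(seed)
    return [i for i in range(len(seq) - k + 1) if seq[i:i + k] == seed]

def extend_seed(s1, p1, s2, p2, k=4):
    """Grow a shared k-mer seed left and right while the two sequences
    keep agreeing; returns the maximal common motif around the seed."""
    left, right = p1, p1 + k   # motif bounds inside s1
    off = p2 - p1              # alignment offset into s2
    while left > 0 and left + off > 0 and s1[left - 1] == s2[left - 1 + off]:
        left -= 1
    while right < len(s1) and right + off < len(s2) and s1[right] == s2[right + off]:
        right += 1
    return s1[left:right]

s1, s2 = "TTACGTAG", "CCACGTAT"
motif = extend_seed(s1, 2, s2, 2)  # seed "ACGT" occurs at index 2 in both
```

In the paper's setting, each per-seed extension is independent, which is why the workload parallelizes well across OpenMP threads.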

Findings

The developed implementation discovers conserved string motifs in DNA without having prior information about the motifs. Even for a large data set that contains millions of nucleotides and thousands of very long sequences, the entire process is completed in a few seconds.

Originality/value

Experimental results demonstrate the efficiency of the proposed implementation: for a real sequence of 1,270,000 nucleotides spread over 2,000 samples, the overall discovery process takes 5.9 s when the code runs on an Intel Core i7-6700 @ 3.4 GHz machine and 26.7 s on an Intel Xeon x5670 @ 2.93 GHz machine. In addition, the authors have improved computational performance by parallelizing the implementation to run on multi-core machines using the OpenMP framework. The speedup achieved by parallelizing the implementation is scalable and proportional to the number of processors, with efficiency close to 100%.

Details

Engineering Computations, vol. 38 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 17 April 2009

Christine Connolly

Abstract

Purpose

The purpose of this paper is to review the progress of machine vision as it applies to automated assembly applications.

Design/methodology/approach

A series of technological developments is described: 3D vision, smart cameras, near infrared (NIR) imaging and LED illumination. Associated with each are relevant assembly applications.

Findings

Advances in multi‐core processors are facilitating the development of 3D image processing algorithms for robot guidance and product inspection, which in turn enable the automation of skilful and labour‐intensive tasks. Machine vision products are becoming more capable, yet simpler to use. NIR imaging is useful for inspecting semiconductors and bottle filling. Advances in LED lighting address difficult inspection tasks at the macro and microscopic levels.

Originality/value

The paper recognises the emergence of 3D machine vision as a new tool in assembly automation and updates engineers on other relevant machine vision advances.

Details

Assembly Automation, vol. 29 no. 2
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 19 June 2009

Chantola Kit, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to propose efficient algorithms for structural grouping over Extensible Markup Language (XML) data, called TOPOLOGICAL ROLLUP (T‐ROLLUP), which compute aggregation functions over XML data with multiple hierarchical levels. These play important roles in the online analytical processing of XML data, called XML‐OLAP, with which complex analyses over XML can be performed to discover valuable information.

Design/methodology/approach

Several variations of algorithms are proposed for efficient T‐ROLLUP computation. First, two basic algorithms, top‐down algorithm (TDA) and bottom‐up algorithm (BUA), are presented in which the well‐known structural‐join algorithms are used. The paper then proposes more efficient algorithms, called single‐scan by preorder number and single‐scan by postorder number (SSC‐Pre/Post), which are also based on structural joins, but have been modified from the basic algorithms so that multiple levels of grouping are computed with a single scan over node lists. In addition, the paper attempts to adopt the algorithm for parallel execution in multi‐core environments.
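The pre/postorder numbering that the SSC-Pre/Post algorithms build on can be sketched briefly: one depth-first pass assigns each node two numbers, after which ancestor-descendant tests — the heart of a structural join — reduce to two integer comparisons. The XMark-like labels and tuple-based tree are illustrative assumptions.

```python
def number_tree(tree):
    """One DFS assigns each node a preorder and a postorder number.
    Tree nodes are (label, [children]) tuples."""
    pre, post = {}, {}
    counters = [0, 0]  # next preorder number, next postorder number
    def dfs(node):
        label, children = node
        pre[label] = counters[0]; counters[0] += 1
        for child in children:
            dfs(child)
        post[label] = counters[1]; counters[1] += 1
    dfs(tree)
    return pre, post

def is_ancestor(a, b, pre, post):
    """a is an ancestor of b iff pre(a) < pre(b) and post(a) > post(b)."""
    return pre[a] < pre[b] and post[a] > post[b]

tree = ("site", [("people", [("person", [])]), ("regions", [])])
pre, post = number_tree(tree)
```

Scanning node lists in pre- or postorder lets multiple grouping levels be aggregated in a single pass, which is the efficiency the SSC variants exploit.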

Findings

Several experiments are conducted with XMark and synthetic XML data to show the effectiveness of the proposed algorithms. The experiments show that the proposed algorithms perform much better than the naïve implementation. In particular, the proposed SSC‐Pre and SSC‐Post perform better than TDA and BUA in all cases. Moreover, the parallel single‐scan algorithm also performs better than the ordinary basic algorithms.

Research limitations/implications

This paper focuses on the T‐ROLLUP operation for XML data analysis. For this reason, other operations related to XML‐OLAP, such as CUBE, WINDOWING, and RANKING should also be investigated.

Originality/value

The paper presents an extended version of one of the award winning papers at iiWAS2008.

Details

International Journal of Web Information Systems, vol. 5 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 19 October 2015

Wasim Ahmad Bhat and S.M.K. Quadri

Abstract

Purpose

The purpose of this paper is to explore the challenges posed by Big Data to current trends in computation, networking and storage technology at various stages of Big Data analysis. The work aims to bridge the gap between theory and practice, and highlight the areas of potential research.

Design/methodology/approach

The study employs a systematic and critical review of the relevant literature to explore the challenges posed by Big Data to hardware technology, and assess the worthiness of hardware technology at various stages of Big Data analysis. Online computer-databases were searched to identify the literature relevant to: Big Data requirements and challenges; and evolution and current trends of hardware technology.

Findings

The findings reveal that even though current hardware technology has not evolved with the motivation to support Big Data analysis, it significantly supports Big Data analysis at all stages. However, they also point toward some important shortcomings and challenges of current technology trends. These include: lack of intelligent Big Data sources; need for scalable real-time analysis capability; lack of support (in networks) for latency-bound applications; need for necessary augmentation (in network support) for peer-to-peer networks; and rethinking on cost-effective high-performance storage subsystem.

Research limitations/implications

The study suggests that a lot of research is yet to be done in hardware technology, if full potential of Big Data is to be unlocked.

Practical implications

The study suggests that practitioners need to meticulously choose the hardware infrastructure for Big Data considering the limitations of technology.

Originality/value

This research arms industry, enterprises and organizations with the concise and comprehensive technical-knowledge about the capability of current hardware technology trends in solving Big Data problems. It also highlights the areas of potential research and immediate attention which researchers can exploit to explore new ideas and existing practices.

Details

Industrial Management & Data Systems, vol. 115 no. 9
Type: Research Article
ISSN: 0263-5577

Keywords

Book part
Publication date: 11 August 2017

Valentin Cojanu

Abstract

This chapter contributes to the conceptual effort to find an ‘encompassing framework’ for understanding the rugged landscape of territorial development. A paradigm shift is needed to reflect the gains from trade increasingly as a result of territorial communality rather than market optimality.

This contribution reviews first the tenets of the core-periphery models premised on three interpretations of space, that is, uniform-abstract space, diversified-relational space and uniform-stylised space. The conventional (spatial) models of peripherality are increasingly questionable when considering the relevance of more appropriate ‘aspatial’ concepts for understanding the conditions for growth and development across territories.

The conclusions emphasise the need to drop the norm of a universal policy related to a space of development divided in advanced and lagging areas. The implications range from re-stating the unit of analysis to re-stating the role of policy coordination in a multi-core integration environment.

This chapter attempts to evade the ‘illusion’ of the coincidence of political space with economic and human space. We aim at gaining ground towards a framework of analysing development that substitutes relational specificity of local economies for uniform territories of aggregate socio-economic features.

Details

Core-Periphery Patterns Across the European Union
Type: Book
ISBN: 978-1-78714-495-8

Keywords

Article
Publication date: 11 February 2021

Yongxing Guo, Min Chen, Li Xiong, Xinglin Zhou and Cong Li

Abstract

Purpose

The purpose of this study is to present the state of the art for fiber Bragg grating (FBG) acceleration sensing technologies from two aspects: the principle of the measurement dimension and the principle of the sensing configuration. Some commercial sensors are also introduced, and future work in this field is discussed. This paper could provide an important reference for the research community.

Design/methodology/approach

This review is to present the state of the art for FBG acceleration sensing technologies from two aspects: the principle of the measurement dimension (one-dimension and multi-dimension) and the principle of the sensing configuration (beam type, radial vibration type, axial vibration type and other composite structures).

Findings

The current research on developing FBG acceleration sensors mainly focuses on the sensing method, the construction and design of the elastic structure and the design of new information detection methods. This paper hypothesizes that the following research trends will be strengthened in the future: low-cost, high-utilization common single-mode fiber gratings; high-sensitivity, high-strength specialty fiber gratings; multi-core fiber gratings for measuring single-parameter multi-dimensional information or multi-parameter information; and low-cost, small-volume, high-sampling-frequency demodulation equipment.

Originality/value

The principle of the measurement dimension and principle of the sensing configuration for FBG acceleration sensors have been introduced, which could provide an important reference for the research community.

Details

Sensor Review, vol. 41 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords
