Search results

1 – 10 of 615
Article
Publication date: 12 October 2018

Beichuan Yan and Richard Regueiro

This paper aims to present a performance comparison between O(n²) and O(n) neighbor search algorithms, study their effects for different particle shape complexities and computational…

Abstract

Purpose

This paper aims to present a performance comparison between O(n²) and O(n) neighbor search algorithms, study their effects for different particle shape complexities and computational granularities (CG) and investigate their influence on the superlinear speedup of 3D discrete element method (DEM) simulations of complex-shaped particles. In particular, it aims to answer the question: which performs better in parallel 3D DEM computational practice, the O(n²) or the O(n) neighbor search algorithm?

Design/methodology/approach

The O(n²) and O(n) neighbor search algorithms are carefully implemented in the code paraEllip3d, which is executed on Department of Defense supercomputers across five orders of magnitude of simulation scale (2,500; 12,000; 150,000; 1 million and 10 million particles) to evaluate and compare the performance, using both strong and weak scaling measurements.

Findings

The more complex the particle shapes (from spheres to ellipsoids to poly-ellipsoids), the smaller the neighbor search fraction (NSF); and the lower the CG, the smaller the NSF. In both serial and parallel computing of complex-shaped 3D DEM, the O(n²) algorithm is inefficient at coarse CGs; however, it executes faster than the O(n) algorithm at the fine CGs that are mostly used in computational practice to achieve the best performance. This means that the O(n²) algorithm generally outperforms O(n) in parallel 3D DEM.
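As a rough illustration of the two strategies being compared, the following minimal Python sketch (not code from paraEllip3d; it assumes spherical particles with a single cutoff distance rather than the paper's poly-ellipsoids) contrasts an O(n²) all-pairs search with an O(n) linked-cell search:

from collections import defaultdict
from itertools import product

def neighbors_on2(points, cutoff):
    """O(n^2): test every pair of particles directly."""
    c2 = cutoff * cutoff
    pairs = []
    for i in range(len(points)):
        xi, yi, zi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj, zj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= c2:
                pairs.append((i, j))
    return pairs

def neighbors_on(points, cutoff):
    """O(n): bin particles into cells of edge `cutoff`, then test only the
    27 surrounding cells of each particle (linked-cell / binning search)."""
    cells = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(idx)
    c2 = cutoff * cutoff
    pairs = []
    for (cx, cy, cz), members in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for i in members:
                for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                    if j <= i:
                        continue  # count each pair once
                    xi, yi, zi = points[i]
                    xj, yj, zj = points[j]
                    if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= c2:
                        pairs.append((i, j))
    return pairs

Even in this toy form the trade-off the paper quantifies is visible: the O(n) variant adds cell-binning bookkeeping whose relative cost depends on how many particles each process owns, which is why the neighbor search fraction and the computational granularity matter.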

Practical implications

Taking for granted that O(n) unconditionally outperforms O(n²) in complex-shaped 3D DEM is a misconception commonly encountered in the computational engineering and science literature.

Originality/value

The paper clarifies that the performance of the O(n²) and O(n) neighbor search algorithms for complex-shaped 3D DEM is affected by particle shape complexity and CG. In particular, the O(n²) algorithm generally outperforms the O(n) algorithm in large-scale parallel 3D DEM simulations, even though this outperformance is counterintuitive.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 16 April 2018

Beichuan Yan and Richard Regueiro

The purpose of this paper is to extend complex-shaped discrete element method simulations from a few thousand particles to millions of particles by using parallel computing on…


Abstract

Purpose

The purpose of this paper is to extend complex-shaped discrete element method simulations from a few thousand particles to millions of particles by using parallel computing on Department of Defense (DoD) supercomputers and to study the mechanical response of particle assemblies composed of a large number of particles in engineering practice and laboratory tests.

Design/methodology/approach

A parallel algorithm is designed and implemented with advanced features such as link-block, border-layer and migration-layer schemes, an adaptive compute gridding technique and message passing interface (MPI) transmission of C++ objects and pointers for high-performance optimization; performance analyses are conducted across five orders of magnitude of simulation scale on multiple DoD supercomputers; and three full-scale simulations of sand pluviation, constrained collapse and particle shape effect are carried out to study the mechanical response of particle assemblies.
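As a hedged sketch of the border-layer idea only (the actual paraEllip3d implementation is in C++ and far more elaborate), a 1D domain decomposition with mpi4py might exchange ghost particles between neighboring ranks as below; the domain bounds, layer width, particle list and file name are invented for illustration:

# Run with e.g. `mpirun -n 4 python ghost_exchange.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Assumed global domain [0, 1) along x, split evenly into one slab per rank.
slab = 1.0 / size
lo, hi = rank * slab, (rank + 1) * slab
border = 0.05  # assumed width of the border (ghost) layer

# Hypothetical particles owned by this rank: (x, y, z) tuples.
particles = [(lo + 0.5 * slab, 0.0, 0.0)]

# Particles near each face become the neighbor's ghost (border) layer.
send_left = [p for p in particles if p[0] < lo + border]
send_right = [p for p in particles if p[0] >= hi - border]

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Pickle-based exchange of Python objects, analogous in spirit to the paper's
# MPI transmission of C++ objects: send right / receive from left, and vice versa.
ghosts_from_left = comm.sendrecv(send_right, dest=right, source=left)
ghosts_from_right = comm.sendrecv(send_left, dest=left, source=right)
ghost_layer = (ghosts_from_left or []) + (ghosts_from_right or [])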

Findings

The parallel algorithm and implementation exhibit high speedup and excellent scalability; communication time is a decreasing function of the number of compute nodes, and the optimal computational granularity for each simulation scale is given. Nearly 50 per cent of wall clock time is spent on the rebound phenomenon at the top of the particle assembly in the dynamic simulation of sand gravitational pluviation. Numerous particles are necessary to capture the pattern and shape of the particle assembly in collapse tests; a preliminary comparison between a sphere assembly and an ellipsoid assembly indicates a significant influence of particle shape on the kinematic, kinetic and static behavior of particle assemblies.

Originality/value

The high-performance parallel code enables the simulation of a wide range of dynamic and static laboratory and field tests in engineering applications that involve a large number of granular and geotechnical material grains, such as the sand pluviation process, buried explosions in various soils, earth penetrator interaction with soil, the influence of grain size, shape and gradation on packing density and shear strength, and mechanical behavior under different gravity environments such as on the Moon and Mars.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 23 February 2010

Mauro Onori and José Barata Oliveira

This roadmap is primarily concerned with the adaptive assembly technology situation in Europe, a topic of particular interest as assembly is often the final process within…


Abstract

Purpose

This roadmap is primarily concerned with the adaptive assembly technology situation in Europe, a topic of particular interest as assembly is often the final process within manufacturing operations. Being the final set of operations on the product, and being traditionally labour‐intensive, assembly has been considerably affected by globalisation. Therefore, unlike most technology roadmaps, this report will not focus solely on particular technologies, but will strive to form a broader perspective on the conditions that may come to influence the opportunities, including political aspects and scientific paradigms. The purpose of this paper is to convey a complete view of the global mechanisms that may come to affect technological breakthroughs, and also present strategies that may better prepare for such a forecast.

Design/methodology/approach

The paper describes a technological roadmap.

Findings

This paper provides a complete overview of all aspects that may come to affect assembly in Europe within the next 20 years.

Originality/value

The paper presents an original result of the Evolvable Ultra Precision Assembly Systems FP6 project, which will be of general interest for strategic R&D.

Details

Assembly Automation, vol. 30 no. 1
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 7 August 2017

Hao Wang and Sanhong Deng

In the era of Big Data, network digital resources are growing rapidly; short-text resources in particular, such as tweets, comments and messages, are showing a vigorous…

Abstract

Purpose

In the era of Big Data, network digital resources are growing rapidly; short-text resources in particular, such as tweets, comments and messages, are showing a vigorous vitality. This study aims to compare the category discriminative capacity (CDC) of Chinese language fragments of different granularities and to explore and verify the feasibility, rationality and effectiveness of low-granularity features, such as Chinese characters, in Chinese short-text classification (CSTC).

Design/methodology/approach

This study takes the discipline classification of journal articles from CSSCI as a simulation environment. After sorting out the distribution rules of classification features of various granularities, including keywords, terms and characters, the classification effects obtained with the SVM algorithm are comprehensively compared and evaluated from three angles: using the same experimental samples, testing before and after feature optimization, and introducing external data.
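A minimal sketch of a character-granularity baseline of the kind compared here (an assumed setup with scikit-learn and toy data, not the authors' CSSCI pipeline; a word- or keyword-granularity counterpart would additionally need a Chinese word segmenter):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["图书馆学期刊论文", "计算机网络安全研究"]   # toy article titles
labels = ["LIS", "CS"]                                # toy discipline labels

# Low-granularity features: single Chinese characters, weighted by TF-IDF.
char_clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 1)),
    LinearSVC(),
)
char_clf.fit(texts, labels)
print(char_clf.predict(["网络安全综述"]))   # toy prediction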

Findings

The granularity of a classification feature has an important impact on CSTC. In general, the larger the granularity, the better the classification result, and vice versa. However, a low-granularity feature is also feasible, and its CDC can be improved by reasonable weight setting, even exceeding that of a high-granularity feature when classification precision, computational complexity and text coverage are considered together.

Originality/value

This is the first study to propose that Chinese characters are more suitable than terms and keywords as descriptive features in CSTC and to demonstrate that the CDC of Chinese character features can be strengthened by combining frequency and position as weights.

Article
Publication date: 22 June 2010

Imam Machdi, Toshiyuki Amagasa and Hiroyuki Kitagawa

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a…

Abstract

Purpose

The purpose of this paper is to propose general parallelism techniques for holistic twig join algorithms to process queries against Extensible Markup Language (XML) databases on a multi‐core system.

Design/methodology/approach

The parallelism techniques comprised data and task parallelism. For data parallelism, the paper adopted stream‐based partitioning for XML to partition XML data as the basis of parallelism on multiple CPU cores. The XML data partitioning was performed at two levels. The first level created buckets to provide data independence and balance loads among CPU cores; each bucket was assigned to a CPU core. Within each bucket, a second level of XML data partitioning created finer partitions to provide finer parallelism. Each CPU core performed the holistic twig join algorithm on each of its own finer partitions in parallel with the other CPU cores. In task parallelism, the holistic twig join algorithm was decomposed into two main tasks, which were pipelined to create parallelism. The first task adopted the data parallelism technique, and its outputs were transferred to the second task periodically. Since data transfers incurred overheads, the size of each data transfer needed to be estimated cautiously to achieve optimal performance.
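A highly simplified sketch of the two-level partitioning plus pipelining idea (assumed structure, using Python multiprocessing and a list of integers in place of partitioned XML node streams; twig_join and merge_outputs are placeholders, not the paper's algorithms):

from multiprocessing import Pool

def twig_join(partition):
    """Placeholder for the holistic twig join over one fine partition."""
    return [item for item in partition if item % 3 == 0]  # toy 'matches'

def merge_outputs(matches):
    """Placeholder for the second, pipelined task (e.g. merging path solutions)."""
    return sorted(matches)

if __name__ == "__main__":
    stream = list(range(1000))     # stands in for a partitioned XML node stream
    n_cores, fine = 4, 8

    # Level 1: one bucket per core, for data independence and load balance.
    buckets = [stream[i::n_cores] for i in range(n_cores)]
    # Level 2: finer partitions inside each bucket, for finer parallelism.
    partitions = [b[j::fine] for b in buckets for j in range(fine)]

    results = []
    with Pool(n_cores) as pool:
        # imap_unordered lets the second task start consuming outputs while the
        # first task is still joining other partitions, i.e. a simple pipeline.
        for matches in pool.imap_unordered(twig_join, partitions):
            results.extend(merge_outputs(matches))
    print(len(results))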

Findings

The data and task parallelism techniques contribute to good performance, especially for queries having complex structures and/or higher query selectivity. The performance of data parallelism can be further improved by task parallelism. Significant performance improvement is attained for queries with higher selectivity because more of the output computed by the second task is produced in parallel with the first task.

Research limitations/implications

The proposed parallelism techniques primarily deal with executing a single long‐running query for intra‐query parallelism, partitioning XML data on‐the‐fly and allocating partitions to CPU cores statically. It is assumed that no dynamic XML data updates occur during the parallel execution.

Practical implications

The effectiveness of the proposed parallel holistic twig joins relies fundamentally on some system parameter values that can be obtained from a benchmark of the system platform.

Originality/value

The paper proposes novel techniques that increase parallelism by combining data and task parallelism to achieve high performance. To the best of the authors' knowledge, this is the first paper to parallelize holistic twig join algorithms on a multi‐core system.

Details

International Journal of Web Information Systems, vol. 6 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Abstract

Details

Integrated Land-Use and Transportation Models
Type: Book
ISBN: 978-0-080-44669-1

Open Access
Article
Publication date: 30 March 2023

Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi

In the last few years, the amount of Linked Open Data (LOD) describing artworks, in general-purpose or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides…

Abstract

Purpose

In the last few years, the amount of Linked Open Data (LOD) describing artworks, in general-purpose or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on the icon aspects.

Design/methodology/approach

This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.

Findings

This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.

Originality/value

The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study’s results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need to create and foster such information to provide a more thorough art-historical dimension to LOD.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 4 July 2023

Maojian Chen, Xiong Luo, Hailun Shen, Ziyang Huang, Qiaojuan Peng and Yuqi Yuan

This study aims to introduce an innovative approach that uses a decoder with multiple layers to accurately identify Chinese nested entities across various nesting depths. To…

Abstract

Purpose

This study aims to introduce an innovative approach that uses a decoder with multiple layers to accurately identify Chinese nested entities across various nesting depths. To reduce the need for human intervention, an advanced optimization algorithm is used to fine-tune the decoder based on the depth of nested entities present in the data set. With this approach, this study achieves remarkable performance in recognizing Chinese nested entities.

Design/methodology/approach

This study provides a framework for Chinese nested named entity recognition (NER) based on sequence labeling methods. Similar to existing approaches, the framework uses an advanced pre-training model as the backbone to extract semantic features from the text. Then a decoder comprising multiple conditional random field (CRF) algorithms is used to learn the associations between granularity labels. To minimize the need for manual intervention, the Jaya algorithm is used to optimize the number of CRF layers. Experimental results validate the effectiveness of the proposed approach, demonstrating its superior performance on both Chinese nested NER and flat NER tasks.
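A hedged sketch of the Jaya update rule applied to a single integer hyperparameter, here a hypothetical number of CRF layers; the objective dev_f1 is invented for illustration and does not reflect the paper's model or results:

import random

def dev_f1(n_layers):
    """Hypothetical objective: pretend dev-set F1 peaks at 3 stacked CRF layers."""
    return 1.0 - 0.1 * abs(n_layers - 3)

def jaya_int(lower, upper, pop_size=5, iterations=20):
    pop = [random.randint(lower, upper) for _ in range(pop_size)]
    for _ in range(iterations):
        scores = [dev_f1(x) for x in pop]
        best = pop[scores.index(max(scores))]
        worst = pop[scores.index(min(scores))]
        new_pop = []
        for x in pop:
            r1, r2 = random.random(), random.random()
            # Jaya update: move toward the best candidate, away from the worst.
            cand = x + r1 * (best - abs(x)) - r2 * (worst - abs(x))
            cand = int(round(min(max(cand, lower), upper)))
            # Greedy acceptance: keep the new candidate only if it is not worse.
            new_pop.append(cand if dev_f1(cand) >= dev_f1(x) else x)
        pop = new_pop
    return max(pop, key=dev_f1)

print(jaya_int(1, 6))   # expected to settle on 3 under the toy objective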

Findings

The experimental findings illustrate that the proposed methodology achieves a 4.32% improvement in nested NER performance on the People’s Daily corpus compared with existing models.

Originality/value

This study explores a Chinese NER methodology based on the sequence labeling paradigm for recognizing sophisticated Chinese nested entities with remarkable accuracy.

Details

International Journal of Web Information Systems, vol. 19 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 6 February 2017

Zhongyi Wang, Jin Zhang and Jing Huang

Current segmentation systems almost invariably focus on linear segmentation and can only divide text into linear sequences of segments. This suits cohesive text such as news feeds but not coherent texts, such as digital library documents, which have hierarchical…

Abstract

Purpose

Current segmentation systems almost invariably focus on linear segmentation and can only divide text into linear sequences of segments. This suits cohesive text such as news feeds but not coherent texts, such as digital library documents, which have hierarchical structures. To overcome the focus on linear segmentation and to achieve hierarchical segmentation of a digital library’s structured resources, this paper proposes a new multi-granularity hierarchical topic-based segmentation system (MHTSS) to decide section breaks.

Design/methodology/approach

MHTSS adopts a top-down segmentation strategy to divide a structured digital library document into a document segmentation tree. Specifically, it works in a three-stage process: document parsing, coarse segmentation based on document access structures and fine-grained segmentation based on lexical cohesion.
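A minimal sketch of the coarse-then-fine strategy (assuming a markdown-style heading access structure and a crude word-overlap cohesion measure; this is not the MHTSS code):

import re

def lexical_cohesion(a, b):
    """Crude word-overlap cohesion between two text blocks."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, min(len(wa), len(wb)))

def segment(document, threshold=0.2):
    tree = []
    # Coarse level: split on headings (the assumed document access structure).
    for section in re.split(r"\n(?=#+ )", document):
        paragraphs = [p for p in section.split("\n\n") if p.strip()]
        fine, current = [], paragraphs[:1]
        for prev, cur in zip(paragraphs, paragraphs[1:]):
            if lexical_cohesion(prev, cur) < threshold:
                fine.append(current)   # cohesion drop => new fine-grained segment
                current = [cur]
            else:
                current.append(cur)
        if current:
            fine.append(current)
        tree.append(fine)              # one coarse node and its fine segments
    return tree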

Findings

This paper analyzed the limitations of document segmentation methods for structured digital library resources. The authors found that document access structures and lexical cohesion techniques complement each other and together allow for better segmentation of structured digital library resources. Based on this finding, the paper proposed MHTSS for structured digital library resources. To evaluate it, MHTSS was compared with the TT and C99 algorithms on real-world digital library corpora and was found to achieve the best overall performance.

Practical implications

With MHTSS, digital library users can get their relevant information directly in segments instead of receiving the whole document. This will improve retrieval performance as well as dramatically reduce information overload.

Originality/value

This paper proposed MHTSS for the structured, digital library resources, which combines the document access structures and lexical cohesion techniques to decide section breaks. With this system, end-users can access a document by sections through a document structure tree.

Article
Publication date: 18 January 2021

Jayati Athavale, Minami Yoda and Yogendra Joshi

This study aims to present the development of a genetic algorithm (GA)-based framework aimed at minimizing data center cooling energy consumption by optimizing the cooling set-points…


Abstract

Purpose

This study aims to present the development of a genetic algorithm (GA)-based framework aimed at minimizing data center cooling energy consumption by optimizing the cooling set-points while ensuring that thermal management criteria are satisfied.

Design/methodology/approach

The three key components of the developed framework are an artificial neural network-based model for rapid temperature prediction (Athavale et al., 2018a, 2019), a thermodynamic model for cooling energy estimation and a GA-based optimization process. The static optimization framework informs the IT load distribution and cooling set-points in the data center room to simultaneously minimize cooling power consumption and maximize IT load. The dynamic framework aims to minimize cooling power consumption during operation by determining the most energy-efficient set-points for the cooling infrastructure while preventing temperature overshoots.
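A hedged sketch of the GA component alone, with invented surrogate functions standing in for the paper's neural-network temperature predictor and thermodynamic cost model, and a simple penalty enforcing an assumed thermal limit:

import random

T_LIMIT = 32.0  # assumed rack-inlet temperature limit, deg C

def predict_max_temp(supply_temp, flow_rate):
    """Hypothetical surrogate: hotter supply air and lower flow raise inlet temps."""
    return 18.0 + 0.8 * supply_temp - 6.0 * flow_rate

def cooling_power(supply_temp, flow_rate):
    """Toy cost: colder supply air and higher flow both cost more energy."""
    return (30.0 - supply_temp) + 5.0 * flow_rate ** 3

def fitness(ind):
    temp = predict_max_temp(*ind)
    penalty = 1e3 * max(0.0, temp - T_LIMIT)   # enforce the thermal criterion
    return cooling_power(*ind) + penalty

def ga(pop_size=30, generations=50):
    pop = [(random.uniform(15, 27), random.uniform(0.5, 2.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                   # lowest cost first
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)         # crossover
            child = (child[0] + random.gauss(0, 0.3),               # mutation
                     max(0.5, child[1] + random.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(ga())   # (supply set-point, flow rate) minimizing cost without overshoot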

Findings

Results from the static optimization framework indicate that among the three levels of IT load distribution granularity (room, rack and row), rack-level distribution consumes the least cooling power. A 7.5-h test case implementing dynamic optimization demonstrated a reduction in cooling energy consumption of 21%-50%, depending on the current operation of the data center.

Research limitations/implications

Because the temperature prediction model is data-driven, it is specific to the lab configuration considered in this study and cannot be directly applied to other scenarios. However, the overall framework can be generalized.

Practical implications

The developed framework can be implemented in data centers to optimize operation of cooling infrastructure and reduce energy consumption.

Originality/value

This paper presents a holistic framework for improving the energy efficiency of data centers, which is of critical value given the high and increasing energy consumption of these facilities.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 31 no. 10
Type: Research Article
ISSN: 0961-5539

Keywords
