Search results

1 – 10 of over 10000
Article
Publication date: 14 March 2019

Lin Fu, Zhe Ji, Xiangyu Y. Hu and Nikolaus A. Adams


Abstract

Purpose

This paper aims to develop a parallel fast neighbor search method and communication strategy for particle-based methods with adaptive smoothing-length on distributed-memory computing systems.

Design/methodology/approach

With a multi-resolution-based hierarchical data structure, the parallel neighbor search method is developed to detect and construct ghost buffer particles, i.e. neighboring particles on remote processor nodes. To migrate ghost buffer particles among processor nodes, an undirected graph is established to characterize the sparse data communication relation and is dynamically recomposed. By introducing an edge coloring algorithm from graph theory, the complex sparse data exchange can be accomplished in an optimized number of substeps, each involving only efficient nonblocking point-to-point communication.
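The scheduling role of the edge coloring can be sketched as follows: treating processor nodes as vertices and required exchanges as edges, a greedy edge coloring groups exchanges into conflict-free substeps, since no node appears twice in one color class. The graph and helper below are illustrative only, not the paper's implementation.

```python
# Sketch: greedy edge coloring of a processor-communication graph.
# Each edge (i, j) is a required data exchange between processors i and j;
# edges sharing a color can run as concurrent point-to-point exchanges,
# so the number of colors bounds the number of communication substeps.

def edge_color(edges):
    """Assign each edge the smallest color not used by any incident
    edge colored so far (greedy, uses at most 2*maxdeg - 1 colors)."""
    coloring = {}
    for u, v in edges:
        used = {c for e, c in coloring.items() if u in e or v in e}
        color = 0
        while color in used:
            color += 1
        coloring[(u, v)] = color
    return coloring

# Four processors; each pair that must exchange ghost particles is an edge.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
rounds = edge_color(edges)
num_substeps = max(rounds.values()) + 1
```

Each color class then maps to one substep of nonblocking sends and receives, with no processor serving two partners at once.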

Findings

Two demonstration scenarios are considered: fluid dynamics based on smoothed-particle hydrodynamics with adaptive smoothing-length and a recently proposed physics-motivated partitioning method [Fu et al., JCP 341 (2017): 447-473]. Several new concepts are introduced to recast the partitioning method into a parallel version. A set of numerical experiments is conducted to demonstrate the performance and potential of the proposed parallel algorithms.

Originality/value

The proposed methods are simple to implement in large-scale parallel environments and can handle particle simulations with arbitrarily varying smoothing-lengths. The implemented smoothed-particle hydrodynamics solver shows good parallel performance, suggesting potential for other scientific applications.

Article
Publication date: 29 July 2014

Qiongxiong Ma and Tie Zhang


Abstract

Purpose

Background subtraction is a particularly popular foreground detection method whose background model can be updated using input images. However, foreground objects cannot be detected accurately if the background model is broken. In order to improve the performance of foreground detection in human-robot interaction (HRI), the purpose of this paper is to propose a new background subtraction method based on image parameters, which helps to improve the robustness of the existing background subtraction method.

Design/methodology/approach

The proposed method evaluates the image and the foreground results according to image parameters that represent the change features of the image. It ignores images that are similar to the first image or to the previous image in the sequence, filters out images that may break the background model and detects an abnormal background model. The method also helps to rebuild the background model when the model is broken.
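The gating idea can be illustrated with a minimal sketch, in which mean absolute frame difference stands in for the paper's image parameters; the thresholds and helper name are arbitrary assumptions, not the paper's formulation.

```python
# Sketch of gating background-model updates by an image-change parameter
# (here, mean absolute frame difference against the current reference).
import numpy as np

def should_update(frame, reference, low=2.0, high=60.0):
    """Skip frames nearly identical to the reference (no new information)
    and frames that differ too much (likely to break the model)."""
    diff = np.mean(np.abs(frame.astype(float) - reference.astype(float)))
    return low < diff < high

bg = np.full((4, 4), 100, dtype=np.uint8)      # toy background model
same = bg.copy()                               # nearly identical -> ignored
burst = np.full((4, 4), 250, dtype=np.uint8)   # drastic change -> filtered out
normal = bg + 10                               # moderate change -> used for update
```

Only frames falling between the two thresholds would feed the model update, which is the mechanism that protects the model from being broken by sudden scene changes.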

Findings

Experimental results of typical interaction scenes validate that the proposed method helps to reduce the broken probability of background model and improve the robustness of background subtraction.

Research limitations/implications

Different threshold values of the image parameters may affect the results in different environments. Future research should focus on the automatic selection of the parameters' threshold values according to the interaction scene.

Practical implications

A useful method for foreground detection in HRI.

Originality/value

This paper proposes a method which employs image parameters to improve the robustness of the background subtraction for foreground detection in HRI.

Article
Publication date: 14 August 2017

Padmavati Shrivastava, K.K. Bhoyar and A.S. Zadgaonkar


Abstract

Purpose

The purpose of this paper is to build a classification system that mimics the perceptual ability of human vision in gathering knowledge about the structure, content and surrounding environment of a real-world natural scene, accurately and at a quick glance. This paper proposes a set of novel features to determine the gist of a given scene based on dominant color, dominant direction, openness and roughness.

Design/methodology/approach

The classification system is designed at two levels. At the first level, a set of low-level features is extracted for each semantic feature. At the second level, the extracted features are subjected to a process of feature evaluation based on inter-class and intra-class distances. The most discriminating features are retained and used to train the support vector machine (SVM) classifier on two different data sets.
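The inter-class versus intra-class criterion can be sketched for one scalar feature. The exact score used in the paper may differ; the variance ratio below is an illustrative assumption for how such an evaluation ranks features.

```python
# Sketch: rank a scalar feature by inter-class variance of class means
# divided by mean within-class variance. A feature that separates class
# means widely while keeping each class compact scores high.
import numpy as np

def discrimination_score(values, labels):
    """Variance of per-class means over mean within-class variance."""
    classes = sorted(set(labels))
    means = [np.mean([v for v, l in zip(values, labels) if l == c])
             for c in classes]
    intra = np.mean([np.var([v for v, l in zip(values, labels) if l == c])
                     for c in classes])
    inter = np.var(means)
    return inter / (intra + 1e-12)

labels = [0, 0, 0, 1, 1, 1]
good_feature = [0.1, 0.2, 0.1, 0.9, 1.0, 0.9]   # well separated classes
weak_feature = [0.4, 0.6, 0.5, 0.5, 0.4, 0.6]   # overlapping classes
```

Features whose score falls below some cutoff would be discarded before SVM training.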

Findings

Accuracy of the proposed system has been evaluated on two data sets: the well-known Oliva-Torralba data set and a customized data set comprising high-resolution images of natural landscapes. Experimentation on these two data sets with the proposed novel feature set and SVM classifier has provided 92.68 percent average classification accuracy using a ten-fold cross-validation approach. The set of proposed features efficiently represents visual information and is therefore capable of narrowing the semantic gap between low-level image representation and high-level human perception.

Originality/value

The method presented in this paper represents a new approach for extracting low-level features of reduced dimensionality that is able to model human perception for the task of scene classification. The methods of mapping primitive features to high-level features are intuitive to the user and are capable of reducing the semantic gap. The proposed feature evaluation technique is general and can be applied across any domain.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 13 June 2008

Chih‐Fong Tsai and David C. Yen


Abstract

Purpose

Image classification, or more specifically, annotating images with keywords, is one of the important steps in image database indexing. However, current research in image retrieval concentrates on how conceptual categories can be well represented by extracted low-level features for effective classification. Consequently, image feature representation, including segmentation and low-level feature extraction schemes, must be genuinely effective to facilitate the classification process. The purpose of this paper is to examine the effect on annotation effectiveness of using different (local) feature representation methods to map into conceptual categories.

Design/methodology/approach

This paper compares tiling (five and nine tiles) and regioning (five and nine regions) segmentation schemes, and the extraction of combinations of color, texture and edge features, in terms of effectiveness on a particular benchmark automatic image annotation setup. Differences in effectiveness between concrete and abstract conceptual categories or keywords are further investigated, and progress towards establishing a particular benchmark approach is also reported.
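A five-tile scheme of the kind compared here might be sketched as follows; the exact layout (four quadrants plus a centre tile) is an assumption for illustration, and each tile would feed its own local feature extractor.

```python
# Sketch: split an image into five overlapping tiles (four quadrants
# plus a centre tile) and compute a per-tile summary feature.
import numpy as np

def five_tiles(img):
    """Return the four quadrants and a centre tile of an image array."""
    h, w = img.shape[:2]
    return [
        img[:h // 2, :w // 2],                       # top-left
        img[:h // 2, w // 2:],                       # top-right
        img[h // 2:, :w // 2],                       # bottom-left
        img[h // 2:, w // 2:],                       # bottom-right
        img[h // 4:3 * h // 4, w // 4:3 * w // 4],   # centre
    ]

img = np.arange(64).reshape(8, 8)          # toy 8x8 "image"
tiles = five_tiles(img)
mean_per_tile = [t.mean() for t in tiles]  # one local feature per tile
```

Color, texture and edge descriptors computed per tile would then be concatenated into the local feature representation the paper evaluates.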

Findings

In the context of local feature representation, the paper concludes that the combined color and texture features are the best to use for the five tiling and regioning schemes, and this evidence would form a good benchmark for future studies. Another interesting finding (but perhaps not surprising) is that when the number of concrete and abstract keywords increases or it is large (e.g. 100), abstract keywords are more difficult to assign correctly than the concrete ones.

Research limitations/implications

Future work could consider: conducting user-centered evaluation instead of evaluation only against a chosen ground-truth data set, such as Corel, since this might affect effectiveness results; using different numbers of categories for scalability analysis of image annotation, as well as larger numbers of training and testing examples; using Principal Component Analysis, Independent Component Analysis or machine learning techniques for low-level feature selection; using other segmentation schemes, especially more complex tiling schemes and other regioning schemes; and using different data sets, other low-level features and/or combinations of them, and other machine learning techniques.

Originality/value

This paper is a good start for analyzing the mapping between some feature representation methods and various conceptual categories for future image annotation research.

Details

Library Hi Tech, vol. 26 no. 2
Type: Research Article
ISSN: 0737-8831


Article
Publication date: 13 August 2018

Kai Victor Hansen, Christina Tølbøl Frøiland and Ingelin Testad


Abstract

Purpose

The Porcelain for All project was an initiative by Figgjo AS, a porcelain factory in Norway, which needed more research on differently coloured porcelain. The paper aims to discuss this issue.

Design/methodology/approach

The study aimed to gain new knowledge about how different décor and dinner-plate colours can positively influence the food intake and appetite of dementia sufferers. The intervention period lasted three weeks, and four days were randomly picked during that period. Each plate was photographed before and after the resident had eaten, and researchers conducted observations during mealtimes. Two CurroCus® group interviews were used to collect additional empirical data. In total, 12 dementia sufferers (five females) between 65 and 85 years of age were observed during dinnertime.

Findings

Plates with a white well, yellow lip and red rim seemed to be preferred regarding food intake. Three main categories were noted from the observations and group interviews: mealtime dignity, porcelain design and appetite.

Research limitations/implications

This study encompasses only a small sample (12 residents), all diagnosed with dementia. Future research could incorporate well-being in people with dementia regarding food weight, testing different meal-room environments, user involvement and food presentation, and should include more nursing homes and residents.

Social implications

Outcomes may help to prevent undernutrition among elderly people.

Originality/value

The combination of coloured porcelain, food intake and residents with dementia has scarcely been investigated.

Details

International Journal of Health Care Quality Assurance, vol. 31 no. 7
Type: Research Article
ISSN: 0952-6862


Article
Publication date: 10 September 2021

Xueping Wang and Xinqin Gao



Abstract

Purpose

Engineering education accreditation (EEA) is a principal quality assurance mechanism. However, at many education institutions, the most labor-intensive work of the EEA process is accomplished manually. Without the support of computer and information technology, the EEA process suffers from high labor intensity, low work efficiency and poor management. The purpose of this paper is to build a complex network model and realize an information management system of the talent training program to support the EEA process.

Design/methodology/approach

Based on the polychromatic graph (PG), this paper builds a network model of the talent training program for an engineering specialty. The related information and data are organized and processed in this network model. Working bidirectionally, both top-down and bottom-up, user requirements are retrieved automatically in the logic layer. Taking the specialty of mechanical engineering as an example, the proposed PG-based network modeling method is applied and the corresponding information management system is realized.

Findings

The study results show that the PG-based network modeling method takes full advantage of the strong simulation ability of PG to model complex network systems and has unique merits in the formal expression of the problem, efficient processing of information and lightweight realization of the system. Further, the information management system of the talent training program can reduce tedious human labor and dramatically improve the management level of the EEA process.

Originality/value

This paper proposes a PG-based network modeling method in which the nodes and edges can be painted with unified colors to describe the different kinds of activities and the various types of interactions. Theoretically, this modeling method does not distinguish the activities, the interactions and their properties by graphic symbol, and the problem size is roughly halved. Furthermore, this paper provides effective experience and ideas to education institutions for implementing engineering education accreditation, increasing education management efficiency and promoting talent training quality.

Article
Publication date: 8 October 2019

Akarsh Aggarwal, Anuj Rani and Manoj Kumar


Abstract

Purpose

The purpose of this paper is to explore the challenges faced by automatic recognition systems compared with conventional systems by implementing a novel approach for detecting and recognizing vehicle license plates, in order to increase the security of vehicles. This will also increase societal discipline among vehicle users.

Design/methodology/approach

From a methodological point of view, the proposed system works in three phases: pre-processing the input image from the database, applying segmentation to the processed image, and finally extracting and recognizing the license plate.
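The three-phase structure can be illustrated on a toy array; a real system would use an image-processing library, and the helper names and threshold below are hypothetical stand-ins for the paper's pipeline.

```python
# Sketch of the three phases: preprocess -> segment -> extract region
# of interest, on a toy RGB array standing in for a vehicle image.
import numpy as np

def to_gray(rgb):
    """Phase 1: luminance grayscale conversion."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def segment(gray, thresh=128):
    """Phase 2: binarize to separate bright plate pixels from background."""
    return gray > thresh

def roi(mask):
    """Phase 3: bounding box (top, bottom, left, right) of the segmented region."""
    r = np.where(np.any(mask, axis=1))[0]
    c = np.where(np.any(mask, axis=0))[0]
    return r[0], r[-1], c[0], c[-1]

img = np.zeros((6, 8, 3))
img[2:4, 1:6] = 255                   # a bright "plate" block
box = roi(segment(to_gray(img)))      # region handed to recognition
```

The cropped region would then go to character segmentation and recognition, the step whose detection rate the paper reports.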

Findings

The paper provides an analysis that demonstrates the correctness of the algorithm in capturing the license plate, using performance metrics such as detection rate and false positive rate. The results show that the proposed algorithm detects vehicle license plates with a detection rate of 93.34 percent and a false positive rate of 6.65 percent.

Research limitations/implications

The proposed license plate detection system eliminates the need for manually operated systems, such as toll booths installed on freeways and bridges, for managing traffic. The three-phase detection process implemented in this paper attempts to capture the license plate, which helps to increase the level of security and contributes to making a sustainable city.

Originality/value

This paper presents a distinctive approach for detecting vehicle license plates that uses various image processing techniques, such as dilation, grey-scale conversion and edge processing, and finds the region of interest of the segmented image to capture the license plate.

Details

Smart and Sustainable Built Environment, vol. 9 no. 4
Type: Research Article
ISSN: 2046-6099


Article
Publication date: 4 November 2020

Pachayappan Murugaiyan and Venkatesakumar Ramakrishnan



Abstract

Purpose

Little attention has been paid to restructuring existing massive amounts of literature data such that evidence-based, meaningful inferences and networks can be drawn therefrom. This paper aims to structure extant literature data into a network and to demonstrate, using the graph visualization and manipulation tool Gephi, how to obtain an evidence-based literature review.

Design/methodology/approach

The main objective of this paper is to propose a methodology for structuring existing literature data into a network. This network is examined through certain graph theory metrics to uncover evidence-based research insights arising from the existing huge amount of literature data. From the list of metrics, this study considers degree centrality, closeness centrality and betweenness centrality to comprehend the information available in the literature pool.
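The three centrality measures can be reproduced on a toy literature network with networkx (Gephi exposes the same statistics through its interface); the graph below is illustrative, not data from the paper.

```python
# Sketch: the three centrality metrics the methodology relies on,
# computed on a small toy network of related documents.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("review A", "paper 1"), ("review A", "paper 2"),
    ("review A", "paper 3"), ("paper 3", "paper 4"),
])

deg = nx.degree_centrality(G)        # share of direct links per node
clo = nx.closeness_centrality(G)     # inverse average distance to all nodes
bet = nx.betweenness_centrality(G)   # share of shortest paths through a node

hub = max(deg, key=deg.get)          # most connected node in the pool
```

High-degree nodes flag heavily connected topics, while high-betweenness nodes flag bridging literature, which is the kind of evidence-based insight the review questions target.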

Findings

There is a significant amount of literature on any given research problem. Approaching this massive volume of literature data to find an appropriate research problem is a complicated process. The proposed methodology and metrics enable the extraction of appropriate and relevant information from huge quantities of literature data. The methodology is validated by three different scenarios of review questions, and results are reported.

Research limitations/implications

The proposed methodology requires a considerable number of manual hours to structure the literature data.

Practical implications

This paper enables researchers in any domain to systematically extract and visualize meaningful and evidence-based insights from existing literature.

Originality/value

The procedure for converting literature data into a network representation is not documented in the existing literature. The paper lays down the procedure to structure literature data into a network.

Details

Journal of Modelling in Management, vol. 17 no. 1
Type: Research Article
ISSN: 1746-5664


Article
Publication date: 1 November 2000

Alistair Ross



Abstract

The study surveys and delineates the processes involved in screen-based information design, specifically in relation to the creation of electronic forms, and from this offers a guide to their production. The study also examines the design and technological issues associated with the transfer, or translation, of the printed form to the computer screen: how an e-form might be made more visually engaging without detracting from the information relevant to the form's navigation and completion, and the interaction between technology and (document) structure, where technology can eliminate or reduce traditional structural problems through the application of non-linear strategies. It reviews potential solutions for incorporating improved functionality through interactivity.

Details

Aslib Proceedings, vol. 52 no. 9
Type: Research Article
ISSN: 0001-253X


Article
Publication date: 6 February 2017

Oliver Krammer, Bertalan Varga and Karel Dušek


Abstract

Purpose

This paper aims to present a new method to calculate the appropriate volume of solder paste necessary for the pin-in-paste (PIP) technology. With the aid of this volume calculation, correction factors have been determined which can be used to correct the solder fillet volume obtained by an explicit expression.

Design/methodology/approach

The method is based on calculating the optimal solder fillet shape and profile for through-hole (TH) components with given geometrical sizes. To calculate this optimal fillet shape, a script was written in Surface Evolver. The volume calculations were performed for different fillet radii (0.4-1.2 mm) and for different component lead geometries (circular and square cross-sections). Finally, the volume obtained by the Evolver calculations was divided by the volume obtained by an explicit expression, and correction factors were determined for the varying parameters.

Findings

The results showed that the explicit expression underestimates the fillet volume necessary for the PIP technology significantly (15-35 per cent). The correction factors for components with circular leads ranged between 1.4 and 1.59, whereas the correction factors for square leads ranged between 1.1 and 1.27. Applying this correction can aid in depositing the appropriate solder paste volume for TH components.
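Applying such a correction factor is straightforward arithmetic: the corrected paste volume is the explicit-formula fillet volume scaled by the factor for the lead geometry. The values below are mid-range illustrations taken from the quoted intervals, not figures from the paper.

```python
# Sketch: scale an explicit-expression fillet volume by the tabulated
# correction factor for the lead geometry.

def corrected_volume(v_explicit_mm3, factor):
    """Corrected solder paste volume = explicit volume * correction factor."""
    return v_explicit_mm3 * factor

v_explicit = 2.0                                  # mm^3, explicit expression
v_circular = corrected_volume(v_explicit, 1.5)    # circular lead, mid-range factor
v_square = corrected_volume(v_explicit, 1.2)      # square lead, mid-range factor
```

The deposited paste volume would then be set from the corrected value rather than the raw explicit-expression estimate.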

Originality/value

Determining the correct volume of solder paste necessary for the PIP technology is crucial to eliminate the common soldering failure of TH components (e.g. voiding or non-wetted solder pads). The explicit expression, which is widely used for volume calculation in this field, underestimates the necessary volume significantly. The new method can correct this estimation, and can aid the industry to approach zero-defect manufacturing in the PIP technology.

Details

Soldering & Surface Mount Technology, vol. 29 no. 1
Type: Research Article
ISSN: 0954-0911

