Search results

1 – 10 of 553
Article
Publication date: 4 July 2022

Junyao Wang, Xingyu Chen, Huan Liu, Gongchen Sun, Yunpeng Li, Bowen Cui, Tianhong Lang, Rui Wang, Yiying Zhang and Maocheng Mao Sun

Abstract

Purpose

The purpose of this study is to provide an automatic micro-nano chip alignment system for aligning the micron- and nanometer-scale channels of microfluidic chips.

Design/methodology/approach

In this paper, combined with the reconstructed micro–nanoscale Hough transform theory, a “clamp–adsorb–rotate” chip alignment method is proposed. The designed alignment system comprises a microscopic identification device, a clamping device and a suction device. After assembly, the straightness of the linear slide rail was tested in the horizontal and vertical directions. The results show that the linearity errors of the linear slide are +0.29 and 0.30 µm in the horizontal and vertical directions, respectively, which meets the chip alignment accuracy requirement of 15 µm. In the direction of rotation, the angular error between the microchannel and the nanochannel is ±0.5°. In addition, an alignment flow experiment for the chip is designed. The results demonstrate that the closer the angle between the microchannel and the nanochannel is to 90°, the more completely the fluid fills the entire channel. Compared with the conventional method, the proposed method and assembly system realize fully automatic double-layer chip alignment.
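The rho–theta Hough accumulator underlying the microscopic identification step can be sketched in a few lines of numpy. This is a minimal illustrative version, not the authors' implementation; the function name and image sizes are assumptions.

```python
import numpy as np

def hough_dominant_angle(edges, n_theta=180):
    """Dominant line orientation (degrees) in a binary edge image via a
    minimal rho-theta Hough accumulator: each edge pixel votes for every
    (rho, theta) pair consistent with a line passing through it."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for j, t in enumerate(thetas):
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[:, j], rho, 1)
    # flat argmax; theta is the last axis, so take the remainder
    return float(np.rad2deg(thetas[acc.argmax() % n_theta]))

# A horizontal channel edge has its line normal at theta = 90 degrees;
# the angular offset between two channels is the difference of their
# dominant angles.
edges = np.zeros((50, 50), dtype=int)
edges[25, :] = 1
angle = hough_dominant_angle(edges)
```

Comparing the dominant angles of the microchannel and nanochannel edges would give the rotational misalignment the system corrects.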

Findings

A mechanical device designed using Hough transform theory can realize microfluidic chip alignment at the nanometer and micron levels.

Originality/value

The automatic alignment device adopts the Hough transform principle and can be used for microfluidic chip alignment.

Details

Sensor Review, vol. 42 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 3 May 2010

Kemal Kaplan, Caner Kurtul and H. Levent Akin

Abstract

Purpose

Lane tracking is one of the most important processes for autonomous vehicles because the navigable region usually stands between the lanes, especially in urban environments. A robust lane tracking method is also required to reduce the effect of noise and the processing time. The purpose of this paper is to present a new lane tracking method.

Design/methodology/approach

A new lane tracking method is presented which uses a partitioning technique to obtain a multiresolution Hough transform of the acquired vision data; the Hough transform is one of the most popular algorithms for lane detection. After the detection process, a hidden Markov model (HMM) based method is proposed for tracking the detected lanes.

Findings

The results of the proposed approach show that the partitioned Hough transformation reduces the effect of noise and provides robust lane tracking. In addition, the acquired lanes are successfully tracked by using the designed HMM.

Originality/value

This paper provides a fast lane tracking system which can be integrated with an autonomous vehicle or a driver assistance system.

Details

Industrial Robot: An International Journal, vol. 37 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 August 2012

Zhi‐jie Dong, Feng Ye, Di Li and Jie‐xian Huang

Abstract

Purpose

The purpose of this paper is to study the application of feature‐based image matching algorithm for PCB matching without using special fiducial marks.

Design/methodology/approach

Speeded-up robust features (SURF) is applied to extract the interest points in PCB images. An advanced threshold is set to reject interest points with low responses, speeding up feature computation. To improve performance under rotation, the descriptors are based on multiple orientations. The many-to-many tentative correspondences are determined with a maximum distance. The Hough transform is used to reject the mismatches, and the affine parameters are computed with a least-squares solution.
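The Hough-based mismatch rejection can be illustrated in isolation: each tentative correspondence implies a rotation between the two images, matches vote their implied rotation into coarse bins, and only matches in the consensus bin survive. This is a hedged one-dimensional sketch of that voting idea, not the paper's full pipeline; SURF extraction and the affine estimation are omitted, and the function name is an assumption.

```python
import numpy as np

def hough_filter_matches(angles_a, angles_b, bin_deg=10):
    """Reject tentative matches whose implied rotation disagrees with the
    consensus. angles_a/angles_b are keypoint orientations (degrees) of the
    matched features in each image; each match votes its orientation
    difference into a coarse bin (a 1-D Hough vote), and only matches in
    the winning bin are kept."""
    diff = (np.asarray(angles_b) - np.asarray(angles_a)) % 360
    bins = (diff // bin_deg).astype(int)
    counts = np.bincount(bins, minlength=int(360 // bin_deg))
    return bins == counts.argmax()

# Four matches agree on a ~30-degree rotation; the outlier is rejected.
keep = hough_filter_matches([0, 10, 20, 30, 40], [30, 40, 50, 60, 200])
```

The surviving matches would then feed the least-squares affine fit.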

Findings

Results show that the method proposed in this paper can effectively match PCB images without using special fiducial marks. The image matching algorithm performs better under image rotation than standard SURF, and it succeeds in matching images containing repetitive patterns, which deteriorate the distinctiveness of feature descriptors.

Research limitations/implications

Additional orientations produce more descriptors, so extra time is needed for feature description and matching.

Originality/value

The paper proposes a SURF‐based image matching algorithm to match the PCB images without special fiducial marks. This can reduce the complexity of PCB production. The image matching algorithm is robust to image rotation and repetitive patterns and can be used in other applications of image matching.

Article
Publication date: 1 March 1991

Roy Davies

Abstract

The design of vision algorithms for industrial applications is often considered to be an art form. In this article Roy Davies demonstrates that it can be a science.

Details

Sensor Review, vol. 11 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 12 July 2013

Chandana P. Dinesh, Abdul U. Bari, Ranjith P.G. Dissanayake and Masayuki Tamura

Abstract

Purpose

The purpose of this paper is to present a method, and the results, of evaluating damaged-building extraction using an object recognition task on pre- and post-tsunami imagery. Advances in remote sensing and its applications have made it possible to extract damaged-building imagery and assess the vulnerability of wide urban areas affected by natural disasters.

Design/methodology/approach

The proposed approach involves several advanced morphological operators, among which are adaptive transforms with varying size, shape and grey level of the structuring elements. IKONOS-2 satellite images of the Kalmunai area on the east coast of Sri Lanka, from before and after the 2004 Indian Ocean tsunami, were used. Morphological operations using structuring elements are applied to the segmented images, and the remaining building footprints are then extracted using a random forest classification method. The work is further extended to road-line extraction using the Hough transform.
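The morphological stage can be sketched with the basic operators it builds on. The paper's operators are adaptive (varying size, shape and grey level); the sketch below uses a fixed square structuring element purely for illustration, with assumed function names, and shows how opening removes small noise specks while preserving a building-sized footprint.

```python
import numpy as np

def binary_erode(img, k=3):
    """Erosion with a k x k square structuring element: a pixel survives
    only if its whole k x k neighbourhood is foreground."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def binary_dilate(img, k=3):
    """Dilation: a pixel becomes foreground if any neighbour is foreground."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def binary_open(img, k=3):
    """Opening = erosion then dilation; removes specks smaller than the
    structuring element while restoring larger regions."""
    return binary_dilate(binary_erode(img, k), k)
```

A random forest classifier would then label the cleaned footprints, and the Hough transform would pick out road lines.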

Findings

The result was investigated using geographic information system (GIS) data and a global positioning system (GPS) ground survey in the field, and it appeared to have high accuracy: the confidence measure produced for completely destroyed structures was 86 percent using the object-based method after the tsunami in one segment of the Maruthamune GN Division.

Research limitations/implications

This study has also identified significant limitations, due to the resolution and clarity of the satellite images and the vegetation canopy over building footprints.

Originality/value

The authors develop an automated method to detect damaged buildings and compare the results with GIS‐based ground survey.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 4 no. 2
Type: Research Article
ISSN: 1759-5908

Article
Publication date: 18 January 2016

Huajun Liu, Cailing Wang and Jingyu Yang

Abstract

Purpose

This paper aims to present a novel scheme of multiple vanishing points (VPs) estimation and corresponding lanes identification.

Design/methodology/approach

The scheme proposed here includes two main stages: VP estimation and lane identification. VP estimation, based on a vanishing-direction hypothesis and Bayesian posterior probability estimation in the image Hough space, is the foremost contribution; the VPs are then estimated through an optimal objective function. In the lane identification stage, the selected linear samples, supervised by the estimated VPs, are clustered based on the gradient direction of linear features to separate lanes, and finally all the lanes are identified through an identification function.
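Once candidate lane lines have been grouped (e.g. by Hough-space voting), a VP can be estimated as the point closest to all of them. This is a minimal least-squares sketch of that final step only; the paper's Bayesian posterior probability estimation is omitted, and the function name is an assumption.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines in normal form (rho, theta).
    Each line satisfies x*cos(t) + y*sin(t) = rho, so stacking the
    constraints into A @ [x, y] = rho and solving in the least-squares
    sense gives the point minimizing the summed squared distances to
    all lines - a common vanishing-point estimate."""
    rhos = np.array([r for r, _ in lines], dtype=float)
    A = np.array([[np.cos(t), np.sin(t)] for _, t in lines])
    p, *_ = np.linalg.lstsq(A, rhos, rcond=None)
    return p
```

With noisy real lane lines the residual of the fit also indicates whether the grouped lines truly share a VP.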

Findings

The scheme and algorithms are tested on real data sets collected from an intelligent vehicle. The approach is more efficient and more accurate than recent similar methods for structured roads; in particular, multiple VPs of branch roads can be identified and estimated, and lanes of branch roads can be identified in complex scenarios based on the Bayesian posterior probability verification framework. Experimental results demonstrate that the estimated VPs and lanes are practical for challenging structured and semi-structured complex road scenarios.

Originality/value

A Bayesian posterior probability verification framework is proposed to estimate multiple VPs and corresponding lanes for road scene understanding of structured or semi-structured road monocular images on intelligent vehicles.

Details

Industrial Robot: An International Journal, vol. 43 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 28 November 2018

Ning Zhang, Ruru Pan, Lei Wang, Shanshan Wang, Jun Xiang and Weidong Gao

Abstract

Purpose

The purpose of this paper is to propose a novel method using support vector machine (SVM) classifiers for objective seam pucker evaluation. Features are extracted using wavelet analysis and gray-level co-occurrence matrix (GLCM), and the samples are evaluated using SVM classifiers. The study aims to solve the problem of inappropriate parameters and large required samples in objective seam pucker evaluation.

Design/methodology/approach

Initially, the seam pucker image was captured, and edge detection and the Hough transform were utilized to normalize the seam position and orientation. After cropping the image, the intensity was adjusted to the same level through histogram specification. Then, the standard deviations of the horizontal and diagonal detail images, reconstructed using wavelet decomposition and reconstruction, were calculated based on parameter optimization. Meanwhile, a GLCM was extracted from the reconstructed horizontal detail image, and the contrast and correlation of the GLCM were calculated. Finally, these four features were fed to SVM classifiers based on a genetic algorithm for evaluation.
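The GLCM contrast and correlation features can be sketched directly. In the paper they are computed on the wavelet-reconstructed horizontal detail image; the sketch below computes them on a raw grey-level array for illustration, with assumed function and parameter names, and the wavelet and SVM stages omitted.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for pixel offset (dx, dy), then the
    contrast and correlation features computed from it. img must hold
    integer grey levels in [0, levels)."""
    g = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    for i, j in zip(a.ravel(), b.ravel()):
        g[i, j] += 1                      # count co-occurring level pairs
    g /= g.sum()                          # normalize to a joint distribution
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    contrast = ((ii - jj) ** 2 * g).sum()
    mu_i, mu_j = (ii * g).sum(), (jj * g).sum()
    si = np.sqrt(((ii - mu_i) ** 2 * g).sum())
    sj = np.sqrt(((jj - mu_j) ** 2 * g).sum())
    correlation = (((ii - mu_i) * (jj - mu_j) * g).sum()) / (si * sj)
    return contrast, correlation
```

Smooth seams give low contrast and high correlation; puckered seams shift both features, which is what the SVM classifiers separate into grades.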

Findings

The four extracted features reflected linear relationships among the five grades. The experimental results showed a classification accuracy of 96 percent, which approaches the performance of human vision and resolves the ambiguity and subjectivity of manual evaluation.

Originality/value

Current research requires large sample sets. This paper provides a novel method using finite samples, and the parameters of the method are discussed for optimization. The evaluation results can provide references for analyzing the causes of wrinkles during garment manufacturing.

Details

International Journal of Clothing Science and Technology, vol. 31 no. 1
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin

Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches to dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; thus, obtaining effective image feature expression is the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not been systematic. This paper aims to introduce the basic low-level image feature expression techniques for color, texture and shape that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principle of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used image low-level expression algorithms are implemented, and the benefits and drawbacks are summarized. Third, the effectiveness of the global and local features in image retrieval, including some classical models and their illustrations provided by part of our experiment, are analyzed. Fourth, the sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
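Of the low-level descriptors the review covers, the global color histogram is the simplest and makes a compact illustration: quantize each channel into a few bins, concatenate, normalize, and compare histograms for retrieval. This is a generic sketch, not one of the review's implemented algorithms; function names and bin counts are assumptions.

```python
import numpy as np

def color_histogram(img, bins=4):
    """Global color feature: per-channel histogram over an 8-bit image,
    concatenated across channels and L1-normalized."""
    feats = []
    for c in range(img.shape[2]):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h)
    f = np.concatenate(feats).astype(float)
    return f / f.sum()

def hist_intersection(f1, f2):
    """Histogram-intersection similarity in [0, 1]; 1 means identical."""
    return np.minimum(f1, f2).sum()
```

A CBIR system would rank database images by this similarity against the query's feature vector; the review's point is that such single global features often lack discriminative power on their own.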

Findings

The core of this survey is to review the state of low-level image expression methods and study the pros and cons of each method, their applicable occasions and certain implementation measures. This review notes that single-feature descriptions may lead to unsatisfactory image retrieval capabilities, with considerable limitations and challenges remaining in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for image low-level feature expression but also supplies a pertinent reference for those engaging in research regarding image feature extraction.

Article
Publication date: 19 June 2017

Bo Sun, Yadan Zeng, Houde Dai, Junhao Xiao and Jianwei Zhang

Abstract

Purpose

This paper aims to present the spherical entropy image (SEI), a novel global descriptor for the scan registration of three-dimensional (3D) point clouds. This paper also introduces a global feature-less scan registration strategy based on SEI. It is advantageous for 3D data processing in scenarios such as mobile robotics and reverse engineering.

Design/methodology/approach

The descriptor works by representing the scan with a spherical function named SEI, whose properties allow the six-dimensional transformation to be decomposed into a 3D rotation and a 3D translation. The 3D rotation is estimated by the generalized convolution theorem based on the spherical Fourier transform of the SEI. The translation is then recovered by phase-only matched filtering.
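Phase-only matched filtering is easiest to see in the planar case: normalize the cross-power spectrum of two shifted signals to unit magnitude so that only phase (i.e. shift) information remains, then the inverse FFT peaks at the shift. The paper applies the idea after the spherical-Fourier rotation estimate; the sketch below is the standard 2-D analogue with an assumed function name, not the authors' spherical implementation.

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the cyclic shift (dy, dx) such that b = roll(a, (dy, dx)).
    The cross-power spectrum is normalized to unit magnitude (phase-only
    matched filtering), so its inverse FFT is a sharp peak at the shift."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    return int(dy), int(dx)
```

Because the magnitude is discarded, the peak stays sharp even when the two scans differ in overall intensity, which is part of the robustness the paper reports.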

Findings

The method requires no explicit features or planar segments in the input data. The experimental results illustrate the parameter independence, high reliability and efficiency of the novel algorithm in registering feature-less scans.

Originality/value

A novel global descriptor (SEI) for the scan registration of 3D point clouds is presented. It inherits both descriptive power of signature-based methods and robustness of histogram-based methods. A high reliability and efficiency registration method of scans based on SEI is also demonstrated.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 January 2013

Chen Guodong, Zeyang Xia, Rongchuan Sun, Zhenhua Wang and Lining Sun

Abstract

Purpose

Detecting objects in images and videos is a difficult task that has challenged the field of computer vision. Most object detection algorithms are sensitive to background clutter and occlusion, and cannot localize the edges of the object. An object's shape is typically the most discriminative cue for its recognition by humans. The purpose of this paper is to introduce a model-based object detection method which uses only shape-fragment features.

Design/methodology/approach

The object shape model is learned from a small set of training images, and all object models are composed of shape fragments. The object model is built at multiple scales.

Findings

The major contributions of this paper are the application of a learned shape-fragment-based model for object detection in complex environments and a novel two-stage object detection framework.

Originality/value

The results presented in this paper are competitive with other state‐of‐the‐art object detection methods.
