Search results

1 – 10 of 146
Article
Publication date: 25 February 2014

Yin-Tien Wang, Chen-Tung Chi and Ying-Chieh Feng

Building a persistent map with visual landmarks is one of the most important steps in implementing visual simultaneous localization and mapping (SLAM). The corner detector is…


Abstract

Purpose

Building a persistent map with visual landmarks is one of the most important steps in implementing visual simultaneous localization and mapping (SLAM). The corner detector is a common method for detecting visual landmarks when constructing a map of the environment. However, because corner detection is scale-variant, extensive computation is needed to recover the scale and orientation of corner features in SLAM tasks. The purpose of this paper is to build the map using a local invariant feature detector, namely speeded-up robust features (SURF), to detect scale- and orientation-invariant features and to provide a robust representation of visual landmarks for SLAM.

Design/methodology/approach

SURF are scale- and orientation-invariant features with higher repeatability than that obtained by other detection methods. Furthermore, the SURF algorithm is faster than other scale-invariant detection methods. The procedures of detection, description and matching in the regular SURF algorithm are modified in this paper to provide a robust representation of visual landmarks in SLAM. A sparse representation is also used to describe the environmental map and to reduce the computational complexity of state estimation with the extended Kalman filter (EKF). Furthermore, effective procedures for data association and map management of SURF features in SLAM are designed to improve the accuracy of robot state estimation.
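
As a rough illustration of the detection, description and matching steps (a generic SURF pipeline, not the authors' modified one), the short Python/OpenCV sketch below detects SURF keypoints in a stereo pair and matches their descriptors with a distance-ratio test; the image names and thresholds are placeholders, and opencv-contrib-python is assumed because SURF lives in the xfeatures2d module.

```python
# Minimal sketch of SURF detection and ratio-test matching between two views
# (assumes opencv-contrib-python; image names and thresholds are placeholders).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# extended=False keeps the 64-dimensional descriptor, in line with the finding
# that a lower-dimensional SURF descriptor is sufficient for landmarks.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)
kp_l, des_l = surf.detectAndCompute(left, None)
kp_r, des_r = surf.detectAndCompute(right, None)

# Nearest/second-nearest distance ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2) if m.distance < 0.7 * n.distance]
print(f"{len(good)} candidate landmark correspondences")
```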

Findings

Experimental work was carried out on an actual system with binocular vision sensors to prove the feasibility and effectiveness of the proposed algorithms. EKF SLAM with the modified SURF algorithms was applied in experiments that included the evaluation of state estimation accuracy as well as the implementation of large-area SLAM. The performance of the modified SURF algorithms was compared with that of the regular SURF algorithms. The results show that SURF with lower-dimensional descriptors is the most suitable representation of visual landmarks. Meanwhile, the integrated system was successfully validated as fulfilling the capabilities of a visual SLAM system.

Originality/value

The contribution of this paper is a novel approach to overcoming the problem of recovering the scale and orientation of visual landmarks in SLAM tasks. The research also extends the usability of local invariant feature detectors in SLAM tasks by exploiting their robust representation of visual landmarks. Furthermore, the data association and map management procedures designed for SURF-based mapping in this paper give another perspective on improving the robustness of SLAM systems.

Details

Engineering Computations, vol. 31 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 15 May 2020

Farid Esmaeili, Hamid Ebadi, Mohammad Saadatseresht and Farzin Kalantary

Displacement measurement in large-scale structures (such as excavation walls) is one of the most important applications of close-range photogrammetry, in which achieving high…

Abstract

Purpose

Displacement measurement in large-scale structures (such as excavation walls) is one of the most important applications of close-range photogrammetry, in which achieving high precision requires extracting and accurately matching local features from convergent images. The purpose of this study is to introduce a new multi-image pointing (MIP) algorithm based on the characteristics of the geometric model generated from the initial matching. This self-adaptive algorithm is used to correct and improve the accuracy of the positions extracted for local features in the convergent images.

Design/methodology/approach

In this paper, the new MIP algorithm, based on the geometric characteristics of the model generated from the initial matching, is introduced; it corrects the extracted image coordinates in a self-adaptive way. The unique characteristics of the proposed algorithm are that the position correction is accomplished through continuous interaction between the 3D model coordinates and the image coordinates, and that it has minimal dependency on the geometric and radiometric nature of the images. After the initial feature extraction and application of the MIP algorithm, the image coordinates are ready for use in the displacement measurement process. The combined photogrammetry displacement adjustment (CPDA) algorithm is used for displacement measurement between two epochs. Micro-geodesy, target-based photogrammetry and the proposed MIP methods were used in a displacement measurement project for an excavation wall in the Velenjak area of Tehran, Iran, to evaluate the proposed algorithm's performance. According to the results, a point geo-coordinate measurement accuracy of 8 mm and a displacement accuracy of 13 mm could be achieved using the MIP algorithm. In addition to the micro-geodesy method, the accuracy of the results was corroborated by the cracks that appeared behind the project's wall. Given the maximum allowable displacement limit of 4 cm in this project, the MIP algorithm provided the accuracy required to determine the critical displacement.
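
The abstract gives no equations for the MIP algorithm itself, so the NumPy sketch below only illustrates the generic interaction it alludes to between 3D model coordinates and image coordinates: triangulate a feature from several convergent images, reproject it, and pull each measured image coordinate toward the reprojection. The projection matrices, damping factor and iteration count are hypothetical, and this is not the authors' algorithm.

```python
# Generic triangulate/reproject refinement loop (illustrative only; Ps are
# hypothetical 3x4 camera projection matrices, pts is one feature's measured
# pixel position in each convergent image).
import numpy as np

def triangulate(Ps, pts):
    """Linear (DLT) triangulation of one 3D point from n views."""
    A = []
    for P, (u, v) in zip(Ps, pts):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X / X[3]                      # homogeneous 3D point

def refine_image_coords(Ps, pts, damping=0.5, iterations=5):
    """Pull each measurement toward the reprojection of the common 3D point."""
    pts = [np.asarray(p, dtype=float) for p in pts]
    for _ in range(iterations):
        X = triangulate(Ps, pts)
        for i, P in enumerate(Ps):
            proj = P @ X
            proj = proj[:2] / proj[2]
            pts[i] += damping * (proj - pts[i])   # move measurement toward consensus
    return pts
```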

Findings

Evaluation of the results demonstrated that an accuracy of 8 mm in determining the position of points on the feature and an accuracy of 13 mm in the displacement measurement of the excavation walls could be achieved through precise positioning of local features on the images using the MIP algorithm. The proposed algorithm can be used in all applications that require high accuracy in determining the 3D coordinates of local features in close-range photogrammetry.

Originality/value

Some advantages of the proposed MIP photogrammetry algorithm, including the ease of obtaining observations and the use of local features on the structure in the images rather than installed artificial targets, make it possible to effectively replace micro-geodesy and instrumentation methods. In addition, the proposed MIP method is superior to the target-based photogrammetric method because it does not require artificial target installation and protection. Moreover, in any photogrammetric application that needs exact point coordinates on the feature, the proposed algorithm can be very effective in achieving the accuracy required by the desired objectives.

Article
Publication date: 19 June 2017

Qian Sun, Ming Diao, Yibing Li and Ya Zhang

The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems.

Abstract

Purpose

The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems.

Design/methodology/approach

The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. First, features are detected using the FAST extractor. Second, the detected features are roughly matched using the distance ratio of the nearest neighbor to the second-nearest neighbor. Finally, wrongly matched feature pairs are removed using the RANSAC method to reduce the interference of erroneous matches.
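
A minimal Python/OpenCV sketch of this pipeline is given below. Because FAST is a detector only, the sketch pairs it with ORB's binary descriptors for matching, which is an assumption (the abstract does not name the descriptor), and removes outlier pairs with RANSAC on the fundamental matrix; image names and thresholds are likewise placeholders.

```python
# FAST detection, ratio-test matching, RANSAC outlier rejection (sketch;
# descriptor choice, thresholds and image names are assumptions).
import cv2
import numpy as np

img1 = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create(threshold=25)
orb = cv2.ORB_create()                      # used here only to describe FAST keypoints
kp1 = fast.detect(img1, None)
kp2 = fast.detect(img2, None)
kp1, des1 = orb.compute(img1, kp1)
kp2, des2 = orb.compute(img2, kp2)

# Rough matching: nearest/second-nearest distance ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# RANSAC on the fundamental matrix removes wrongly matched pairs.
p1 = np.float32([kp1[m.queryIdx].pt for m in good])
p2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, inlier_mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
inliers = [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
print(f"{len(inliers)} inlier matches after RANSAC")
```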

Findings

The performance of the new algorithm has been examined using actual experimental data. The results show that utilizing this binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces the positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm were also verified.

Practical implications

This paper presents an improved binocular visual odometry algorithm that has been tested on real data. The algorithm can be used for outdoor vehicle navigation.

Originality/value

A binocular visual odometry algorithm based on the FAST extractor and RANSAC is proposed to improve positioning accuracy and robustness. Experimental results have verified the effectiveness of the proposed visual odometry algorithm.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 June 2017

Chen-Chien Hsu, Cheng-Kai Yang, Yi-Hsing Chien, Yin-Tien Wang, Wei-Yen Wang and Chiang-Heng Chien

FastSLAM is a popular method to solve the problem of simultaneous localization and mapping (SLAM). However, when the number of landmarks present in real environments increases…

Abstract

Purpose

FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments increases, each particle makes excessive comparisons between the measurement and all existing landmarks. As a result, execution becomes too slow for real-time navigation. Thus, this paper aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.

Design/methodology/approach

To address this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm in which odometer information is used to update the robot's pose in each particle. When a measurement has maximum likelihood with a known landmark in the particle, the particle state is updated before the landmark estimates are updated.
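
The control-flow change described above can be illustrated with a small self-contained sketch (a toy 2-D, position-only model with hypothetical noise settings and data structures, not the authors' implementation): each particle's pose is predicted from odometry, and when the best data-association likelihood is high enough the pose is corrected before the matched landmark's EKF update.

```python
# Toy sketch of the CESLAM-style update order (illustrative only). Each particle
# is a dict like {"pose": np.zeros(2), "weight": 1.0, "landmarks": []}, where a
# landmark is a (mean, cov) pair and a measurement z is the landmark position
# relative to the robot. All noise values are arbitrary assumptions.
import numpy as np

R = np.eye(2) * 0.05                       # measurement noise covariance

def likelihood(z, pose, lm_mean, lm_cov):
    innov = z - (lm_mean - pose)
    S = lm_cov + R
    return np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) / (2 * np.pi * np.sqrt(np.linalg.det(S)))

def ceslam_update(particles, odometry, z, new_lm_cov=np.eye(2), threshold=1e-3):
    for p in particles:
        p["pose"] = p["pose"] + odometry + np.random.normal(0, 0.02, 2)   # motion model
        if p["landmarks"]:
            likes = [likelihood(z, p["pose"], m, c) for m, c in p["landmarks"]]
            best = int(np.argmax(likes))
            if likes[best] > threshold:
                mean, cov = p["landmarks"][best]
                innov = z - (mean - p["pose"])
                # Key idea: refine the particle pose *before* the landmark EKF update.
                p["pose"] = p["pose"] - 0.5 * innov
                innov = z - (mean - p["pose"])
                S = cov + R
                K = cov @ np.linalg.inv(S)
                p["landmarks"][best] = (mean + K @ innov, (np.eye(2) - K) @ cov)
                p["weight"] *= likes[best]
                continue
        p["landmarks"].append((p["pose"] + z, new_lm_cov.copy()))         # new landmark
    return particles
```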

Findings

Simulation results show that the proposed CESLAM can overcome the heavy computational burden while improving the accuracy of localization and map building. To practically evaluate the performance of the proposed method, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) built on speeded-up robust features (SURF). Experimental results confirm that the proposed CEVSLAM system is capable of successfully estimating the robot pose and building the map with satisfactory accuracy.

Originality/value

The proposed CESLAM algorithm eliminates the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by CESLAM. Combining CESLAM and SURF, the authors establish a CEVSLAM that significantly improves estimation accuracy and computational efficiency. Practical experiments using a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.

Details

Engineering Computations, vol. 34 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 26 August 2014

Xing Wang, Zhenfeng Shao, Xiran Zhou and Jun Liu

This paper aims to present a novel feature design that is able to precisely describe salient objects in images. With the development of space survey, sensor and information…

Abstract

Purpose

This paper aims to present a novel feature design that can precisely describe salient objects in images. With the development of space survey, sensor and information acquisition technologies, increasingly complex objects appear in high-resolution remote sensing images. Traditional visual features are no longer precise enough to describe such images.

Design/methodology/approach

A novel remote sensing image retrieval method based on visual salient point (VSP) features is proposed in this paper. A key point detector and descriptor are used to extract the critical features and their descriptors from remote sensing images. A visual attention model is adopted to calculate the saliency map of each image, separating the salient regions from the background. The key points in the salient regions are then extracted and defined as VSPs, from which the VSP features are constructed. The similarity between images is measured using the VSP features.
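
The abstract does not name the specific detector or attention model, so the sketch below illustrates the general idea with SIFT keypoints and OpenCV's spectral-residual saliency model (both assumptions; the saliency module requires opencv-contrib-python), keeping only the keypoints that fall inside thresholded salient regions.

```python
# Keep only keypoints that fall in salient regions (sketch; detector, saliency
# model, threshold and image name are assumptions, not the paper's choices).
import cv2
import numpy as np

img = cv2.imread("scene.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(img)
salient = (sal_map > sal_map.mean()).astype(np.uint8)      # binary salient-region mask

vsp = [(kp, des) for kp, des in zip(keypoints, descriptors)
       if salient[int(kp.pt[1]), int(kp.pt[0])]]
print(f"{len(vsp)} visual salient points out of {len(keypoints)} keypoints")
```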

Findings

According to the experimental results, VSP features are more precise and stable than traditional visual features in representing diverse remote sensing images. The proposed method achieves better image retrieval precision than the traditional methods.

Originality/value

This paper presents a novel remote sensing image retrieval method based on VSP features.

Details

Sensor Review, vol. 34 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 2 October 2009

Ioannis G. Mariolis and Evangelos S. Dermatas

The purpose of this paper is to provide a robust method for automatic detection of seam lines based only on digital images of the garments.

Abstract

Purpose

The purpose of this paper is to provide a robust method for automatic detection of seam lines based only on digital images of the garments.

Design/methodology/approach

A local standard deviation pre-processing filter is applied to enhance the contrast between the seam line and the texture, and the Prewitt operator extracts the edges of the enhanced image. The seam line is detected at the maximum of the Radon transform. The proposed method is invariant to illumination intensity, and it has also been tested with moving-average and fast Fourier transform low-pass filters in the pre-processing module. Extensive experiments were carried out in the presence of additive Gaussian and uniform noise.
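
A compact sketch of that pipeline follows (assuming SciPy and scikit-image; the filter window size and image name are arbitrary choices): local standard deviation enhances the seam, Prewitt edges are extracted, and the dominant line is read off the maximum of the Radon transform.

```python
# Local-standard-deviation enhancement -> Prewitt edges -> Radon maximum
# (sketch; window size and image name are assumptions).
import numpy as np
from scipy import ndimage
from skimage import io, filters
from skimage.transform import radon

img = io.imread("garment.png", as_gray=True).astype(float)

# Local standard deviation over a 9x9 window enhances the seam against the texture.
mean = ndimage.uniform_filter(img, size=9)
mean_sq = ndimage.uniform_filter(img ** 2, size=9)
local_std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))

edges = filters.prewitt(local_std)                  # Prewitt edge map

# The brightest Radon-transform cell gives the seam line's angle and offset.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(edges, theta=theta, circle=False)
offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print(f"seam angle ~ {theta[angle_idx]:.1f} deg, projection offset index {offset_idx}")
```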

Findings

The proposed method detects 109 out of 118 seams when the local standard deviation is used at the pre-processing stage, giving a mean distance error between the real and the estimated line of 2 mm when the image is digitised at 97 dpi. However, when the images are distorted by additive Gaussian noise at a 20 dB signal-to-noise ratio, the moving-average low-pass filtering method gives the best results, detecting the seam in 104 of the noisy images.

Research limitations/implications

The proposed method detects seam lines that can be approximated by straight lines. The current work can be extended to detect the curved parts of seam lines.

Practical implications

Since the method addresses garments instead of seam specimens, the proposed approach can be incorporated into automatic systems for online quality control of seams.

Originality/value

Local standard deviation belongs to first-order statistics, which makes it suitable for texture analysis and is why it is mostly used in web defect detection. The novelty of the approach, however, is that by treating the seam as an abnormality of the texture, the authors apply this method at the pre-processing stage to enhance the seam before detection. Moreover, the presented method is illumination invariant, a property that has not been addressed in similar methods.

Details

International Journal of Clothing Science and Technology, vol. 21 no. 5
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 9 September 2014

Wen-Yang Chang and Chih-Ping Tsai

This study aims to investigate the spectral illumination characteristics and geometric features of bicycle parts and proposes an image stitching method for their automatic visual…

Abstract

Purpose

This study aims to investigate the spectral illumination characteristics and geometric features of bicycle parts and proposes an image stitching method for their automatic visual inspection.

Design/methodology/approach

The unrealistic color casts in feature inspection are removed using white balance for global adjustment. The scale-invariant feature transform (SIFT) is used to extract and detect the image features for image stitching. The Hough transform is used to detect the parameters of a circle for the roundness of bicycle parts.
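
A short Python/OpenCV sketch of these three steps is shown below; the gray-world white balance, the descriptor-matching step and the HoughCircles parameters are illustrative assumptions rather than the paper's exact settings, and the image names are placeholders.

```python
# Gray-world white balance, SIFT matching for stitching, Hough circle detection
# (sketch; parameter values and image names are assumptions).
import cv2
import numpy as np

img = cv2.imread("stem_part.png")

# 1. Gray-world white balance removes the global color cast.
chans = cv2.split(img.astype(np.float32))
gray_mean = sum(c.mean() for c in chans) / 3.0
balanced = cv2.merge([c * (gray_mean / c.mean()) for c in chans])
balanced = np.clip(balanced, 0, 255).astype(np.uint8)

# 2. SIFT features matched against an overlapping view for stitching.
other = cv2.imread("stem_part_overlap.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.cvtColor(balanced, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray, None)
kp2, des2 = sift.detectAndCompute(other, None)
good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# 3. Hough transform recovers circle parameters for the roundness check.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=40, minRadius=10, maxRadius=200)
print(len(good), "stitching matches;", 0 if circles is None else circles.shape[1], "circles")
```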

Findings

Results show that the maximum errors at 0°, 10°, 20°, 30°, 40° and 50° for the spectral illumination of white-light light-emitting diode arrays with differential shift displacements are 4.4, 4.2, 7.8, 6.8, 8.1 and 3.5 per cent, respectively. The deviation error of image stitching for the stem accessory in the x and y coordinates is 2 pixels. SIFT and RANSAC make it possible to transform the stem image into local feature coordinates that are invariant to illumination change.

Originality/value

This study can be applied to many fields of modern industrial manufacturing and provides useful information for automatic inspection and image stitching.

Details

Assembly Automation, vol. 34 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 26 August 2014

Lounis Chermak, Nabil Aouf and Mark Richardson

In vision-based applications, lighting conditions have a considerable impact on the quality of the acquired images. Extremely low- or high-illumination environments are a real issue for…

Abstract

Purpose

In vision-based applications, lighting conditions have a considerable impact on the quality of the acquired images. Extremely low- or high-illumination environments are a real issue for the majority of cameras because of limitations in their dynamic range. Indeed, over- or under-exposure may result in the loss of essential information through pixel saturation or noise, which can be critical in computer vision applications. High dynamic range (HDR) imaging technology is known to improve image rendering in such conditions. The purpose of this paper is to investigate the level of performance that can be achieved for feature detection and tracking operations in images acquired with an HDR image sensor.

Design/methodology/approach

In this study, four different feature detection techniques are selected, and the tracking algorithm is based on the pyramidal implementation of the Kanade-Lucas-Tomasi (KLT) feature tracker. The tracking algorithm is run over image sequences acquired with an HDR image sensor and with a high-resolution 5-megapixel image sensor to assess them comparatively.
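
The tracking setup can be sketched in a few lines of Python/OpenCV (a generic pyramidal KLT configuration, not the authors' exact detector choice or parameters): detect features in the first frame and track them through the sequence, dropping those whose status flag indicates loss.

```python
# Pyramidal KLT tracking over an image sequence (sketch; detector choice,
# parameters and file pattern are assumptions).
import cv2
import glob

frames = sorted(glob.glob("sequence/*.png"))
prev = cv2.imread(frames[0], cv2.IMREAD_GRAYSCALE)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
initial = len(pts)

lk_params = dict(winSize=(21, 21), maxLevel=3,               # 3 pyramid levels
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

for path in frames[1:]:
    curr = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None, **lk_params)
    pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)      # keep only persisting features
    prev = curr

print(f"{len(pts)} of {initial} features tracked through the whole sequence")
```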

Findings

The authors demonstrate that tracking performance is greatly improved on image sequences acquired with the HDR sensor. The number and percentage of successfully tracked features are several times higher than what can be achieved with a 5-megapixel image sensor.

Originality/value

The specific interest of this work is the evaluation of the tracking persistence of a set of initially detected features over image sequences taken in different scenes, including extreme-illumination indoor and outdoor environments subject to direct sunlight exposure and backlighting, as well as dim-light and dark scenarios.

Details

Kybernetes, vol. 43 no. 8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 31 July 2023

Xinzhi Cao, Yinsai Guo, Wenbin Yang, Xiangfeng Luo and Shaorong Xie

Unsupervised domain adaptation for object detection not only mitigates the poor model performance resulting from the domain gap but also makes it possible to apply knowledge trained on a…

Abstract

Purpose

Unsupervised domain adaptation for object detection not only mitigates the poor model performance resulting from the domain gap but also makes it possible to apply knowledge trained on one domain to a distinct domain. However, aligning whole features may confuse object and background information, making it challenging to extract discriminative features. This paper aims to propose an improved approach, called intrinsic feature extraction domain adaptation (IFEDA), to extract discriminative features effectively.

Design/methodology/approach

IFEDA consists of an intrinsic feature extraction (IFE) module and an object consistency constraint (OCC). The IFE module, designed at the instance level, mainly addresses the difficulty of extracting discriminative object features; specifically, more attention can be paid to the discriminative regions of objects. Meanwhile, the OCC is deployed to determine whether the category prediction in the target domain corresponds with that in the source domain.
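
The abstract does not define the OCC mathematically; as one plausible, purely illustrative reading, the sketch below computes a symmetric KL-divergence penalty between per-object category predictions from the source and target domains. The formulation and the example numbers are assumptions, not the paper's definition.

```python
# Toy illustration of a category-consistency penalty between domains (a generic
# symmetric-KL formulation assumed for illustration, not the paper's OCC).
import numpy as np

def consistency_penalty(source_probs, target_probs, eps=1e-8):
    """Symmetric KL divergence between per-object class distributions."""
    p = np.clip(np.asarray(source_probs, float), eps, 1.0)
    q = np.clip(np.asarray(target_probs, float), eps, 1.0)
    kl_pq = np.sum(p * np.log(p / q), axis=-1)
    kl_qp = np.sum(q * np.log(q / p), axis=-1)
    return 0.5 * (kl_pq + kl_qp)

# Example: class predictions for one object under source- and target-domain heads.
print(consistency_penalty([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))
```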

Findings

Experimental results demonstrate the validity of our approach, which achieves good outcomes on challenging data sets.

Research limitations/implications

A limitation of this research is that only one target domain is considered, and model generalization may be affected when data sets are insufficient or unseen domains appear.

Originality/value

This paper addresses the issue of critical information defects by tackling the difficulty of extracting discriminative features. In addition, the categories in both domains are constrained to be consistent for better object detection.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 16 January 2017

Shervan Fekriershad and Farshad Tajeripour

The purpose of this paper is to propose a color-texture classification approach which uses color sensor information and texture features jointly. High accuracy, low noise…

Abstract

Purpose

The purpose of this paper is to propose a color-texture classification approach that uses color sensor information and texture features jointly. High accuracy, low noise sensitivity and low computational complexity are the stated aims of the proposed approach.

Design/methodology/approach

Local binary patterns (LBP) are one of the most efficient texture analysis operators. The proposed approach includes two steps. First, a noise-resistant version of color LBP is proposed to decrease its sensitivity to noise; this step combines color sensor information using an AND operation. Second, a significant-point selection algorithm is proposed to select significant LBPs; this phase decreases the final computational complexity while increasing the accuracy rate.
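
Following the abstract's wording, a small scikit-image sketch of combining per-channel LBP codes with an AND operation is given below; the radius, neighbor count and the exact way the AND is applied are assumptions rather than the paper's HCLBP definition.

```python
# Color LBP codes combined across channels with a bitwise AND (sketch; radius,
# number of neighbors and image name are assumptions).
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern

img = io.imread("texture.png")                     # H x W x 3 color image
radius, n_points = 1, 8

# Per-channel LBP codes, cast to integers so they can be combined bitwise.
codes = [local_binary_pattern(img[..., c], n_points, radius, method="default").astype(np.uint8)
         for c in range(3)]

# AND across the color channels keeps only bits agreed on by all channels,
# which suppresses single-channel noise responses.
combined = codes[0] & codes[1] & codes[2]

hist, _ = np.histogram(combined, bins=np.arange(257), density=True)
print("feature vector length:", hist.size)
```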

Findings

The proposed approach is evaluated on the Vistex, Outex and KTH-TIPS-2a data sets and compared with some state-of-the-art methods. It is experimentally demonstrated that the proposed approach achieves the highest accuracy. Two further experiments show the low noise sensitivity and low computational complexity of the proposed approach in comparison with previous versions of LBP. Rotation invariance, multi-resolution analysis and general usability are other advantages of the proposed approach.

Originality/value

In the present paper, a new version of LBP, called hybrid color local binary patterns (HCLBP), is proposed. HCLBP can be used in many image processing applications to extract color/texture features jointly. A significant-point selection algorithm is also proposed, for the first time, to select key points in images.
