Search results

1 – 4 of 4
Article
Publication date: 14 November 2016

Anan Banharnsakun and Supannee Tanathong

Abstract

Purpose

Developing algorithms for automated detection and tracking of multiple objects is a key challenge in the field of object tracking. In traffic video monitoring systems in particular, vehicle detection is an essential and challenging task. Many vehicle detection methods have been presented in previous studies; these approaches mostly use either motion information or characteristic information to detect vehicles. Although such methods are effective in detecting vehicles, their detection accuracy still needs to be improved. Moreover, the headlights and windshields, which these methods use as vehicle features for detection, are easily obscured in some traffic conditions. The paper aims to address these issues.

Design/methodology/approach

First, each frame is captured from the video sequence and background subtraction is performed using the Mixture-of-Gaussians background model. Next, the Shi-Tomasi corner detection method is employed to extract feature points from the objects of interest in each foreground scene, and a hierarchical clustering approach is applied to group these points into feature blocks. These feature blocks are then used to track the moving objects frame by frame.
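The steps described above map onto widely available building blocks. Below is a minimal sketch of such a pipeline, assuming OpenCV's MOG2 background subtractor, its Shi-Tomasi detector (goodFeaturesToTrack) and SciPy's hierarchical clustering; the video file name, parameter values and the 50-pixel cluster cut-off are illustrative assumptions rather than values taken from the paper.

```python
# Hedged sketch of the described pipeline: Mixture-of-Gaussians background
# subtraction, Shi-Tomasi corners and hierarchical clustering into feature blocks.
import cv2
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

cap = cv2.VideoCapture("traffic.mp4")                 # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame)                        # Mixture-of-Gaussians foreground mask
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Shi-Tomasi corner detection, restricted to the foreground regions
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=300, qualityLevel=0.01,
                                      minDistance=7, mask=fg_mask)
    if corners is None or len(corners) < 2:
        continue
    pts = corners.reshape(-1, 2)

    # Hierarchical clustering groups nearby corners into "feature blocks",
    # one block per candidate vehicle (the 50 px cut-off is an assumption)
    labels = fcluster(linkage(pts, method="single"), t=50, criterion="distance")
    for lbl in np.unique(labels):
        block = pts[labels == lbl].reshape(-1, 1, 2).astype(np.int32)
        x, y, w, h = cv2.boundingRect(block)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```

Tracking the feature blocks from frame to frame (e.g. by associating blocks with the smallest centroid distance between consecutive frames) would complete the sketch.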

Findings

Using the proposed method, vehicles can be detected in both day-time and night-time scenarios with a 95 percent accuracy rate, and the method copes with irrelevant movement (e.g. waving trees), which is treated as background. In addition, the proposed method is able to deal with different vehicle shapes such as cars, vans and motorcycles.

Originality/value

This paper presents a hierarchical clustering of features approach for tracking multiple vehicles in traffic environments, improving detection and tracking capability in cases where the vehicle features are obscured by traffic conditions.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 9 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 8 April 2020

Dheyaa Hussein

Abstract

Purpose

This study aims to provide a method to assess the perceptual impact of the visual complexity of building façades.

Design/methodology/approach

The research identifies the number of design elements and the variation in their position and colour as variables of visual complexity. It introduces the concepts of vertices and corners as atomic indicators on which the measurement of these variables is built. It measures visual complexity and its variables in images of building façades and analyses their relationships with participants' reactions. It reports on the effect of visual complexity on preferences, the adequacy of different methods in measuring visual complexity and the perceptual impact of each of its variables.
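As a rough illustration of how such atomic indicators might be computed from a façade photograph, the sketch below counts corner responses and measures colour variation with OpenCV. The Harris detector, the thresholds and the simple density normalisation are assumptions made for illustration; they are not the authors' exact measurement procedure.

```python
# Illustrative sketch: corner/vertex counts and colour variation as
# indicators of the visual complexity of a building facade.
import cv2
import numpy as np

def complexity_indicators(image_path: str) -> dict:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Corner/vertex count via Harris response thresholding (assumed detector)
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corner_count = int((harris > 0.01 * harris.max()).sum())

    # Colour variation as the standard deviation of hue across the facade
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    colour_variation = float(hsv[:, :, 0].std())

    # Normalise by image area so facades of different sizes are comparable
    density = corner_count / (gray.shape[0] * gray.shape[1])
    return {"corner_count": corner_count,
            "corner_density": density,
            "colour_variation": colour_variation}
```

Indicators of this kind could then be statistically mapped (e.g. by regression) onto participants' preference ratings, as the abstract describes.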

Findings

The research demonstrates that visual complexity can be assessed through the measure of its variables and their statistical mapping to users' preferences.

Originality/value

The manuscript provides the foundation for a planning/assessment tool for the visual control of the built environment using computer systems, based on the preferences of residents and an examination of the relationship between users and their environment. It creates a paradigm that introduces a robust concept in the visual analysis of urban design.

Details

Smart and Sustainable Built Environment, vol. 9 no. 4
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 10 August 2020

Bin Li, Yu Yang, Chengshuai Qin, Xiao Bai and Lihui Wang

Abstract

Purpose

Focusing on the problem that visual detection of the navigation path line in intelligent harvester robots is susceptible to interference and suffers from low accuracy, a navigation path detection algorithm based on improved random sampling consensus is proposed.

Design/methodology/approach

First, inverse perspective mapping is applied to the original images of rice or wheat to restore the three-dimensional spatial geometric relationship between the crop rows. Second, the target region is set and the image is enhanced to highlight the difference between harvested and unharvested rice or wheat regions; a median filter is used to remove inter-crop gap interference and improve the anti-interference ability of rice or wheat image segmentation. Third, the maximum-variance method is applied to threshold the rice or wheat images in the operation area; the image is further segmented with single-point region growing, and the harvesting boundary corners are detected to improve the accuracy of harvesting boundary recognition. Finally, the harvesting boundary corner points are fitted to obtain the navigation path line, which improves the real-time performance of crop image processing.
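A minimal sketch of this processing chain follows, assuming OpenCV for the inverse perspective mapping, median filtering and maximum-variance (Otsu) thresholding, and scikit-learn's RANSACRegressor as a stand-in for the robust line fit. The homography, kernel size, boundary extraction rule and RANSAC settings are illustrative assumptions; the paper's improved random sampling consensus is not reproduced here.

```python
# Hedged sketch: inverse perspective mapping, Otsu thresholding and a
# RANSAC line fit to harvesting-boundary points.
import cv2
import numpy as np
from sklearn.linear_model import RANSACRegressor

def navigation_line(frame: np.ndarray, H: np.ndarray):
    """H is an assumed camera-to-ground homography for inverse perspective mapping."""
    top_view = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))

    gray = cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)          # suppress inter-crop gap interference

    # Maximum-variance (Otsu) thresholding separates harvested from unharvested crop
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Simplified boundary extraction: per image row, take the left-most
    # pixel of the white (unharvested) region as a boundary point
    ys, xs = [], []
    for y in range(binary.shape[0]):
        cols = np.flatnonzero(binary[y])
        if cols.size:
            ys.append(y)
            xs.append(cols[0])
    ys, xs = np.asarray(ys), np.asarray(xs)

    # Robust line fit x = a*y + b; RANSAC rejects outlier boundary points
    model = RANSACRegressor(residual_threshold=5.0)
    model.fit(ys.reshape(-1, 1), xs)
    a = float(model.estimator_.coef_[0])
    b = float(model.estimator_.intercept_)
    return a, b                              # navigation line parameters in the top view
```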

Findings

The experimental results demonstrate that the improved random sampling consensus, with an average success rate of 94.6%, is more reliable than the least-squares method, the probabilistic Hough transform and traditional random sampling consensus detection. It can extract the navigation line of the intelligent combine robot in real time, with an average processing time of 57.1 ms per frame.

Originality/value

In precision agriculture technology, accurate identification of the navigation path of the intelligent combine robot is the key to realizing accurate positioning. In the vision navigation system of a harvester, extraction of the navigation line is the core task and determines the speed and precision of navigation.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 April 2019

Chengchao Bai, Jifeng Guo and Hongxing Zheng

Abstract

Purpose

The purpose of this paper is to verify the correctness and feasibility of a simultaneous localization and mapping (SLAM) algorithm based on a red-green-blue depth (RGB-D) camera for high-precision navigation and localization of a celestial exploration rover.

Design/methodology/approach

First, a positioning algorithm based on a depth camera is proposed. Second, the realization method is described in terms of five aspects: feature detection, feature point matching, point cloud mapping, motion estimation and high-precision optimization. For feature detection, taking precision, real-time performance and motion constraints into comprehensive consideration, the ORB (oriented FAST and rotated BRIEF) feature extraction method is adopted. Feature point matching addresses the similarity measure of the feature descriptor vectors and how to remove mismatched points. Point cloud mapping establishes the correspondence between the two-dimensional RGB information and the depth (D) information. Motion estimation uses the iterative closest point algorithm to solve the point set registration. High-precision optimization is performed with the graph optimization method.
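A compact sketch of such a front end is given below, assuming OpenCV for ORB extraction and brute-force Hamming matching (with cross-check to remove mismatches) and Open3D's ICP for the frame-to-frame motion estimate. The camera intrinsics and the metric depth image are placeholder assumptions, and the graph-optimization back end is omitted.

```python
# Hedged sketch of an RGB-D front end: ORB features, matching, depth
# back-projection and ICP-based motion estimation.
import cv2
import numpy as np
import open3d as o3d

fx = fy = 525.0              # assumed pinhole intrinsics
cx, cy = 319.5, 239.5

def backproject(pt, depth):
    """Lift a 2D keypoint to 3D using the depth image (assumed to be in metres)."""
    u, v = int(pt[0]), int(pt[1])
    z = float(depth[v, u])
    if z <= 0:
        return None
    return [(u - cx) * z / fx, (v - cy) * z / fy, z]

def estimate_motion(rgb1, depth1, rgb2, depth2):
    g1 = cv2.cvtColor(rgb1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(rgb2, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=1000)            # ORB feature detection
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)

    # Brute-force Hamming matching; cross-check removes many mismatched points
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)

    p1, p2 = [], []
    for m in matches:
        a = backproject(k1[m.queryIdx].pt, depth1)
        b = backproject(k2[m.trainIdx].pt, depth2)
        if a is not None and b is not None:
            p1.append(a)
            p2.append(b)

    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(p1)))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.asarray(p2)))

    # ICP solves the point set registration, yielding the frame-to-frame pose
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=0.05,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation                     # 4x4 homogeneous transform
```

Chaining these frame-to-frame transforms and refining them with a pose-graph optimizer would correspond to the high-precision optimization step described in the abstract.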

Findings

The proposed high-precision SLAM algorithm is very effective for solving the high-precision navigation and positioning of a celestial exploration rover.

Research limitations/implications

In this paper, simulation validation is based on an open-source data set for testing; physical verification is based on an existing unmanned vehicle platform that simulates the celestial exploration rover.

Practical implications

This paper presents an RGB-D camera-based navigation algorithm that has been validated through simulation experiments and physical verification. The algorithm performs well in terms of real-time operation and accuracy, has strong applicability, can support tests and experiments on a hardware platform and offers good environmental adaptability.

Originality/value

The proposed SLAM algorithm can effectively deal with the high-precision navigation and positioning of a celestial exploration rover. Given the broad application prospects of computer vision, the method in this paper can provide a research foundation for deep space probes.

Details

Aircraft Engineering and Aerospace Technology, vol. 91 no. 7
Type: Research Article
ISSN: 1748-8842
