Search results

1 – 10 of over 9000
Article
Publication date: 3 May 2019

Pandia Rajan Jeyaraj and Edward Rajan Samuel Nadar

Abstract

Purpose

The purpose of this paper is to focus on the design and development of computer-aided fabric defect detection and classification employing an advanced learning algorithm.

Design/methodology/approach

To make the classification of fabric defects fast and effective, the authors have considered a characteristic of the texture, namely its colour. A deep convolutional neural network is formed to learn from various defect data sets during the training phase. In the testing phase, the learned features are utilised for defect classification.
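The abstract does not describe the network itself, so the following is only a minimal sketch of the same colour-feature train/test idea, with a nearest-centroid rule standing in for the paper's deep CNN (all function names and the feature choice are assumptions, not the authors' method):

```python
import numpy as np

def colour_feature(patch):
    """Per-channel mean and standard deviation as a crude colour/texture descriptor."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def train(patches, labels):
    """Training phase: learn one feature centroid per defect class."""
    feats = np.array([colour_feature(p) for p in patches])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in sorted(set(labels.tolist()))}

def classify(patch, centroids):
    """Testing phase: assign the class whose centroid is nearest in feature space."""
    f = colour_feature(patch)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

A real implementation would replace the centroid rule with the trained network, but the split into a learning phase and a feature-based testing phase is the same.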

Findings

The improvement in defect classification accuracy has been achieved by employing a deep learning algorithm. The authors have tested the defect classification accuracy on six different fabric materials and have obtained an average accuracy of 96.55 per cent, with 96.4 per cent sensitivity and a 0.94 success rate.

Practical implications

The authors evaluated the method using 20 different data sets collected from different raw fabrics. They also tested the algorithm on the standard data set provided by the Ministry of Textile. In the testing task, the authors obtained an average accuracy of 94.85 per cent, with six defects being successfully recognised by the proposed algorithm.

Originality/value

The quantitative value of the performance index shows the effectiveness of the developed classification algorithm. Moreover, the computational time for processing different fabrics is presented to compare the proposed algorithm against conventional fabric processing techniques. Hence, the proposed computer vision-based fabric defect detection system can be used for accurate defect detection and computer-aided analysis.

Details

International Journal of Clothing Science and Technology, vol. 31 no. 4
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 16 February 2021

Elena Villaespesa and Seth Crider

Abstract

Purpose

Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keywords tags assigned by the museum and those produced by three computer vision systems.

Design/methodology/approach

This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags.
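The core comparison of curatorial and machine-generated keywords can be illustrated with a simple set-based sketch (a hypothetical simplification: the paper's actual method maps terms through the Getty AAT, which is not reproduced here):

```python
def tag_overlap(museum_tags, cv_tags):
    """Compare curatorial and machine-generated keyword sets after simple
    normalisation (lower-casing and whitespace stripping)."""
    m = {t.strip().lower() for t in museum_tags}
    c = {t.strip().lower() for t in cv_tags}
    union = m | c
    return {
        "shared": m & c,        # terms both vocabularies agree on
        "museum_only": m - c,   # curatorial terms the CV systems missed
        "cv_only": c - m,       # machine terms that could expand the record
        "jaccard": len(m & c) / len(union) if union else 0.0,
    }
```

The `cv_only` set is where the paper sees opportunity (new access points) and risk (inaccurate or context-free tags) at the same time.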

Findings

This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art historical one. However, the study also surfaces challenges regarding the accuracy and lack of context of the computer vision results.

Practical implications

This finding has important implications for how these machine-generated tags complement the current taxonomies and vocabularies entered in the collection database. In consequence, the museum needs to consider the selection process for choosing which computer vision system to apply to its collection. Furthermore, it also needs to think critically about the kinds of tags it wishes to use, such as colours, materials or objects.

Originality/value

The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations of aspects to consider before selecting and implementing these technologies.

Details

Journal of Documentation, vol. 77 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 7 April 2023

Sixing Liu, Yan Chai, Rui Yuan and Hong Miao

Abstract

Purpose

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small, its data features are sparse, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a tightly coupled three-dimensional laser map-building method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach

The visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain their depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information to establish a threshold and construct a loop-closure framework that further reduces the cumulative drift error of the system over time.
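The bidirectional consistency idea behind the front-end matching can be sketched as a mutual nearest-neighbour test (an illustrative simplification: the paper's bidirectional RANSAC additionally fits a geometric model to reject outliers, which is omitted here):

```python
import numpy as np

def cross_check_matches(desc_a, desc_b):
    """Keep only mutual nearest-neighbour matches between two descriptor sets.
    A pair survives only if A's best match in B points back to the same A point."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in B for each descriptor in A
    b_to_a = d.argmin(axis=0)   # best match in A for each descriptor in B
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

One-directional matching lets ambiguous features pair off arbitrarily; requiring agreement in both directions discards most of those mismatched pairs before pose estimation.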

Findings

Experiments on publicly available data sets show that the proposed method can match the real trajectory well. For various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method was also verified in a real environment using an autonomous walking acquisition platform; the system loaded with the method can run well for long periods and adapts to multiple scene environments.

Originality/value

A multi-sensor tightly coupled data fusion method is proposed to fuse laser and vision information for optimal estimation of the pose. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 11 June 2018

Long Xin, Delin Luo and Han Li

Abstract

Purpose

The purpose of this paper is to develop a monocular visual measurement system for autonomous aerial refueling (AAR) of an unmanned aerial vehicle, which can process images from an infrared camera to estimate the pose of the drogue on the tanker with high accuracy and real-time performance.

Design/methodology/approach

Methods and techniques for marker detection, feature matching and pose estimation have been designed and implemented in the visual measurement system.

Findings

The simple blob detection (SBD) method is adopted, which outperforms the Laplacian of Gaussian method, and a novel noise-elimination algorithm is proposed for excluding noise points. In addition, a novel feature matching algorithm based on perspective transformation is proposed. Comparative experimental results indicate the rapidity and effectiveness of the proposed methods.
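The idea of matching detected markers through a perspective transformation can be sketched as follows (a hypothetical simplification of the paper's algorithm: project the known marker layout through an estimated homography and pair each projection with the nearest detection):

```python
import numpy as np

def apply_perspective(H, pts):
    """Apply a 3x3 perspective (homography) transform to N x 2 points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the projective scale

def match_by_projection(H, model_pts, detected_pts, tol=5.0):
    """Pair each projected model marker with the nearest detection within tol pixels."""
    proj = apply_perspective(H, model_pts)
    pairs = []
    for i, p in enumerate(proj):
        dists = np.linalg.norm(detected_pts - p, axis=1)
        j = int(dists.argmin())
        if dists[j] < tol:
            pairs.append((i, j))
    return pairs
```

The tolerance `tol` and the one-nearest-neighbour pairing rule are assumptions for illustration; a practical system would also reject detections claimed by two model points.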

Practical implications

The visual measurement system developed in this paper can be applied to estimate the pose of the drogue quickly and with high accuracy. It is a feasible measurement strategy that will considerably increase the autonomy and reliability of AAR.

Originality/value

The SBD method is used to detect the features, and a novel noise-elimination algorithm is proposed. In addition, a novel feature matching algorithm based on perspective transformation is proposed, which is robust and accurate.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 21 May 2021

Chang Liu, Samad M.E. Sepasgozar, Sara Shirowzhan and Gelareh Mohammadi

Abstract

Purpose

The practice of artificial intelligence (AI) is increasingly being promoted by technology developers. However, its adoption rate is still reported as low in the construction industry due to a lack of expertise and the limited number of reliable applications of AI technology. Hence, this paper aims to present the detailed outcomes of experiments evaluating the applicability and performance of AI object detection algorithms for detecting modular construction objects.

Design/methodology/approach

This paper provides a thorough evaluation of two deep learning algorithms for object detection: the faster region-based convolutional neural network (faster RCNN) and the single shot multi-box detector (SSD). Two types of metrics are also presented: first, the average recall and mean average precision by image pixels; second, the recall and precision by counting. To conduct the experiments with the selected algorithms, four infrastructure and building construction sites were chosen to collect the required data, comprising a total of 990 images of three different but common modular objects: modular panels, safety barricades and site fences.
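The second, counting-based metric can be sketched directly (a hypothetical helper; the pixel-based mean average precision computation is more involved and is not reproduced here):

```python
def counting_metrics(per_image):
    """Counting-based precision and recall over a set of images.
    per_image: list of (ground_truth_count, detected_count, correct_count) tuples."""
    gt = sum(g for g, _, _ in per_image)
    det = sum(d for _, d, _ in per_image)
    ok = sum(c for _, _, c in per_image)
    precision = ok / det if det else 0.0   # fraction of detections that are right
    recall = ok / gt if gt else 0.0        # fraction of true objects that were found
    return precision, recall
```

Counting metrics answer the practical monitoring question ("how many panels are installed?") even when bounding-box overlap with ground truth is imperfect.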

Findings

The results of the comprehensive evaluation show that the performance of faster RCNN and SSD depends on the context in which detection occurs. Indeed, surrounding objects and the backgrounds of the objects affect the level of accuracy obtained from the AI analysis and may particularly affect precision and recall. The analysis of the loss lines shows that they depend on both the objects' geometry and the image background. The results on the selected objects show that faster RCNN offers higher accuracy than SSD for their detection.

Research limitations/implications

The results show that modular object detection is crucial in construction for obtaining the information required for project quality and safety objectives. The detection process can significantly improve the monitoring of object installation progress in an accurate, machine-based manner, avoiding human errors. The results of this paper are limited to three construction sites, but future investigations can cover more tasks or objects from different construction sites in a fully automated manner.

Originality/value

This paper’s originality lies in offering new AI applications in modular construction, using a large first-hand data set collected from three construction sites. Furthermore, the paper presents the scientific evaluation results of implementing recent object detection algorithms across a set of extended metrics, using the original training and validation data sets to improve the generalisability of the experimentation. This paper also provides practitioners and scholars with a workflow for AI applications in the modular context and the first-hand referencing data.

Article
Publication date: 10 August 2020

Bin Li, Yu Yang, Chengshuai Qin, Xiao Bai and Lihui Wang

Abstract

Purpose

Focusing on the problem that the visual detection algorithm for the navigation path line in an intelligent harvester robot is susceptible to interference and has low accuracy, a navigation path detection algorithm based on improved random sample consensus is proposed.

Design/methodology/approach

First, inverse perspective mapping is applied to the original images of rice or wheat to restore the three-dimensional spatial geometric relationship between the crop rows. Second, the target region is set and the image enhanced to highlight the difference between harvested and unharvested regions; median filtering is used to remove intercrop gap interference and improve the anti-interference ability of the image segmentation. Third, the maximum-variance (Otsu) method is applied to threshold the crop images in the operation area. The image is further segmented with single-point region growing, and the harvesting boundary corners are detected to improve the accuracy of boundary recognition. Finally, the harvesting boundary corner points are fitted as the navigation path line, improving the real-time performance of crop image processing.
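The final fitting step relies on a RANSAC-style consensus line through the detected boundary corners. A minimal sketch of that idea (a generic RANSAC line fit, not the authors' improved variant, whose details the abstract does not give):

```python
import numpy as np

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """Fit a 2-D line y = m*x + b to corner points, tolerating outliers.
    points: N x 2 array; assumes the consensus line is not vertical."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                      # skip degenerate vertical samples
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        resid = np.abs(points[:, 1] - (m * points[:, 0] + b))
        inliers = points[resid < tol]     # points that agree with this candidate
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit by least squares on the consensus set only
    m, b = np.polyfit(best_inliers[:, 0], best_inliers[:, 1], 1)
    return float(m), float(b)
```

Unlike a plain least-squares fit, the consensus step prevents a few spurious corners (e.g. from intercrop gaps) from dragging the navigation line off the true harvesting boundary.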

Findings

The experimental results demonstrate that the improved random sample consensus, with an average success rate of 94.6%, is more reliable than the least squares method, probabilistic Hough detection and traditional random sample consensus. It can extract the navigation line of the intelligent combine robot in real time, at an average of 57.1 ms per frame.

Originality/value

In precision agriculture, accurate identification of the navigation path of the intelligent combine robot is the key to accurate positioning. In the vision navigation system of a harvester, extraction of the navigation line is the core component that determines the speed and precision of navigation.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 9 January 2024

Zhuoyu Zhang, Lijia Zhong, Mingwei Lin, Ri Lin and Dejun Li

Abstract

Purpose

Docking technology plays a crucial role in enabling long-duration operations of autonomous underwater vehicles (AUVs). Visual positioning solutions alone are susceptible to abnormal drift values due to the challenging underwater optical imaging environment, and when an AUV approaches the docking station, the absolute positioning method fails if the AUV captures an insufficient number of tracers. This study aims to provide a more stable absolute-position visual positioning method for underwater terminal visual docking.

Design/methodology/approach

This paper presents a six-degree-of-freedom positioning method for AUV terminal visual docking, which uses lights and triangle codes. The authors use an extended Kalman filter to fuse the visual calculation results with inertial measurement unit data. Moreover, this paper proposes a triangle code recognition and positioning algorithm.
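A full six-degree-of-freedom extended Kalman filter is beyond the scope of an abstract, but the fusion idea — correct an IMU-propagated state with a visual fix, weighting each by its uncertainty — reduces in one dimension to the standard Kalman update (an illustrative sketch, not the authors' filter):

```python
def kalman_fuse(x_pred, p_pred, z_meas, r_meas):
    """One scalar Kalman update: fuse an IMU-propagated state estimate
    (x_pred with variance p_pred) with a visual position fix
    (z_meas with variance r_meas)."""
    k = p_pred / (p_pred + r_meas)      # Kalman gain: trust in the measurement
    x = x_pred + k * (z_meas - x_pred)  # corrected state
    p = (1.0 - k) * p_pred              # reduced uncertainty after fusion
    return x, p
```

When the visual fix drifts abnormally (large `r_meas`), the gain shrinks and the IMU prediction dominates; when the IMU error accumulates (large `p_pred`), the visual fix pulls the estimate back — which is exactly the complementarity the paper exploits.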

Findings

The authors conducted a simulation experiment to compare the underwater positioning performance of triangle codes, AprilTag and ArUco. The results demonstrate that the implemented triangle code reduces running time by over 70% compared with the other two codes and also exhibits a longer recognition distance in turbid environments. Subsequent experiments carried out in Qingjiang Lake, Hubei Province, China, further confirmed the effectiveness of the proposed positioning algorithm.

Originality/value

This fusion approach effectively mitigates abnormal drift errors stemming from visual positioning and cumulative errors resulting from inertial navigation. The authors also propose a triangle code recognition and positioning algorithm as a supplementary approach to overcome the limitations of tracer light positioning beacons.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 March 1991

Roy Davies

Abstract

The design of vision algorithms for industrial applications is often considered to be an art form. In this article Roy Davies demonstrates that it can be a science.

Details

Sensor Review, vol. 11 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 February 1981

C. Braccini, G. Gambardella, G. Sandini and V. Tagliasco

Abstract

A technique has been devised for artificial vision algorithms in which the features of retinal receptors, the receptive fields of the peripheral cells and cortical retinotopic mapping are combined to perform template matching with a single reference pattern.
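Cortical retinotopic mapping is commonly modelled as a log-polar resampling of the image, under which rotation and scaling about the fixation point become simple shifts along the sampling axes — which is what lets a single reference pattern suffice. That modelling choice is an assumption here; a nearest-neighbour sketch of the resampling:

```python
import numpy as np

def log_polar_sample(img, center, n_r=16, n_theta=16):
    """Resample a grayscale image on a log-polar grid centred at `center`.
    Rows index log-radius, columns index angle, so scaling about the centre
    shifts rows and rotation about the centre shifts columns."""
    h, w = img.shape
    cy, cx = center
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))        # 1 .. r_max
    angles = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_r, n_theta))
    for i, r in enumerate(radii):
        for j, t in enumerate(angles):
            y = int(round(cy + r * np.sin(t)))   # nearest-neighbour sampling
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[y, x]
    return out
```

Matching then reduces to locating a 2-D shift of the reference pattern in this (log r, theta) domain, rather than searching over rotations and scales in the image domain.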

Details

Sensor Review, vol. 1 no. 2
Type: Research Article
ISSN: 0260-2288

Content available
Article
Publication date: 1 August 2001

Jon Rigelsford

Abstract

Details

Industrial Robot: An International Journal, vol. 28 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords
