Search results

1 – 10 of 74
Article
Publication date: 30 November 2020

Anton Saveliev, Egor Aksamentov and Evgenii Karasev

The purpose of this paper is to analyze the development of a novel approach for automated terrain mapping and robotic vehicle path tracing.

Abstract

Purpose

The purpose of this paper is to analyze the development of a novel approach for automated terrain mapping and robotic vehicle path tracing.

Design/methodology/approach

The approach stitches images obtained from an unmanned aerial vehicle, based on ORB descriptors, into an orthomosaic image, and the GPS coordinates are bound to the corresponding pixels of the map. The obtained image is fed to a Mask R-CNN neural network to detect and classify regions that are potentially dangerous for robotic vehicle motion. To visualize the obtained map and the obstacles on it, the authors propose their own application architecture. Users can at any time edit the marked areas or add new ones that are not intended for robotic vehicle traffic. The GPS coordinates of these areas are then passed to the robotic vehicles, and the optimal route is traced based on these data.
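The abstract does not include the authors' code; the GPS-binding step can be sketched minimally in Python, assuming an axis-aligned, north-up orthomosaic georeferenced by the GPS coordinates of its corner pixels (the function name and the linear model are illustrative assumptions, not the authors' implementation):

```python
def pixel_to_gps(px, py, width, height, top_left, bottom_right):
    """Bind a GPS coordinate to a pixel of an axis-aligned orthomosaic
    by linear interpolation between known corner coordinates.
    top_left is the (lat, lon) of pixel (0, 0); bottom_right is the
    (lat, lon) of pixel (width - 1, height - 1)."""
    lat0, lon0 = top_left
    lat1, lon1 = bottom_right
    lat = lat0 + (lat1 - lat0) * py / (height - 1)
    lon = lon0 + (lon1 - lon0) * px / (width - 1)
    return lat, lon
```

In such a pipeline, the pixels flagged as impassable by the segmentation network would be passed through a mapping of this kind before their coordinates are sent to the robotic vehicles.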

Findings

The developed approach reveals impassable regions on the terrain map and associates them with GPS coordinates; these regions can then be edited by the user.

Practical implications

The total duration of the algorithm, including the Mask R-CNN step, on the same dataset of 120 items was 7.5 s.

Originality/value

Creating an orthophotomap from 120 images with an image resolution of 470 × 425 px requires less than 6 s on a laptop with moderate computing power, which justifies using such algorithms in the field without powerful and expensive hardware.

Details

International Journal of Intelligent Unmanned Systems, vol. 10 no. 2/3
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 22 September 2022

Tao Li, Yexin Lyu, Ziyi Guo, Lei Du and Fengyuan Zou

The main purpose is to construct the mapping relationship between garment flat and pattern. Particle swarm optimization–least-squares support vector machine (PSO-LSSVM), the…

Abstract

Purpose

The main purpose is to construct the mapping relationship between garment flat and pattern. Particle swarm optimization–least-squares support vector machine (PSO-LSSVM), a data-driven model, is proposed for predicting the pattern design dimensions from small sample sizes by digitizing the experience of patternmakers.

Design/methodology/approach

For this purpose, the sleeve components were automatically localized and segmented from the garment flat by the Mask R-CNN. The sleeve flat measurements were extracted by the Douglas–Peucker algorithm. Then, the PSO algorithm was used to optimize the LSSVM parameters. PSO-LSSVM was trained by utilizing the experience of patternmakers.
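The Douglas–Peucker simplification used to extract the sleeve flat measurements is a standard algorithm; a textbook Python implementation (not the authors' code) looks like this:

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline: keep the endpoints, and recursively keep the
    point farthest from the chord if it deviates by more than epsilon."""
    def perp_dist(p, a, b):
        # Perpendicular distance from point p to the line through a and b.
        if a == b:
            return math.dist(p, a)
        (x0, y0), (x1, y1), (x2, y2) = p, a, b
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        return num / math.dist(a, b)

    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

Points whose perpendicular distance to the chord stays below `epsilon` are dropped, so the contour of a sleeve flat is reduced to its salient corner points before measurement.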

Findings

The experimental results demonstrated that the PSO-LSSVM model can effectively improve generalization ability and prediction accuracy in pattern design dimensions, even with small sample sizes. The mean square error reached 1.057 ± 0.06, and the fluctuation range of the absolute error was smaller than that of other models such as pure LSSVM, backpropagation and radial basis function prediction models.

Originality/value

By constructing the mapping relationship between sleeve flat and pattern, the problems of objective garment-flat recognition and accurate prediction of pattern design dimensions were solved. Meanwhile, in the proposed method the model parameters are determined by PSO rather than empirically. This framework could be extended to other garment components.

Details

International Journal of Clothing Science and Technology, vol. 35 no. 1
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 20 December 2022

Biyanka Ekanayake, Alireza Ahmadian Fard Fini, Johnny Kwok Wai Wong and Peter Smith

Recognising the as-built state of construction elements is crucial for construction progress monitoring. Construction scholars have used computer vision-based algorithms to…

Abstract

Purpose

Recognising the as-built state of construction elements is crucial for construction progress monitoring. Construction scholars have used computer vision-based algorithms to automate this process. Robust object recognition from indoor site images has been inhibited by technical challenges related to indoor objects, lighting conditions and camera positioning. Compared with traditional machine learning algorithms, one-stage detector deep learning (DL) algorithms prioritise inference speed and enable real-time, accurate object detection and classification. This study aims to present a DL-based approach to facilitate the as-built state recognition of indoor construction works.

Design/methodology/approach

The one-stage DL-based approach was built upon the YOLO version 4 (YOLOv4) algorithm using transfer learning, with a few hyperparameters customised, and was trained in a Google Colab virtual machine. The process of framing, insulation and drywall installation of indoor partitions was selected as the as-built scenario. For training, images captured from two indoor sites were supplemented with publicly available online images.
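The abstract does not list the customised hyperparameters; purely as an illustration, a typical darknet `yolov4-custom.cfg` transfer-learning setup for a three-class detector (framing, insulation, drywall) touches entries like these, following the usual darknet rules of thumb rather than the paper's actual values:

```ini
# yolov4-custom.cfg excerpt (illustrative values, not the paper's)
[net]
batch=64
subdivisions=32          # higher subdivisions to fit a Colab GPU's memory
width=416
height=416
max_batches=6000         # classes * 2000, with a minimum of 6000

[convolutional]          # the conv layer feeding each [yolo] head
filters=24               # (classes + 5) * 3

[yolo]
classes=3                # framing, insulation, drywall
```

Transfer learning would then start training from pre-trained convolutional weights (e.g. `yolov4.conv.137`) rather than from scratch.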

Findings

The DL model reported a best-trained weight with a mean average precision of 92% and an average loss of 0.83. Compared with previous studies, the automation level of this study is high owing to the use of fixed time-lapse cameras for data collection and zero manual intervention in the pre-processing algorithms that enhance the visual quality of indoor images.

Originality/value

This study extends the application of DL models to recognising the as-built state of indoor construction works from provided training images. It also presents a workflow for training DL models on a virtual machine platform, reducing the computational complexity associated with DL models.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Keywords

Article
Publication date: 13 July 2023

Haolin Fei, Ziwei Wang, Stefano Tedeschi and Andrew Kennedy

This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.

Abstract

Purpose

This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.

Design/methodology/approach

The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments covered different levels of complexity, including varying numbers of distractors, varying lighting conditions and highly varied object geometry.
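As background for the servoing task itself (not part of the paper's contribution), the classic image-based visual servoing law drives the camera with v = −λ L⁺ (s − s*); a minimal NumPy sketch, with an illustrative point feature and depth:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic IBVS interaction matrix for one normalized image point
    (x, y) at depth Z, relating the 6-DOF camera twist to the feature
    velocity (textbook form, e.g. for point features)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Proportional IBVS control law: v = -lambda * pinv(L) @ (s - s*)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```

One Euler step of the feature dynamics ṡ = L v shrinks the feature error, which is exactly the error signal the compared detection pipelines must keep supplying reliably.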

Findings

The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones.

Originality/value

This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 19 March 2024

Cemalettin Akdoğan, Tolga Özer and Yüksel Oğuz

Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of…

Abstract

Purpose

Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of agricultural products. Pesticides can be used to improve the yield of agricultural land. This study aims to make the spraying of cherry trees more effective and efficient with the designed artificial intelligence (AI)-based agricultural unmanned aerial vehicle (UAV).

Design/methodology/approach

Two approaches have been adopted for the AI-based detection of cherry trees. In Approach 1, YOLOv5, YOLOv7 and YOLOv8 models are trained for 70, 100 and 150 epochs. In Approach 2, a new method is proposed to improve on the performance metrics obtained in Approach 1: Gaussian, wavelet transform (WT) and histogram equalization (HE) preprocessing techniques were applied to the generated data set. The best-performing models from Approach 1 and Approach 2 were used in a real-time test application with the developed agricultural UAV.
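Of the preprocessing techniques named, histogram equalization is the simplest to reproduce; a minimal NumPy version for 8-bit grayscale images (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def histogram_equalization(img):
    """Spread an 8-bit grayscale image's intensities over the full
    0-255 range by mapping each gray level through the normalized
    cumulative distribution function (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # CDF of the lowest occurring level
    # Build a lookup table; clip guards levels below the lowest one.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]
```

Such contrast stretching can make tree crowns stand out more sharply against soil before the detector sees the image, which is the usual motivation for applying it as a preprocessing step.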

Findings

In Approach 1, the best F1 score was 98% in 100 epochs with the YOLOv5s model. In Approach 2, the best F1 score and mAP values were obtained as 98.6% and 98.9% in 150 epochs, with the YOLOv5m model with an improvement of 0.6% in the F1 score. In real-time tests, the AI-based spraying drone system detected and sprayed cherry trees with an accuracy of 66% in Approach 1 and 77% in Approach 2. It was revealed that the use of pesticides could be reduced by 53% and the energy consumption of the spraying system by 47%.

Originality/value

An original data set was created by designing an agricultural drone to detect and spray cherry trees using AI. YOLOv5, YOLOv7 and YOLOv8 models were used to detect and classify cherry trees. The results of the performance metrics of the models are compared. In Approach 2, a method including HE, Gaussian and WT is proposed, and the performance metrics are improved. The effect of the proposed method in a real-time experimental application is thoroughly analyzed.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 23 November 2021

Srinivas Talasila, Kirti Rawal and Gaurav Sethi

Extraction of leaf region from the plant leaf images is a prerequisite process for species recognition, disease detection and classification and so on, which are required for crop…

Abstract

Purpose

Extraction of the leaf region from plant leaf images is a prerequisite for species recognition, disease detection and classification and so on, which are required for crop management. Several approaches have been developed to segment the leaf region from the background. However, most methods have been applied to images taken under laboratory setups or against a plain background, whereas leaf segmentation methods must work on real-time cultivation field images that contain complex backgrounds. So far, no efficient method has been developed that automatically segments the leaf region from a complex background specifically for black gram plant leaf images.

Design/methodology/approach

Extracting leaf regions from a complex background is cumbersome, and the proposed PLRSNet (Plant Leaf Region Segmentation Net) is one solution to this problem. In this paper, a customized deep network is designed and applied to extract leaf regions from images taken in cultivation fields.

Findings

The proposed PLRSNet was compared with state-of-the-art methods, and the experimental results show that it yields a Similarity Index/Dice of 96.9%, a Jaccard/IoU of 94.2%, a Correct Detection Ratio of 98.55%, a Total Segmentation Error of 0.059 and an Average Surface Distance of 3.037, a significant improvement over existing methods, particularly on cultivation field images.
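The reported Similarity Index/Dice and Jaccard/IoU follow the standard overlap definitions, which can be computed from binary masks as:

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Similarity Index (Dice) and Jaccard index (IoU) for two binary
    segmentation masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())  # 2|A∩B| / (|A|+|B|)
    iou = inter / union                          # |A∩B| / |A∪B|
    return dice, iou
```

Dice weights the overlap against the mean mask size, IoU against the union, so Dice is always the larger of the two on imperfect masks, consistent with the 96.9% versus 94.2% figures above.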

Originality/value

In this work, a customized deep learning network, named PLRSNet, is designed for segmenting the plant leaf region under a complex background.

Details

International Journal of Intelligent Unmanned Systems, vol. 11 no. 1
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 8 September 2023

Tolga Özer and Ömer Türkmen

This paper aims to design an AI-based drone that can facilitate the complicated and time-intensive control process for detecting healthy and defective solar panels. Today, the use…

Abstract

Purpose

This paper aims to design an AI-based drone that can facilitate the complicated and time-intensive control process for detecting healthy and defective solar panels. Today, the use of solar panels is becoming widespread, and control problems are increasing. Physical inspection of the solar panels is critical in obtaining electrical power. Controlling solar panel power plants and rooftop panel installations spread over large areas can be difficult and time-consuming. Therefore, this paper designs a system aimed at panel detection.

Design/methodology/approach

This paper designed a low-cost AI-based unmanned aerial vehicle to reduce the difficulty of the control process. Convolutional neural network (CNN)-based AI models were developed to classify solar panels as damaged, dusty or normal. Two approaches to the solar panel detection model were adopted: Approach 1 and Approach 2.

Findings

The training was conducted with YOLOv5, YOLOv6 and YOLOv8 models in Approach 1. The best F1 score was 81%, at 150 epochs with YOLOv5m. Best F1 score and mAP values of 87% and 89%, respectively, were obtained with the YOLOv5s model at 100 epochs in Approach 2, the proposed method. The best models from Approaches 1 and 2 were used with the developed AI-based drone in a real-time test application.

Originality/value

The AI-based low-cost solar panel detection drone was developed with an original data set of 1,100 images. A detailed comparative analysis of the performance metrics of the YOLOv5, YOLOv6 and YOLOv8 models was carried out. Gaussian and salt-and-pepper noise addition and wavelet-transform noise removal preprocessing techniques were applied to the created data set under the proposed method. The proposed method demonstrated remarkable performance in panel detection applications.

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 21 June 2021

Zhoufeng Liu, Shanliang Liu, Chunlei Li and Bicao Li

This paper aims to propose a new method to solve the two problems in fabric defect detection. Current state-of-the-art industrial products defect detectors are deep…

Abstract

Purpose

This paper aims to propose a new method to solve two problems in fabric defect detection. Current state-of-the-art industrial product defect detectors are deep learning-based, which incurs additional problems: (1) the model is difficult to train because fabric data sets are small, owing to the difficulty of collecting pictures; (2) the detection accuracy of existing methods is insufficient for deployment in the industrial field. This study intends to propose a new method that can be applied to fabric defect detection in the industrial field.

Design/methodology/approach

To cope with existing fabric defect detection problems, the article proposes a novel fabric defect detection method based on multi-source feature fusion. In the training process, both layer features and source model information are fused to enhance robustness and accuracy. Additionally, a novel training model called multi-source feature fusion (MSFF) is proposed to tackle the limited samples and the demand for fast and precise automatic quantification.

Findings

The paper provides a novel fabric defect detection method. Experimental results demonstrate that the proposed method achieves an AP of 93.9% and 98.8% when applied to the TILDA (a public data set) and ZYFD (a real-shot data set) data sets, respectively, outperforming a fine-tuned SSD (single-shot multi-box detector) by 5.9%.

Research limitations/implications

Our proposed algorithm can provide a promising tool for fabric defect detection.

Practical implications

The paper includes implications for the development of a powerful brand image, the development of “brand ambassadors” and for managing the balance between stability and change.

Social implications

This work provides technical support for real-time detection on industrial sites, advances the process of intelligent manual detection of fabric defects and provides a technical reference for object detection in other industrial fields.

Originality/value

Our proposed algorithm can provide a promising tool for fabric defect detection.

Details

International Journal of Clothing Science and Technology, vol. 34 no. 2
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 27 July 2023

Navodana Rodrigo, Hossein Omrany, Ruidong Chang and Jian Zuo

This study aims to investigate the literature related to the use of digital technologies for promoting circular economy (CE) in the construction industry.

Abstract

Purpose

This study aims to investigate the literature related to the use of digital technologies for promoting circular economy (CE) in the construction industry.

Design/methodology/approach

A comprehensive approach was adopted, involving bibliometric analysis, text-mining analysis and content analysis, to meet three objectives: (1) to unveil the evolutionary progress of the field, (2) to identify the key research themes in the field and (3) to identify challenges hindering the implementation of digital technologies for CE.

Findings

A total of 365 publications was analysed. The results revealed eight key digital technologies, categorised into two main clusters: "digitalisation and advanced technologies" and "sustainable construction technologies". The former involved technologies, namely machine learning, artificial intelligence, deep learning, big data analytics, and object detection and computer vision, that were used for (1) forecasting construction and demolition (C&D) waste generation, (2) waste identification and classification and (3) computer vision for waste management. The latter included technologies such as the Internet of Things (IoT), blockchain and building information modelling (BIM) that help optimise resource use and enhance transparency and sustainability practices in the industry. Overall, these technologies show great potential for improving waste management and enabling CE in construction.

Originality/value

This research employs a holistic approach to provide a status-quo understanding of the digital technologies that can be utilised to support the implementation of CE in construction. Further, this study underlines the key challenges associated with adopting digital technologies, whilst also offering opportunities for future improvement of the field.

Details

Smart and Sustainable Built Environment, vol. 13 no. 1
Type: Research Article
ISSN: 2046-6099

Keywords

Article
Publication date: 22 July 2022

Ying Tao Chai and Ting-Kwei Wang

Defects in concrete surfaces inevitably recur during construction and need to be checked and accepted during construction and completion. Traditional manual inspection…

Abstract

Purpose

Defects in concrete surfaces inevitably recur during construction and need to be checked and accepted during construction and at completion. Traditional manual inspection of surface defects requires inspectors to judge, evaluate and make decisions, which requires sufficient experience and is time-consuming and labor-intensive, and the expertise cannot be effectively preserved and transferred. In addition, the evaluation standards of different inspectors are not identical, which may cause discrepancies in inspection results. Although computer vision can achieve defect recognition, there is a gap between the low-level semantics acquired by computer vision and the high-level semantics that humans understand from images. Therefore, computer vision and ontology are combined to achieve intelligent evaluation and decision-making and to bridge this gap.

Design/methodology/approach

Combining ontology and computer vision, this paper establishes an evaluation and decision-making framework for concrete surface quality. By establishing a concrete surface quality ontology model and a defect identification and quantification model, ontology reasoning technology is used to realize concrete surface quality evaluation and decision-making.

Findings

Computer vision can identify and quantify defects, obtaining low-level image semantics, and ontology can structurally express expert knowledge in the field of defects. The proposed framework can automatically identify and quantify defects and infer their causes, responsibility, severity and repair methods. Case analyses of various scenarios show that the proposed evaluation and decision-making framework is feasible.
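The rule side of such a framework can be illustrated with a toy severity rule; the thresholds and repair methods below are invented for illustration and are not the paper's values:

```python
def classify_crack(width_mm):
    """Toy rule in the spirit of ontology-based reasoning: map a
    quantified defect (crack width in mm) to a severity class and a
    repair method. Thresholds are illustrative assumptions only."""
    if width_mm < 0.2:
        return "minor", "cosmetic sealing"
    if width_mm < 1.0:
        return "moderate", "epoxy injection"
    return "severe", "structural assessment"
```

In the paper, mappings of this kind are encoded in the ontology and derived by a reasoner rather than hard-coded, which is what makes the expert knowledge reusable and transferable.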

Originality/value

This paper establishes an evaluation and decision-making framework for concrete surface quality, so as to improve the standardization and intelligence of surface defect inspection and potentially provide reusable knowledge for inspecting concrete surface quality. The research results can be used to inspect concrete surface quality, reduce the subjectivity of evaluation and improve inspection efficiency. In addition, the proposed framework enriches the application scenarios of ontology and computer vision and, to a certain extent, bridges the gap between the image features extracted by computer vision and the information that people obtain from images.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 10
Type: Research Article
ISSN: 0969-9988

Keywords
