Search results
1 – 8 of 8

Cemalettin Akdoğan, Tolga Özer and Yüksel Oğuz
Abstract
Purpose
Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of agricultural products, and pesticides can be used to improve the productivity of agricultural land. This study aims to make the spraying of cherry trees more effective and efficient with the designed artificial intelligence (AI)-based agricultural unmanned aerial vehicle (UAV).
Design/methodology/approach
Two approaches were adopted for the AI-based detection of cherry trees. In Approach 1, YOLOv5, YOLOv7 and YOLOv8 models were trained for 70, 100 and 150 epochs. In Approach 2, a new method is proposed to improve the performance metrics obtained in Approach 1: Gaussian, wavelet transform (WT) and histogram equalization (HE) preprocessing techniques were applied to the generated data set. The best-performing models from Approach 1 and Approach 2 were then used in a real-time test application with the developed agricultural UAV.
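The abstract names histogram equalization (HE) as one of the preprocessing steps applied to the data set. As a minimal NumPy sketch of HE on an 8-bit grayscale image (not the authors' code; the Gaussian and wavelet steps are omitted here):

```python
import numpy as np

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Spread an 8-bit grayscale image's intensities over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

# Low-contrast example: all intensities squeezed into [100, 120].
img = np.linspace(100, 120, 64, dtype=np.uint8).reshape(8, 8)
eq = histogram_equalization(img)
print(eq.min(), eq.max())  # 0 255
```

Such contrast stretching is a common way to make tree crowns stand out against soil before detector training.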
Findings
In Approach 1, the best F1 score was 98%, obtained with the YOLOv5s model at 100 epochs. In Approach 2, the best F1 score and mAP values were 98.6% and 98.9%, obtained with the YOLOv5m model at 150 epochs, an improvement of 0.6% in the F1 score. In real-time tests, the AI-based spraying drone system detected and sprayed cherry trees with an accuracy of 66% in Approach 1 and 77% in Approach 2. The results showed that pesticide use could be reduced by 53% and the energy consumption of the spraying system by 47%.
Originality/value
An original data set was created by designing an agricultural drone to detect and spray cherry trees using AI. YOLOv5, YOLOv7 and YOLOv8 models were used to detect and classify cherry trees. The results of the performance metrics of the models are compared. In Approach 2, a method including HE, Gaussian and WT is proposed, and the performance metrics are improved. The effect of the proposed method in a real-time experimental application is thoroughly analyzed.
Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono
Abstract
Purpose
The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.
Design/methodology/approach
The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce the criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach that allows computation to be performed on edge devices, limiting the movement of sensitive and identifiable data and eliminating the dependency on cloud computing at a central server.
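Federated averaging (FedAvg) is the standard way to combine models trained on edge devices without moving raw footage to a central server. A toy NumPy sketch of the aggregation step (an illustration of the general technique, not the paper's actual implementation):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: weight each client's model parameters by its
    local data size. Only parameters (never raw video) leave the edge."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical edge cameras, each holding a local parameter vector.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]  # frames processed locally by each camera
global_model = federated_average(clients, sizes)
print(global_model)  # [3.5 4.5]
```

In a full round, the aggregated parameters would be broadcast back to the cameras for the next local training pass.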
Findings
The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results discuss how the system can fully address the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras, and its compliance with responsible artificial intelligence.
Originality/value
The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on edge that extends from a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.
Pingyang Zheng, Shaohua Han, Dingqi Xue, Ling Fu and Bifeng Jiang
Abstract
Purpose
Because of its high deposition efficiency and low manufacturing cost compared with other additive technologies, robotic wire arc additive manufacturing (WAAM) technology has been widely applied for fabricating medium- to large-scale metallic components. The additive manufacturing (AM) method is a relatively complex process involving workpiece modeling, conversion of the model file, slicing, path planning and so on, after which the structure is formed by the accumulated weld beads. However, the poor forming accuracy of WAAM usually leads to severe dimensional deviation between the as-built and the predesigned structures. This paper aims to propose a visual sensing and deep learning-assisted WAAM method for fabricating metallic structures, to simplify the complex WAAM process and improve the forming accuracy.
Design/methodology/approach
Instead of slicing the workpiece model and generating all the welding torch paths before fabrication, this method adds a feature point regression branch to the YOLOv5 algorithm to detect feature points in images of the as-built structure. The coordinates of the feature points of each deposition layer are calculated automatically, and the welding torch trajectory for the next deposition layer is then generated from the feature point positions.
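The layer-by-layer idea can be illustrated with a minimal sketch: once the feature points of the current top layer have been detected and converted to workpiece coordinates, the next torch path is obtained by offsetting them upward by one layer height (hypothetical coordinates and layer height; the paper's actual trajectory generation is not reproduced here):

```python
import numpy as np

def next_layer_path(feature_points_mm: np.ndarray, layer_height_mm: float) -> np.ndarray:
    """Given (x, y, z) feature points detected on the as-built top layer,
    offset them by one layer height to get the next torch waypoints."""
    path = feature_points_mm.copy()
    path[:, 2] += layer_height_mm
    return path

# Hypothetical detected points along the current bead centerline (mm).
pts = np.array([[0.0, 0.0, 10.0], [5.0, 0.2, 10.1], [10.0, -0.1, 9.9]])
print(next_layer_path(pts, 2.0))
```

Because each path is derived from the measured geometry rather than a pre-sliced model, deposition errors do not accumulate from layer to layer.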
Findings
The mean average precision score of the modified YOLOv5 detector is 99.5%. Two types of overhanging structures were fabricated with the proposed method. The center contour errors between the actual and theoretical structures are 0.56 and 0.27 mm in the width direction, and 0.43 and 0.23 mm in the height direction, respectively.
Originality/value
The fabrication of circular overhanging structures without a complicated slicing strategy, turning table or other extra support verifies the feasibility of the robotic WAAM system with deep learning technology.
Hu Luo, Haobin Ruan and Dawei Tu
Abstract
Purpose
The purpose of this paper is to propose a complete set of methods for underwater target detection, because most underwater targets have small sample sizes and underwater images suffer from quality problems such as detail loss, low contrast and color distortion, and to verify the feasibility of the proposed methods through experiments.
Design/methodology/approach
An improved RGHS algorithm is proposed to enhance the original underwater target images. The YOLOv4 deep learning network is then improved for detecting underwater small-sample targets by combining a traditional data expansion method with the Mosaic algorithm, and its feature extraction capability is expanded with a spatial pyramid pooling (SPP) module after each feature extraction layer to extract richer feature information.
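The core of an SPP module is pooling one feature map at several grid resolutions and concatenating the results into a fixed-length, multi-scale descriptor. A single-channel NumPy sketch of that pooling step (the paper's network-level wiring is not shown):

```python
import numpy as np

def spatial_pyramid_pool(feat: np.ndarray, levels=(1, 2, 4)) -> np.ndarray:
    """Max-pool a 2D feature map over 1x1, 2x2 and 4x4 grids and
    concatenate, yielding a fixed-length multi-scale descriptor."""
    h, w = feat.shape
    out = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                out.append(feat[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max())
    return np.array(out)

feat = np.arange(64, dtype=float).reshape(8, 8)
desc = spatial_pyramid_pool(feat)
print(desc.size)  # 1 + 4 + 16 = 21 pooled values
```

The descriptor length depends only on the pyramid levels, not on the input size, which is why SPP helps a detector mix receptive fields of different scales.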
Findings
The experimental results, using the official dataset, reveal a 3.5% increase in average detection accuracy for three types of underwater biological targets compared to the traditional YOLOv4 algorithm. In underwater robot application testing, the proposed method achieves an impressive 94.73% average detection accuracy for the three types of underwater biological targets.
Originality/value
Underwater target detection is an important task for underwater robot applications. However, most underwater targets have small sample sizes, and the detection of small-sample targets is a comprehensive problem because it is also affected by the quality of underwater images. This paper provides a complete set of methods to solve these problems, which is of great significance for the application of underwater robots.
Long Zhao, Xiaoye Liu, Linxiang Li, Run Guo and Yang Chen
Abstract
Purpose
To realize efficient, fast and safe robot search, this study proposes a belief criteria decision-making approach to solve the object search task when the object's location is uncertain.
Design/methodology/approach
The study formulates the robot search task as a partially observable Markov decision process, uses semantic information to evaluate the belief state and designs the belief criteria decision-making approach. A cost function that trades off belief state, path length and movement effort is modelled to select the next best location in path planning.
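As a rough illustration of such a trade-off, the next best location can be chosen by minimizing a weighted cost over candidate viewpoints (the weights, fields and candidates below are hypothetical, not the paper's model):

```python
import math

def next_best_location(candidates, robot_xy, w_belief=1.0, w_dist=0.5, w_effort=0.2):
    """Pick the candidate minimizing a cost that trades off the belief of
    finding the object against travel distance and movement effort."""
    def cost(c):
        dist = math.dist(robot_xy, c["xy"])
        return -w_belief * c["belief"] + w_dist * dist + w_effort * c["effort"]
    return min(candidates, key=cost)

candidates = [
    {"name": "kitchen", "xy": (4.0, 1.0), "belief": 0.7, "effort": 1.0},
    {"name": "hallway", "xy": (1.0, 0.5), "belief": 0.2, "effort": 0.5},
    {"name": "office",  "xy": (9.0, 6.0), "belief": 0.8, "effort": 2.0},
]
best = next_best_location(candidates, robot_xy=(0.0, 0.0))
print(best["name"])
```

Note how a nearby low-belief location can beat a distant high-belief one; the weights control how greedy the search is about belief versus travel cost.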
Findings
The semantic information is successfully modelled and propagated and can represent the belief of finding the object. The belief criteria decision-making (BCDM) approach is evaluated on both the Gazebo simulation platform and in physical experiments. Compared to greedy, uniform and random methods, the BCDM approach is superior in both path length and execution time.
Originality/value
The prior knowledge of the robot's working environment, especially semantic information, can be used in path planning to achieve efficient task execution in terms of path length and execution time. The modelling and updating of environment information points to a promising research direction toward more intelligent decision-making methods for the object search task.
Atefeh Hemmati, Mani Zarei and Amir Masoud Rahmani
Abstract
Purpose
The Internet of Vehicles (IoV) has emerged as a transformative paradigm for intelligent transportation systems, bringing with it both big data challenges and opportunities. With the growth of data-driven applications and advances in data analysis techniques, data-adaptive innovation is becoming an outstanding development direction for future IoV applications. Therefore, this paper aims to focus on big data in IoV and to provide an analysis of the current state of research.
Design/methodology/approach
This review paper uses a systematic literature review methodology, conducting a thorough search of academic databases to identify relevant scientific articles. From the primary articles found in the big-data-in-IoV domain, 45 research articles published from 2019 to 2023 were selected for detailed analysis.
Findings
This paper discovers the main applications, use cases and primary contexts considered for big data in IoV. Next, it documents challenges, opportunities, future research directions and open issues.
Research limitations/implications
This paper is based on academic articles published from 2019 to 2023. Therefore, scientific outputs published before 2019 are omitted.
Originality/value
This paper provides a thorough analysis of big data in IoV and considers distinct research questions corresponding to big data challenges and opportunities in IoV. It also provides valuable insights for researchers and practitioners in evolving this field by examining the existing fields and future directions for big data in the IoV ecosystem.
Abdul Hannan Qureshi, Wesam Salah Alaloul, Wong Kai Wing, Syed Saad, Khalid Mhmoud Alzubi and Muhammad Ali Musarat
Abstract
Purpose
Rebar is the prime component of reinforced concrete structures, and rebar monitoring is a time-consuming and technical job. With the emergence of the fourth industrial revolution, construction industry practices have evolved toward digitalization. Still, hesitation remains among stakeholders toward the adoption of advanced technologies, and one significant reason is the unavailability of knowledge frameworks and implementation guidelines. This study aims to investigate technical factors impacting automated monitoring of rebar for the understanding, confidence gain and effective implementation by construction industry stakeholders.
Design/methodology/approach
A structured study pipeline has been adopted, which includes a systematic literature collection, semistructured interviews, pilot survey, questionnaire survey and statistical analyses via merging two techniques, i.e. structural equation modeling and relative importance index.
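The relative importance index used in such survey analyses has a standard closed form, RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest possible rating and N the number of respondents. A small sketch with hypothetical ratings (not the study's data):

```python
def relative_importance_index(responses, max_rating=5):
    """RII = sum(W) / (A * N): ratings summed over respondents, divided by
    the highest rating times the number of respondents. Ranges in (0, 1]."""
    return sum(responses) / (max_rating * len(responses))

# Hypothetical 5-point Likert ratings for one factor from 8 respondents.
ratings = [5, 4, 4, 3, 5, 4, 5, 4]
print(round(relative_importance_index(ratings), 3))  # 0.85
```

Factors are then ranked by their RII values, which is how surveys of this kind order the contributing factors.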
Findings
The achieved model highlights “digital images” and “scanning” as two main categories being adopted for automated rebar monitoring. Moreover, “external influence”, “data-capturing”, “image quality”, and “environment” have been identified as the main factors under “digital images”. On the other hand, “object distance”, “rebar shape”, “occlusion” and “rebar spacing” have been highlighted as the main contributing factors under “scanning”.
Originality/value
The study provides a base guideline for the construction industry stakeholders to gain confidence in automated monitoring of rebar via vision-based technologies and effective implementation of the progress-monitoring processes. This study, via structured data collection, performed qualitative and quantitative analyses to investigate technical factors for effective rebar monitoring via vision-based technologies in the form of a mathematical model.
Matthew Peebles, Shen Hin Lim, Mike Duke, Benjamin Mcguinness and Chi Kit Au
Abstract
Purpose
Time of flight (ToF) imaging is a promising emerging technology for crop identification. This paper aims to present a localization system for identifying and localizing asparagus in the field based on point clouds from ToF imaging. Since semantics are not included in the point cloud, it contains geometric information not only of asparagus spears but also of other objects such as stones and weeds. An approach is therefore required for extracting the spear information so that a robotic system can be used for harvesting.
Design/methodology/approach
A real-time convolutional neural network (CNN)-based method is used for filtering the point cloud generated by a ToF camera, allowing subsequent processing methods to operate over smaller and more information-dense data sets, resulting in reduced processing time. The segmented point cloud can then be split into clusters of points representing each individual spear. Geometric filters are developed to eliminate the non-asparagus points in each cluster so that each spear can be modelled and localized. The spear information can then be used for harvesting decisions.
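One way such a geometric filter can work, sketched here under the assumption that a spear is roughly a thin cylinder (this is an illustration, not the authors' exact filters): fit the cluster's principal axis and drop points lying too far from it.

```python
import numpy as np

def filter_spear_points(cluster: np.ndarray, max_radius: float) -> np.ndarray:
    """Keep points close to the cluster's principal axis, assuming a spear
    is a thin cylinder; far-off points (leaves, stones) are dropped."""
    centroid = cluster.mean(axis=0)
    centered = cluster - centroid
    # Principal axis = direction of greatest spread (first right-singular vector).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Radial distance of each point from the axis line through the centroid.
    proj = centered @ axis
    radial = np.linalg.norm(centered - np.outer(proj, axis), axis=1)
    return cluster[radial <= max_radius]

# A synthetic vertical spear (metres) plus two off-axis outlier points.
z = np.linspace(0, 0.2, 20)
spear = np.column_stack([np.full_like(z, 0.01), np.full_like(z, -0.02), z])
outliers = np.array([[0.06, 0.05, 0.05], [-0.05, 0.06, 0.15]])
cloud = np.vstack([spear, outliers])
kept = filter_spear_points(cloud, max_radius=0.02)
print(len(kept))  # outliers removed, spear points retained
```

The surviving points can then be fitted with a line or cylinder model to give the spear's base position and inclination for the harvesting decision.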
Findings
The localization system is integrated into a robotic harvesting prototype, and several field trials have been conducted with satisfactory performance. Identifying a spear in the point cloud is the key to successful localization; segmentation and clustering of points into individual spears are the two major failure modes targeted for future improvement.
Originality/value
Most crop localizations in agricultural robotic applications using ToF imaging technology are implemented in a very controlled environment, such as a greenhouse. The target crop and the robotic system are stationary during the localization process. The novel proposed method for asparagus localization has been tested in outdoor farms and integrated with a robotic harvesting platform. Asparagus detection and localization are achieved in real time on a continuously moving robotic platform in a cluttered and unstructured environment.