Search results

1 – 10 of 147
Article
Publication date: 2 May 2024

Mikias Gugssa, Long Li, Lina Pu, Ali Gurbuz, Yu Luo and Jun Wang

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers’ safety. However…

Abstract

Purpose

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers’ safety. However, it is still challenging to implement automated safety monitoring methods in near real time or in a time-efficient manner in real construction practices. Therefore, this study developed a novel solution that improves time efficiency to achieve near-real-time safety glove detection while preserving data privacy.

Design/methodology/approach

The developed method comprises two primary components: (1) transfer learning methods to detect safety gloves and (2) edge computing to improve time efficiency and data privacy. To compare the developed edge computing-based method with the currently widely used cloud computing-based methods, a comprehensive comparative analysis was conducted from both the implementation and theory perspectives, providing insights into the developed approach’s performance.

Findings

Three DL models achieved mean average precision (mAP) scores ranging from 74.92% to 84.31% for safety glove detection. The other two methods, which combine object detection and classification, achieved an mAP of 89.91% for hand detection and 100% for glove classification. From both the implementation and theory perspectives, the edge computing-based method detected gloves faster than the cloud computing-based method; in the implementation tests, its detection latency was 36%–68% shorter. The findings highlight edge computing’s potential for near-real-time detection with improved data privacy.

Originality/value

This study implemented and evaluated DL-based safety monitoring methods on different computing infrastructures to investigate their time efficiency. This study contributes to existing knowledge by demonstrating how edge computing can be used with DL models (without sacrificing their performance) to improve PPE-glove monitoring in a time-efficient manner as well as maintain data privacy.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988


Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite…

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
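The tile-based detection loop described above can be sketched in outline. All names and sizes here are illustrative, and the scoring function is a stand-in for the pre-trained CNN, not the authors' model:

```python
def iter_tiles(width, height, tile_size):
    """Yield (x, y) origins of fixed-size tiles covering the image."""
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            yield x, y

def detect_mounds(image_size, tile_size, score_tile, threshold=0.6):
    """Return origins of tiles whose MOUND probability meets the threshold."""
    width, height = image_size
    return [(x, y)
            for x, y in iter_tiles(width, height, tile_size)
            if score_tile(x, y) >= threshold]  # the CNN would score the cutout here

# Toy scorer standing in for the CNN: confident at one location only.
hits = detect_mounds((512, 512), 128,
                     lambda x, y: 0.9 if (x, y) == (256, 128) else 0.1)
print(hits)  # [(256, 128)]
```

The 60% probability threshold mirrors the cutoff used when assessing results in the Findings section.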

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With an identification threshold of 60% probability, and using an approach in which the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, and true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection have grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 17 February 2022

Prajakta Thakare and Ravi Sankar V.

Agriculture is the backbone of many national economies, contributing a substantial share of economic output worldwide. Precision agriculture is essential in evaluating…

Abstract

Purpose

Agriculture is the backbone of many national economies, contributing a substantial share of economic output worldwide. Precision agriculture is essential for evaluating crop conditions with the aim of selecting the proper pesticides. Conventional pest detection methods are unstable and provide limited prediction accuracy. This paper aims to propose an automatic pest detection module for accurate pest detection using a hybrid optimization-controlled deep learning model.

Design/methodology/approach

The paper proposes an advanced pest detection strategy based on a deep learning model operating over a wireless sensor network (WSN) in agricultural fields. Initially, the WSN, consisting of a number of nodes and a sink node, is partitioned into clusters. Each cluster comprises a cluster head (CH) and a number of nodes; the CH transfers data to the sink node of the WSN and is selected using the fractional ant bee colony optimization (FABC) algorithm. The routing process is executed using the protruder optimization algorithm, which helps transfer image data to the sink node through the optimal CH. The sink node acts as the data aggregator, and the collected image data forms the input database to be processed to identify the type of pest in the agricultural field. The image data is pre-processed to remove artifacts, and the pre-processed image is then subjected to feature extraction, through which the significant local directional pattern, local binary pattern, local optimal-oriented pattern (LOOP) and local ternary pattern (LTP) features are extracted. The extracted features are then fed to a deep convolutional neural network (CNN) to detect the type of pest in the agricultural field. The weights of the deep CNN are tuned optimally using the proposed MFGHO optimization algorithm, which combines the characteristics of navigating search agents and swarming search agents.
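One of the texture descriptors named above, the local binary pattern (LBP), admits a compact sketch on a grayscale image stored as a 2D list. This is a simplified illustration of the descriptor itself, not the authors' implementation:

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code for pixel (r, c): each neighbour >= centre sets a bit."""
    centre = img[r][c]
    # Neighbours visited in a fixed clockwise order starting top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
# Neighbours 60, 90, 80, 70 are >= the centre 50, setting bits 3-6.
print(lbp_code(img, 1, 1))  # 120
```

A histogram of such codes over an image region forms the feature vector that would be fed, alongside the other descriptors, to the deep CNN.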

Findings

The analysis, using the Insect Identification from Habitus Images database and performance metrics such as accuracy, specificity and sensitivity, reveals the effectiveness of the proposed MFGHO-based deep CNN in detecting pests in crops. The analysis proves that the proposed classifier, using the FABC + protruder optimization-based data aggregation strategy, obtains an accuracy of 94.3482%, a sensitivity of 93.3247% and a specificity of 94.5263%, which is higher than existing methods.

Originality/value

The proposed MFGHO optimization-based deep CNN is used to detect pests in crop fields so that proper, cost-effective pesticides can be selected and production increased. The proposed MFGHO algorithm combines the characteristic features of navigating search agents and swarming search agents to facilitate optimal tuning of the hyperparameters of the deep CNN classifier for pest detection in crop fields.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531


Article
Publication date: 9 May 2024

Anna Korotysheva and Sergey Zhukov

This study aims to comprehensively address the challenge of delineating traffic scenarios in video footage captured by an embedded camera within an autonomous vehicle.

Abstract

Purpose

This study aims to comprehensively address the challenge of delineating traffic scenarios in video footage captured by an embedded camera within an autonomous vehicle.

Design/methodology/approach

This methodology involves systematically elucidating the traffic context by leveraging data from the object recognition subsystem embedded in vehicular road infrastructure. A knowledge base containing production rules and a logical inference mechanism was developed. These components enable real-time procedures for describing traffic situations.

Findings

The production rule system focuses on semantically modeling entities that are categorized as traffic lights and road signs. The effectiveness of the methodology was tested experimentally using diverse image datasets representing various meteorological conditions. A thorough analysis of the results was conducted, which opens avenues for future research.

Originality/value

Originality lies in the potential integration of the developed methodology into an autonomous vehicle’s control system, working alongside other procedures that analyze the current situation. These applications extend to driver assistance systems, harmonized with augmented reality technology, and enhance human decision-making processes.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427


Article
Publication date: 6 May 2024

Pablo Guillén, Hector Sarnago, Oscar Lucia and José M. Burdio

The purpose of this paper is to develop a load detection method for domestic induction cooktops. The solution aims to minimize its impact on the converter power transmission while…

Abstract

Purpose

The purpose of this paper is to develop a load detection method for domestic induction cooktops. The solution aims to minimize its impact on the converter power transmission while enabling the estimation of the equivalent electrical parameters of the load. This method is suitable for a multi-output resonant inverter topology with shared power devices.

Design/methodology/approach

The considered multi-output converter presents power devices that are shared between several loads. Thus, applying load detection methods from the literature requires a halt in the power transfer to ensure safe operation. The proposed method uses a complementary short-voltage pulse to excite the induction heating (IH) coil without stopping the power transfer to the remaining IH loads. From the current through the coil and the analytical equations, the equivalent inductance and resistance of the load are estimated. The precision of the method has been evaluated by simulation, and experimental results are provided.
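The estimation idea can be illustrated for a series R-L load excited by a voltage step V: the coil current follows i(t) = (V/R)(1 - exp(-t*R/L)), so R follows from the steady-state current and L from the time constant. The two-sample estimator and component values below are illustrative assumptions, not the authors' exact procedure:

```python
import math

def estimate_rl(v, i_steady, t1, i1):
    """Estimate (R, L) from the steady-state current and one early sample."""
    r = v / i_steady                          # from i(inf) = V/R
    tau = -t1 / math.log(1 - i1 / i_steady)   # time constant tau = L/R
    return r, r * tau

# Synthetic check: R = 5 ohm, L = 20 uH coil excited by a 10 V pulse.
R, L, v, t1 = 5.0, 20e-6, 10.0, 4e-6
i1 = (v / R) * (1 - math.exp(-t1 * R / L))    # sampled current at t1
r_est, l_est = estimate_rl(v, v / R, t1, i1)
print(round(r_est, 3), round(l_est * 1e6, 3))  # 5.0 20.0
```

Estimated R and L values falling outside the expected range would correspond to the no-load or non-suitable-load situations discussed in the Findings.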

Findings

The measurement of the current through the induction coil in response to a short-time single-pulse voltage variation provides enough information to estimate the load equivalent parameters, making it possible to differentiate between no-load, non-suitable IH load and suitable IH load situations.

Originality/value

The proposed method provides a solution for load detection without requiring additional circuitry. It aims for low power transmission to the load and ensures zero-voltage switching and reduced peak current even in no-load cases. Moreover, the proposed solution is extensible to less complex converters, such as the half bridge.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 8 April 2024

Matthew Peebles, Shen Hin Lim, Mike Duke, Benjamin Mcguinness and Chi Kit Au

Time of flight (ToF) imaging is a promising emerging technology for the purposes of crop identification. This paper aims to present a localization system for identifying and…

Abstract

Purpose

Time of flight (ToF) imaging is a promising emerging technology for the purposes of crop identification. This paper aims to present a localization system for identifying and localizing asparagus in the field based on point clouds from ToF imaging. Since the point cloud carries no semantics, it contains geometric information from objects other than asparagus spears, such as stones and weeds. An approach is required for extracting the spear information so that a robotic system can be used for harvesting.

Design/methodology/approach

A real-time convolutional neural network (CNN)-based method is used for filtering the point cloud generated by a ToF camera, allowing subsequent processing methods to operate over smaller and more information-dense data sets, resulting in reduced processing time. The segmented point cloud can then be split into clusters of points representing each individual spear. Geometric filters are developed to eliminate the non-asparagus points in each cluster so that each spear can be modelled and localized. The spear information can then be used for harvesting decisions.
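The clustering step can be illustrated with a greedy single-linkage grouping of segmented points into candidate spears. This is a stand-in for whatever clustering the authors use, and the distance threshold is an assumption:

```python
import math

def cluster_points(points, max_dist=0.02):
    """Greedy single-linkage clustering: points closer than max_dist merge."""
    clusters = []
    for p in points:
        hits = [c for c in clusters
                if any(math.dist(p, q) <= max_dist for q in c)]
        merged = [p]
        for c in hits:                 # merge every cluster the point touches
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters

# Two well-separated candidate spears, two points each (metres).
spear_a = [(0.0, 0.0, 0.00), (0.0, 0.0, 0.01)]
spear_b = [(0.5, 0.0, 0.00), (0.5, 0.0, 0.01)]
clusters = cluster_points(spear_a + spear_b)
print(len(clusters))  # 2
```

Each resulting cluster would then pass through the geometric filters before the spear is modelled and localized.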

Findings

The localization system is integrated into a robotic harvesting prototype system. Several field trials have been conducted with satisfactory performance. The identification of a spear from the point cloud is the key to successful localization. Segmentation and the clustering of points into individual spears are the two main failure modes targeted for future improvement.

Originality/value

Most crop localizations in agricultural robotic applications using ToF imaging technology are implemented in a very controlled environment, such as a greenhouse. The target crop and the robotic system are stationary during the localization process. The novel proposed method for asparagus localization has been tested in outdoor farms and integrated with a robotic harvesting platform. Asparagus detection and localization are achieved in real time on a continuously moving robotic platform in a cluttered and unstructured environment.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 30 April 2024

Jacqueline Humphries, Pepijn Van de Ven, Nehal Amer, Nitin Nandeshwar and Alan Ryan

Maintaining the safety of the human is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored…

Abstract

Purpose

Maintaining the safety of the human is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored using lasers. However, lasers cannot distinguish between human and non-human objects in the robot’s path, and stopping or slowing down the robot when non-human objects approach is unproductive. This research contribution addresses that inefficiency by showing how computer-vision techniques can be used instead of lasers, improving the robot’s up-time.

Design/methodology/approach

A computer-vision safety system is presented. Image segmentation, 3D point clouds, face recognition, hand gesture recognition, speed and trajectory tracking and a digital twin are used. Using speed and separation monitoring, the robot’s speed is controlled based on the nearest human’s location, accurate to their body shape. The computer-vision safety system is compared to a traditional laser measure. The system is evaluated in a controlled test and in the field.
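The speed-and-separation idea described above can be sketched as a simple distance-to-speed mapping. The thresholds and the linear ramp are illustrative assumptions, not the authors' tuning, and in the paper the human position comes from the vision pipeline rather than lasers:

```python
def speed_command(nearest_human_m, full_speed=1.0,
                  stop_dist=0.5, slow_dist=1.5):
    """Return a speed in [0, full_speed] from the separation distance."""
    if nearest_human_m <= stop_dist:
        return 0.0                       # protective stop
    if nearest_human_m >= slow_dist:
        return full_speed                # no human nearby, full speed
    # Linear ramp between the stop and slow distances.
    return full_speed * (nearest_human_m - stop_dist) / (slow_dist - stop_dist)

print(speed_command(0.3), speed_command(1.0), speed_command(2.0))  # 0.0 0.5 1.0
```

Because the vision system can tell humans from non-human objects, this mapping is applied only to detected humans, which is what preserves up-time relative to a laser curtain.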

Findings

Computer vision and lasers are shown to be equivalent by a measure of relationship and a measure of agreement. R² is given as 0.999983. The two methods systematically produce similar results, as the bias is close to zero, at 0.060 mm. Using Bland–Altman analysis, 95% of the differences lie within the limits of maximum acceptable difference.
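The Bland–Altman agreement check reported here computes the bias as the mean of the pairwise differences and the limits of agreement as bias ± 1.96 standard deviations. A minimal sketch on synthetic measurements (not the paper's data):

```python
import statistics

def bland_altman(a, b):
    """Return (bias, (lower, upper)) limits of agreement for paired data."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic paired distance measurements (mm).
laser  = [100.0, 200.0, 300.0, 400.0]
vision = [100.25, 199.75, 300.25, 399.75]
bias, (lo, hi) = bland_altman(vision, laser)
print(bias)  # 0.0
```

Agreement holds when 95% of the differences fall inside (lo, hi) and those limits sit within the maximum acceptable difference for the application.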

Originality/value

In this paper, an original model for future computer-vision safety systems is described that is equivalent to existing laser systems, identifies and adapts to particular humans and reduces the need to slow and stop systems, thereby improving efficiency. The implication is that computer vision can substitute for lasers and permit adaptive robotic control in human–robot collaboration systems.

Details

Technological Sustainability, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-1312


Article
Publication date: 12 April 2024

Ahmad Honarjoo and Ehsan Darvishan

This study aims to obtain methods for identifying and locating damage, a topic of long-standing interest in structural engineering. The cost of…

Abstract

Purpose

This study aims to obtain methods for identifying and locating damage, a topic of long-standing interest in structural engineering. The cost of repairing and rehabilitating massive bridges and buildings is very high, highlighting the need to monitor structures continuously. One way to track a structure's health is to check for cracks in the concrete. Meanwhile, current concrete crack detection methods involve complex and heavy computation.

Design/methodology/approach

This paper presents a new lightweight deep learning architecture for crack classification in concrete structures. The proposed architecture identifies and classifies cracks in less time and with higher accuracy than other traditional and well-established architectures for crack detection. A standard dataset was used to detect both two-class and multi-class cracks.

Findings

Results show that two-class images were recognized with 99.53% accuracy using the proposed method, and multi-class images were classified with 91% accuracy. The proposed architecture also has a lower execution time than other established deep learning architectures on the same hardware platform. The Adam optimizer performed better in this research than the other optimizers tested.

Originality/value

This paper presents a framework based on a lightweight convolutional neural network for nondestructive monitoring of structural health to optimize the calculation costs and reduce execution time in processing.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864


Article
Publication date: 29 March 2024

Pingyang Zheng, Shaohua Han, Dingqi Xue, Ling Fu and Bifeng Jiang

Because of the advantages of high deposition efficiency and low manufacturing cost compared with other additive technologies, robotic wire arc additive manufacturing (WAAM…

Abstract

Purpose

Because of the advantages of high deposition efficiency and low manufacturing cost compared with other additive technologies, robotic wire arc additive manufacturing (WAAM) technology has been widely applied for fabricating medium- to large-scale metallic components. Additive manufacturing (AM) is a relatively complex process, which involves workpiece modeling, conversion of the model file, slicing, path planning and so on. The structure is then formed by the accumulated weld bead. However, the poor forming accuracy of WAAM usually leads to severe dimensional deviation between the as-built and the predesigned structures. This paper aims to propose a visual sensing technology and deep learning–assisted WAAM method for fabricating metallic structures, to simplify the complex WAAM process and improve the forming accuracy.

Design/methodology/approach

Instead of slicing the workpiece model and generating all welding torch paths before fabrication, this method adds a feature point regression branch to the YOLOv5 algorithm to detect feature points in images of the as-built structure. The coordinates of the feature points of each deposition layer are calculated automatically, and the welding torch trajectory for the next deposition layer is then generated from the feature point positions.
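The path-generation step described above can be sketched by lifting the detected feature points by one layer height to form the next torch trajectory. Coordinates, units and the layer height are illustrative assumptions:

```python
def next_layer_path(feature_points, layer_height=2.0):
    """Offset detected (x, y, z) feature points upward by one layer height."""
    return [(x, y, z + layer_height) for x, y, z in feature_points]

# Feature points detected on the current deposition layer (mm, illustrative).
detected = [(0.0, 0.0, 10.0), (5.0, 0.0, 10.0), (10.0, 0.0, 10.0)]
print(next_layer_path(detected))
# [(0.0, 0.0, 12.0), (5.0, 0.0, 12.0), (10.0, 0.0, 12.0)]
```

In the actual system the (x, y) positions would also shift between layers to produce the overhanging geometry, with the detector correcting for deviations in the as-built bead.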

Findings

The mean average precision score of the modified YOLOv5 detector is 99.5%. Two types of overhanging structures have been fabricated by the proposed method. The center contour errors between the actual and theoretical structures are 0.56 and 0.27 mm in the width direction, and 0.43 and 0.23 mm in the height direction, respectively.

Originality/value

The fabrication of circular overhanging structures without a complicated slicing strategy, turntable or other extra support verified the feasibility of combining the robotic WAAM system with deep learning technology.

Details

Rapid Prototyping Journal, vol. 30 no. 4
Type: Research Article
ISSN: 1355-2546


Article
Publication date: 30 April 2024

Shiqing Wu, Jiahai Wang, Haibin Jiang and Weiye Xue

The purpose of this study is to explore a new assembly process planning and execution mode to realize rapid response, reduce the labor intensity of assembly workers and improve…

Abstract

Purpose

The purpose of this study is to explore a new assembly process planning and execution mode to realize rapid response, reduce the labor intensity of assembly workers and improve the assembly efficiency and quality.

Design/methodology/approach

Based on the related concepts of the digital twin, this paper studies product assembly planning in the digital space, process execution in the physical space and the interaction between the two. The assembly process planning is simulated and verified in the digital space to generate three-dimensional visual assembly process specification documents. The implementation of these documents in the physical space is monitored and fed back to revise the assembly process and improve assembly quality.

Findings

Digital twin technology enhances the quality and efficiency of the assembly process planning and execution system.

Originality/value

This paper provides a new perspective for assembly process planning and execution: the architecture, connections and data acquisition approaches of the digital twin-driven framework are proposed, which is of theoretical value. Moreover, a smart assembly workbench is developed and the specific image classification algorithms are presented in detail, which is of industrial application value.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

