Search results

1 – 10 of 426
Article
Publication date: 17 October 2022

Jiayue Zhao, Yunzhong Cao and Yuanzhi Xiang

Abstract

Purpose

The safety management of construction machines is of primary importance. Traditional safety monitoring and evaluation methods cannot adapt to complex construction environments, and monitoring methods based on sensor equipment are too costly. This paper therefore introduces computer vision and deep learning technologies and proposes the YOLOv5-FastPose (YFP) model, which realizes pose estimation of construction machines by improving the AlphaPose human pose estimation model.

Design/methodology/approach

The model introduces the object detection module YOLOv5m to improve recognition accuracy when detecting construction machines. Meanwhile, to better capture pose characteristics, the FastPose network, which optimizes feature extraction, is introduced into the Single-Machine Pose Estimation (SMPE) module of AlphaPose. The study combines the Alberta Construction Image Dataset (ACID) and the Construction Equipment Poses Dataset (CEPD), using data augmentation and the Labelme image annotation software, to build the object detection and pose estimation datasets used to train and test the YFP model.
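
As a rough illustration of the detect-then-estimate design described above, the sketch below chains a YOLOv5m detector with a per-crop pose network. It is a minimal sketch only: the crop padding and the pose_net callable are assumptions standing in for the paper's FastPose-based SMPE module, which is not reproduced here; only the YOLOv5m detection step uses a real, publicly available model.

```python
# Minimal sketch of a two-stage detect-then-pose pipeline in the spirit
# of YFP. pose_net is a hypothetical stand-in for the FastPose-based
# SMPE module described in the abstract.
import torch

detector = torch.hub.load('ultralytics/yolov5', 'yolov5m')  # COCO-pretrained

def estimate_machine_poses(image, pose_net, pad=0.1):
    """Detect construction machines, then estimate keypoints per crop."""
    results = detector(image)
    poses = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        # Pad the box slightly so booms/arms at the edge are kept.
        w, h = x2 - x1, y2 - y1
        x1, y1 = max(0.0, x1 - pad * w), max(0.0, y1 - pad * h)
        crop = image[int(y1):int(y2 + pad * h), int(x1):int(x2 + pad * w)]
        keypoints = pose_net(crop)  # hypothetical SMPE-style pose module
        poses.append((conf, (x1, y1), keypoints))  # offsets map crops back
    return poses
```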

Findings

The experimental results show that the improved YFP model achieves an average normalization error (NE) of 12.94 × 10⁻³, an average Percentage of Correct Keypoints (PCK) of 98.48% and an average Area Under the PCK Curve (AUC) of 37.50 × 10⁻³. Compared with existing methods, the model achieves higher accuracy in pose estimation of construction machines.
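
For readers unfamiliar with these metrics, the sketch below shows one common way PCK and its AUC are computed; the normalization by bounding-box diagonal and the threshold sweep are conventional assumptions, not the paper's exact protocol.

```python
import numpy as np

def pck(pred, gt, bbox_diag, alpha=0.05):
    """Percentage of Correct Keypoints: a keypoint counts as correct when
    its error, normalized by the object's bounding-box diagonal (the
    normalization error, NE), falls below the threshold alpha."""
    ne = np.linalg.norm(pred - gt, axis=-1) / bbox_diag
    return float((ne < alpha).mean())

pred = np.array([[10.0, 12.0], [48.0, 50.0]])  # toy predictions (pixels)
gt = np.array([[11.0, 12.0], [50.0, 50.0]])    # toy ground truth
print(pck(pred, gt, bbox_diag=100.0))          # -> 1.0 (both NE < 0.05)

# AUC is the area under the PCK curve as alpha sweeps a threshold range.
alphas = np.linspace(0.0, 0.1, 101)
auc = np.trapz([pck(pred, gt, 100.0, a) for a in alphas], alphas) / alphas[-1]
```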

Originality/value

This study extends and optimizes the human pose estimation model AlphaPose to make it suitable for construction machines, improving the performance of pose estimation for construction machines.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 3
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 30 April 2024

Jacqueline Humphries, Pepijn Van de Ven, Nehal Amer, Nitin Nandeshwar and Alan Ryan

Abstract

Purpose

Maintaining human safety is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored using lasers. However, lasers cannot distinguish between human and non-human objects in the robot's path, and stopping or slowing the robot when non-human objects approach is unproductive. This research contribution addresses that inefficiency by showing how computer-vision techniques can be used instead of lasers, thereby improving the robot's up-time.

Design/methodology/approach

A computer-vision safety system is presented that uses image segmentation, 3D point clouds, face recognition, hand gesture recognition, speed and trajectory tracking and a digital twin. Using speed-and-separation monitoring, the robot's speed is controlled based on the nearest human location, accurate to body shape. The computer-vision safety system is compared to a traditional laser measure, and the system is evaluated both in a controlled test and in the field.
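
The speed-and-separation idea reduces to scaling robot speed by the distance to the nearest detected human point; a minimal sketch follows, where the stop and full-speed distances are assumed values, not the authors' tuned parameters.

```python
import numpy as np

def speed_scale(human_points, robot_point, stop_dist=0.5, full_dist=2.0):
    """Scale robot speed in [0, 1] by the nearest human point (metres).
    Below stop_dist the robot halts; beyond full_dist it runs at full speed."""
    if len(human_points) == 0:
        return 1.0
    d = np.min(np.linalg.norm(np.asarray(human_points) - robot_point, axis=1))
    return float(np.clip((d - stop_dist) / (full_dist - stop_dist), 0.0, 1.0))

# e.g. a human 1.25 m away -> speed factor 0.5
print(speed_scale([[1.0, 0.75, 0.0]], np.zeros(3)))
```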

Findings

Computer vision and lasers are shown to be equivalent by a measure of relationship and a measure of agreement: R² is 0.999983, and the two methods systematically produce similar results, with a bias close to zero at 0.060 mm. Using Bland–Altman analysis, 95% of the differences lie within the limits of maximum acceptable difference.
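
The agreement statistics quoted above follow from a standard Bland–Altman computation; the sketch below reproduces the procedure on synthetic paired measurements (the study's real data are not reproduced here).

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Synthetic paired distances (mm) from a vision system and a laser.
rng = np.random.default_rng(0)
laser = rng.uniform(500, 3000, 200)
vision = laser + rng.normal(0.06, 0.5, 200)  # near-zero bias, small noise
print(bland_altman(vision, laser))
```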

Originality/value

In this paper an original model for future computer-vision safety systems is described that is equivalent to existing laser systems, identifies and adapts to particular humans, and reduces the need to slow and stop systems, thereby improving efficiency. The implication is that computer vision can substitute for lasers and permit adaptive robotic control in human–robot collaboration systems.

Details

Technological Sustainability, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-1312

Open Access
Article
Publication date: 22 August 2023

Mahesh Babu Purushothaman and Kasun Moolika Gedara

Abstract

Purpose

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program that correlates computer vision (recorded and live videos from mobile and embedded cameras) to aid manual-lifting human pose detection, analysis and training in the construction sector.

Design/methodology/approach

Using a pragmatic approach combined with a literature review, this study discusses the SVBM. The research method comprises a literature review followed by a pragmatic approach and lab validation of the acquired data. Adopting this practical approach, the authors developed the SVBM, an AI program that correlates computer vision from recorded and live videos captured with mobile and embedded cameras.

Findings

Results show that the SVBM observes the relevant events without additional attachments to the human body and compares them with a standard axis to identify abnormal postures using mobile and other cameras. Angles of critical nodal points are projected through human pose detection, with body-part movement angles calculated by a novel software program and mobile application. The SVBM captures and analyses data both in real time and offline using previously recorded videos, and is validated for program coding and repeatability of results.
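
The joint-angle step at the heart of such analysis is a small piece of geometry; the sketch below computes the angle at a joint from three detected 2D keypoints (the keypoint layout is an assumption, and the paper's own software is not reproduced).

```python
import numpy as np

def joint_angle(a, joint, b):
    """Angle (degrees) at `joint` between segments joint->a and joint->b,
    from 2D keypoints such as shoulder-elbow-wrist or hip-knee-ankle."""
    u = np.asarray(a, float) - joint
    v = np.asarray(b, float) - joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# e.g. hip at origin, torso straight up, thigh out to the side -> 90 degrees
print(joint_angle([0, 1], np.array([0, 0]), [1, 0]))
```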

Research limitations/implications

The literature review methodology has inherent limitations, such as not keeping pace with the most recent field knowledge; this is offset by restricting the review to the last two decades. The review may not have captured all published articles because of restricted database access and because searches were conducted only in English; the authors may also have missed fruitful articles in less popular journals. These limitations are acknowledged. The critical limitation, also acknowledged, is that trust, privacy and psychological issues are not addressed in the SVBM; however, the benefits of the SVBM naturally offset this limitation for practical adoption.

Practical implications

The theoretical and practical implications include customised, individualistic prediction and the prevention of most posture-related hazardous behaviours before a critical injury happens. The theoretical implications include mimicking the human pose and enabling lab-based analysis without attaching sensors that would naturally alter working poses. The SVBM would help researchers develop more accurate data and theoretical models closer to actual conditions.

Social implications

By using the SVBM, the likelihood of early detection and prevention of musculoskeletal disorders is high; the social implications include the benefits of a healthier society and a more health-conscious construction sector.

Originality/value

Human pose detection, especially joint-angle calculation in a work environment, is crucial to the early detection of musculoskeletal disorders. Conventional digital-technology-based methods to detect pose flaws focus on location information from wearables and laboratory-controlled motion sensors. For the first time, this paper presents novel computer vision (recorded and live videos using mobile and embedded cameras) and digital-image-based deep learning methods, requiring no attachment to the human body, for manual-handling pose detection and the analysis of angles, neckline and torso line in an actual construction work environment.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems, with the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality combines many modes, chosen from rhetorical aspects for their communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision-making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all the expert knowledge and skills required to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area, and multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is positioned to exploit the opportunities for human–machine teaming (HMT) exposure, while naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible means of communication that readily adapts to virtual training will tremendously enhance planning and mission rehearsal.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. The ability to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments, but at this time there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

Article
Publication date: 23 January 2024

Guoyang Wan, Yaocong Hu, Bingyou Liu, Shoujun Bai, Kaisheng Xing and Xiuwen Tao

Abstract

Purpose

Presently, 6 Degree of Freedom (6DOF) visual pose measurement methods enjoy popularity in the industrial sector. However, challenges persist in accurately measuring the visual pose of blank and rough metal casts. This paper therefore introduces a 6DOF pose measurement method utilizing stereo vision, aimed at the 6DOF pose measurement of blank and rough metal casts.

Design/methodology/approach

This paper studies the 6DOF pose measurement of metal casts from three aspects: sample enhancement for industrial objects, optimization of the detector and the attention mechanism. Virtual reality technology is used for sample enhancement of metal casts, which solves the problem of large-scale sample collection in industrial applications. The method also includes a novel deep learning detector that uses multiple key points on the object surface as regression targets to detect industrial objects with rotation characteristics. By introducing a mixed-paths attention module, the detection accuracy of the detector and the convergence speed of training are improved.
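
Once surface key points are detected, a 6DOF pose can be recovered from the resulting 2D–3D correspondences; the OpenCV sketch below illustrates that final step (the model points, camera matrix and detected pixels are illustrative values, and the paper's own detector and stereo refinement are not reproduced).

```python
import numpy as np
import cv2

# 3D key points on the cast's CAD model (object frame, metres) - illustrative.
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.08, 0],
                       [0, 0.08, 0], [0.05, 0.04, 0.03]], dtype=np.float64)
# Their detected 2D pixel locations in one rectified view - illustrative.
image_pts = np.array([[320, 240], [420, 238], [421, 318],
                      [318, 320], [370, 270]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix + translation = 6DOF pose
print(ok, tvec.ravel())
```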

Findings

The experimental results show that the proposed method detects metal casts with small size scaling and rotation characteristics more effectively than existing approaches.

Originality/value

A method for 6DOF pose measurement of industrial objects is proposed, which realizes the pose measurement and grasping of metal blanks and rough-machined casts by industrial robots.

Details

Sensor Review, vol. 44 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 April 2024

Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang

Abstract

Purpose

The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, the traditional means of monitoring using Electronic Total Station tools cannot realize real-time monitoring, and monitoring using motion sensors or GPS is cumbersome to use.

Design/methodology/approach

This study proposes a monitoring method based on a series of computer vision (CV) technologies that can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, three candidate CV algorithms were developed in a laboratory environment, and experimental tests were carried out on a bridge scale model to select the best-performing algorithm for rotation, velocity and inclination monitoring, respectively, as the final monitoring method. The selected method was then implemented to monitor an actual bridge during its swivel construction to verify its applicability.
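
One simple way to obtain rotation angle and angular velocity from video is to track a marked point on the rotating span relative to the pivot; a minimal sketch follows (the marker tracking itself is elided, and treating the pivot pixel as known is an assumption, not necessarily the algorithm the study selected).

```python
import numpy as np

def rotation_series(marker_px, pivot_px, fps):
    """Rotation angle (deg) and angular velocity (deg/s) of a tracked
    marker about a fixed pivot, from per-frame pixel coordinates."""
    m = np.asarray(marker_px, float) - np.asarray(pivot_px, float)
    theta = np.degrees(np.unwrap(np.arctan2(m[:, 1], m[:, 0])))
    theta -= theta[0]                 # angle swept since the first frame
    omega = np.gradient(theta) * fps  # finite-difference angular velocity
    return theta, omega
```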

Findings

In the laboratory study, the monitoring data measured with the selected algorithms were compared with those measured by an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, validating the accuracy of the proposed method. In the pilot application, the method was shown to be feasible on a real construction project.

Originality/value

The optimal algorithms for bridge swivel construction are identified in a well-controlled laboratory, and the proposed method is verified on an actual project. The proposed CV method is complementary to the use of Electronic Total Station tools, motion sensors and GPS for safety monitoring of the swivel construction of bridges, and it offers a possible approach that requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 23 January 2024

Wang Zhang, Lizhe Fan, Yanbin Guo, Weihua Liu and Chao Ding

Abstract

Purpose

The purpose of this study is to establish a method for accurately extracting torch and seam features, thereby improving the quality of narrow-gap welding. An adaptive deflection correction system based on passive light vision sensors was designed on the Halcon software platform from MVTec (Germany).

Design/methodology/approach

This paper proposes an adaptive correction system for welding guns and seams, divided into image calibration and feature extraction modules. In the image calibration module, the field-of-view distortion caused by the camera position is resolved using image calibration techniques. In the feature extraction module, clear features of the weld gun and weld seam are accurately extracted after processing with algorithms such as shock filtering, subpixel contours (XLD), Laplacian of Gaussian and sense-region operators. The gun and weld seam centers are then accurately fitted using least squares. After the deviation values are calculated, the error values are monitored and error correction is achieved via programmable logic controller (PLC) control. Finally, experimental verification and analysis of the tracking errors are carried out.
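
The deviation computation at the core of such a loop reduces to fitting the torch and seam center lines and differencing them; the sketch below uses a plain least-squares fit in numpy as a stand-in for the Halcon operators, and the PLC hand-off shown in the comment is hypothetical.

```python
import numpy as np

def fit_center_line(xs, ys):
    """Least-squares line y = a*x + b through extracted center points."""
    a, b = np.polyfit(xs, ys, 1)
    return a, b

def lateral_deviation(torch_pts, seam_pts, x_eval):
    """Signed torch-to-seam offset (same units as the points) at x_eval.
    torch_pts and seam_pts are (xs, ys) tuples of extracted points."""
    a1, b1 = fit_center_line(*torch_pts)
    a2, b2 = fit_center_line(*seam_pts)
    return (a1 * x_eval + b1) - (a2 * x_eval + b2)

# Hypothetical hand-off: the monitored deviation is sent to the PLC,
# which drives the correction actuator.
# plc.write_correction(lateral_deviation(torch_pts, seam_pts, x_eval))
```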

Findings

The results show that the system deals well with camera aberrations. Weld gun features are effectively and accurately identified, and the difference between a scratch and a weld is reliably distinguished. The system accurately detects the center features of the torch and weld and controls the correction error to within 0.3 mm.

Originality/value

An adaptive correction system based on a passive light vision sensor is designed that corrects the field-of-view distortion caused by the camera's position deviation. Differences between scratch and weld features are distinguished, and image features are effectively extracted. The final system weld error is controlled to within 0.3 mm.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 4 April 2024

Yanmin Zhou, Zheng Yan, Ye Yang, Zhipeng Wang, Ping Lu, Philip F. Yuan and Bin He

Abstract

Purpose

Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face increasingly complex environments, a sensing system with various types of sensors is essential for intelligent robots, and to mimic human-like abilities, sensors with capabilities similar to human perception are indispensable. However, most research has concentrated only on analyzing the literature on single-modal sensors and their robotic applications.

Design/methodology/approach

This study presents a systematic review of five bioinspired senses, including a brief introduction to multimodal sensing applications, and identifies current trends and future directions of the field, which may offer continuing insight.

Findings

This review shows that bioinspired sensors can enable robots to better understand the environment, and multiple sensor combinations can support the robot’s ability to behave intelligently.

Originality/value

The review starts with a brief survey of the biological sensing mechanisms of the five senses, followed by their bioinspired electronic counterparts. Their applications in robots are then reviewed as another emphasis, covering the main application scopes of localization and navigation, object identification, dexterous manipulation, compliant interaction and so on. Finally, the trends, difficulties and challenges of this research are discussed to help guide future work on intelligent robot sensors.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 23 November 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Abstract

Purpose

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.

Design/methodology/approach

A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and to obtain its dimensions, tip deflection and velocity. The finite element method (FEM) is used to model the dynamic behaviour of the flexible beam: the measured dimensions, tip deflection and velocity are fed to the FEM model to predict the maximum deflection, and the difference between the maximum deflection and the static deflection of the beam gives the maximum error. This maximum error drives the proposed predictive maximum-error-based second-stage controller, which sends the control signal for vibration suppression. The control signal, in the form of a trajectory, is communicated to the industrial robot controller, accommodating the various delays present in the system.
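
Schematically, the controller's core rule reduces to a few lines: the predicted maximum deflection minus the static deflection gives the maximum error, which drives the corrective command. The proportional form and gain below are assumptions; the paper's trajectory generation and delay handling are not reproduced.

```python
def second_stage_command(predicted_max_deflection, static_deflection, gain=1.0):
    """Predictive maximum-error rule (sketch): command a counter-motion
    proportional to the predicted overshoot beyond static deflection."""
    max_error = predicted_max_deflection - static_deflection
    return -gain * max_error  # assumed proportional action, sent as a trajectory
```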

Findings

The effectiveness and robustness of the proposed controller have been validated in simulation and in experimental implementation on an ABB (Asea Brown Boveri) IRB 1410 industrial robot with a standard low-frame-rate camera sensor. In the experiments, two metallic flexible beams of different dimensions with the same material properties were considered. The robot vision method measures the dimensions within an acceptable error limit, i.e. ±3%. The controller can suppress vibration amplitude by up to approximately 97% in an average time of 4.2 s and reduces the settling time by up to approximately 93% compared with the uncontrolled case. The vibration suppression performance is also compared with the results of a classical control method and with recent results available in the literature.

Originality/value

The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any reference such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and measure its dimensions while accommodating the perspective view; the developed robot vision-based controller works with an FEM model of the flexible beam to predict the tip position, helping it handle different dimensions and material types; an approach is proposed to handle the different types of delay that arise in implementation, for effective suppression of vibration; and the proposed method uses a low-frame-rate, low-cost camera for the second-stage controller, which does not interfere with the internal controller of the industrial robot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Book part
Publication date: 25 October 2023

Md Sakib Ullah Sourav, Huidong Wang, Mohammad Raziuddin Chowdhury and Rejwan Bin Sulaiman

Abstract

One of the most neglected sources of energy loss is streetlights that generate too much light in areas where it is not required. Energy waste has enormous economic and environmental effects. In addition, owing to their conventional manual operation, streetlights are frequently seen turned 'ON' during the day and 'OFF' in the evening, which is regrettable even in the twenty-first century. Resolving these issues requires automated streetlight control. This study aims to develop a novel streetlight control method that combines a smart transport monitoring system powered by computer vision with a closed-circuit television (CCTV) camera. Using semantic image segmentation of the CCTV video stream, the light-emitting diode (LED) streetlight automatically lights up with the appropriate brightness when pedestrians or vehicles are detected and dims in their absence. The model also distinguishes daylight from nighttime, making it feasible to automate turning the streetlight 'ON' and 'OFF' to reduce energy consumption costs. Geo-location sensor data could further be utilised to make more informed streetlight management decisions. To complete these tasks, a U-net model with a ResNet-34 backbone is trained, and the validity of the models is assessed using evaluation metrics. The suggested concept is straightforward, economical, energy-efficient, long-lasting and more resilient than conventional alternatives.
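
The segmentation backbone named above is available off the shelf; the sketch below wires up a U-net with a ResNet-34 encoder via the segmentation_models_pytorch package and attaches a toy brightness rule. The class layout and the dim level are assumptions, not the chapter's configuration.

```python
import torch
import segmentation_models_pytorch as smp

# U-net with a ResNet-34 encoder, as named in the abstract.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 classes=3)  # assumed classes: background/pedestrian/vehicle
model.eval()

def brightness_for(frame):
    """Full brightness if any pedestrian/vehicle pixels are detected, else dim.
    `frame` is a normalized 3xHxW tensor with H and W divisible by 32."""
    with torch.no_grad():
        mask = model(frame.unsqueeze(0)).argmax(dim=1)
    return 1.0 if (mask > 0).any() else 0.2  # the 20% dim level is assumed
```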

Details

Technology and Talent Strategies for Sustainable Smart Cities
Type: Book
ISBN: 978-1-83753-023-6
