Search results

1 – 10 of over 85,000
Article
Publication date: 23 November 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging…

Abstract

Purpose

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.

Design/methodology/approach

A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and to obtain its dimensions, tip deflection and velocity. The finite element method (FEM) is used to model the dynamic behaviour of the flexible beam. The measured dimensions, tip deflection and velocity of the flexible beam are fed to the FEM model to predict the maximum deflection. The difference between the maximum deflection and the static deflection of the beam is used to compute the maximum error. Subsequently, the maximum error is used in the proposed predictive maximum error-based second-stage controller to send the control signal for vibration suppression. The control signal, in the form of a trajectory, is communicated to the industrial robot controller, which accommodates the various types of delays present in the system.
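As a rough illustration of the predictive maximum-error idea described above, the sketch below replaces the paper's FEM model with a much cruder assumption, treating the beam's first bending mode as a damped single-degree-of-freedom oscillator; the function names, gain and the example frequency and damping values are hypothetical, not taken from the study.

```python
import numpy as np

def predict_max_deflection(y, v, wn, zeta):
    """Predict the peak tip deflection from the measured deflection y (m) and
    velocity v (m/s), assuming the first bending mode behaves like a damped
    single-degree-of-freedom oscillator with natural frequency wn (rad/s)
    and damping ratio zeta."""
    wd = wn * np.sqrt(1.0 - zeta**2)                      # damped natural frequency
    return np.sqrt(y**2 + ((v + zeta * wn * y) / wd)**2)  # free-vibration amplitude

def second_stage_command(y, v, y_static, wn, zeta, gain=0.5):
    """Hypothetical predictive maximum-error controller: the maximum error is
    the predicted peak deflection minus the static deflection, and the
    correction added to the robot trajectory is proportional to it."""
    max_error = predict_max_deflection(y, v, wn, zeta) - y_static
    return -gain * max_error

# Example: tip measured 4 mm from the static position, moving at 30 mm/s,
# first natural frequency 2.4 Hz, 1% damping (all illustrative values)
print(second_stage_command(y=0.004, v=0.03, y_static=0.0,
                           wn=2 * np.pi * 2.4, zeta=0.01))
```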

Findings

The effectiveness and robustness of the proposed controller have been validated through simulation and experimental implementation on an Asea Brown Boveri (ABB) IRB 1410 industrial robot with a standard low frame rate camera sensor. In this experiment, two metallic flexible beams of different dimensions with the same material properties have been considered. The robot vision method measures the dimensions within an acceptable error limit, i.e. ±3%. The controller can suppress the vibration amplitude by approximately 97% in an average time of 4.2 s and reduces the stability time by approximately 93% compared with the case without control. The vibration suppression performance is also compared with the results of a classical control method and some recent results available in the literature.

Originality/value

The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any external reference such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and measure its dimensions while accommodating the perspective view; the developed robot vision-based controller works along with the FEM model of the flexible beam to predict the tip position and helps in handling different dimensions and material types; an approach has been proposed to handle the different types of delays that are part of the implementation for effective suppression of vibration; and the proposed method uses a low frame rate, low-cost camera for the second-stage controller, which does not interfere with the internal controller of the industrial robot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Keywords

Robotic assembly; Vibration suppression; Second-stage controller; Camera calibration; Flexible beam; Robot vision

Open Access
Article
Publication date: 22 August 2023

Mahesh Babu Purushothaman and Kasun Moolika Gedara

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program to correlate the computer vision (recorded and live videos using mobile and…


Abstract

Purpose

This pragmatic research paper aims to unravel the smart vision-based method (SVBM), an AI program that correlates computer vision (recorded and live video from mobile and embedded cameras) to aid manual-lifting human pose detection, analysis and training in the construction sector.

Design/methodology/approach

Using a pragmatic approach combined with a literature review, this study discusses the SVBM. The research method comprises a literature review followed by a pragmatic approach and lab validation of the acquired data. Adopting this practical approach, the authors developed the SVBM, an AI program to correlate computer vision (recorded and live videos using mobile and embedded cameras).

Findings

Results show that SVBM observes the relevant events without additional attachments to the human body and compares them with the standard axis to identify abnormal postures using mobile and other cameras. Angles of critical nodal points are projected through human pose detection, and body part movement angles are calculated using a novel software program and mobile application. The SVBM can capture and analyse data in real time and offline using previously recorded videos, and has been validated for program coding and repeatability of results.
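The abstract does not disclose SVBM's implementation, but the joint-angle and body-line calculations it describes can be illustrated with a generic sketch like the one below, which works on 2D keypoints from any pose estimator; the keypoint coordinates and the threshold are invented for the example.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by keypoints a-b-c, e.g. the elbow
    angle from shoulder, elbow and wrist pixel coordinates."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def torso_inclination(hip, shoulder):
    """Torso-line inclination from the vertical axis (degrees); image y grows downward."""
    dx, dy = shoulder[0] - hip[0], hip[1] - shoulder[1]
    return abs(np.degrees(np.arctan2(dx, dy)))

# Example: flag a stooped lift when the torso leans more than 60 degrees
hip, shoulder = (310, 420), (260, 300)          # pixel coordinates (illustrative)
if torso_inclination(hip, shoulder) > 60:       # threshold is illustrative only
    print("abnormal lifting posture")
```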

Research limitations/implications

Limitations of the literature review methodology include not keeping pace with the most up-to-date field knowledge; this is offset by restricting the review to the last two decades. The review may also not have captured all published articles because of restricted database access and because the search was limited to English, and the authors may have omitted fruitful articles published in less popular journals. These limitations are acknowledged. The critical limitation, which the authors recognise, is that trust, privacy and psychological issues are not addressed by SVBM; however, the practical benefits of SVBM naturally offset this limitation for adoption.

Practical implications

The theoretical and practical implications include customised and individualistic prediction and the prevention of most posture-related hazardous behaviours before a critical injury happens. The theoretical implications include mimicking the human pose and lab-based analysis without attaching sensors that would naturally alter the working poses. SVBM would help researchers develop more accurate data and theoretical models that are closer to actual conditions.

Social implications

By using SVBM, the possibility of early detection and prevention of musculoskeletal disorders is high; the social implications include the benefits of a healthier society and a more health-conscious construction sector.

Originality/value

Human pose detection, especially joint angle calculation in a work environment, is crucial to the early detection of musculoskeletal disorders. Conventional digital technology-based methods to detect pose flaws focus on location information from wearables and laboratory-controlled motion sensors. For the first time, this paper presents novel computer vision (recorded and live videos using mobile and embedded cameras) and digital image-related deep learning methods, without attachment to the human body, for manual-handling pose detection and the analysis of angles, neckline and torso line in an actual construction work environment.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Keywords

Article
Publication date: 22 July 2022

Ying Tao Chai and Ting-Kwei Wang

Defects in concrete surfaces are inevitably recurring during construction, which needs to be checked and accepted during construction and completion. Traditional manual inspection…

Abstract

Purpose

Defects inevitably recur on concrete surfaces during construction and need to be checked and accepted during construction and at completion. Traditional manual inspection of surface defects requires inspectors to judge, evaluate and make decisions, which requires sufficient experience, is time-consuming and labor-intensive, and the expertise cannot be effectively preserved and transferred. In addition, the evaluation standards of different inspectors are not identical, which may cause discrepancies in inspection results. Although computer vision can achieve defect recognition, there is a gap between the low-level semantics acquired by computer vision and the high-level semantics that humans understand from images. Therefore, computer vision and ontology are combined to achieve intelligent evaluation and decision-making and to bridge the above gap.

Design/methodology/approach

Combining ontology and computer vision, this paper establishes an evaluation and decision-making framework for concrete surface quality. By establishing a concrete surface quality ontology model and a defect identification and quantification model, ontology reasoning technology is used to realize concrete surface quality evaluation and decision-making.

Findings

Computer vision can identify and quantify defects and obtain low-level image semantics, while ontology can structurally express expert knowledge in the field of defects. The proposed framework can automatically identify and quantify defects and infer the causes, responsibility, severity and repair methods of defects. Case analyses of various scenarios show that the proposed evaluation and decision-making framework is feasible.
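The paper's framework uses a formal ontology model and reasoning engine, which the abstract does not detail; the sketch below is only a hypothetical, rule-table stand-in showing how a quantified defect from the vision module might be mapped to a severity and a repair method, with all classes, limits and units invented for illustration.

```python
# Rule table: defect type -> list of (upper size limit, severity, repair method).
# Limits and units (crack width in mm, honeycomb area in cm^2) are illustrative.
RULES = {
    "crack": [
        (0.2, "minor", "surface sealing"),
        (1.0, "moderate", "epoxy injection"),
        (float("inf"), "severe", "structural assessment and repair"),
    ],
    "honeycomb": [
        (50.0, "moderate", "remove loose concrete and patch"),
        (float("inf"), "severe", "cut out and recast"),
    ],
}

def evaluate(defect_type: str, size: float):
    """Return (severity, repair method) for a defect identified and quantified
    by the computer-vision module."""
    for limit, severity, repair in RULES[defect_type]:
        if size <= limit:
            return severity, repair

print(evaluate("crack", 0.6))   # -> ('moderate', 'epoxy injection')
```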

Originality/value

This paper establishes an evaluation and decision-making framework for concrete surface quality so as to improve the standardization and intelligence of surface defect inspection and potentially provide reusable knowledge for inspecting concrete surface quality. The research results can be used to assess concrete surface quality, reduce the subjectivity of evaluation and improve inspection efficiency. In addition, the proposed framework enriches the application scenarios of ontology and computer vision and, to a certain extent, bridges the gap between the image features extracted by computer vision and the information that people obtain from images.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 10
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 1 March 1987

Bill Vogeley

Many industrial applications could benefit from line imaging‐based edge sensors rather than full‐scale vision systems, a leading specialist argues.

Abstract

Many industrial applications could benefit from line imaging‐based edge sensors rather than full‐scale vision systems, a leading specialist argues.

Details

Sensor Review, vol. 7 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 8 February 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Industrial robots are extensively deployed to perform repetitive and simple tasks at high speed to reduce production time and improve productivity. In most cases, a compliant…

Abstract

Purpose

Industrial robots are extensively deployed to perform repetitive and simple tasks at high speed to reduce production time and improve productivity. In most cases, a compliant gripper is used for assembly tasks such as peg-in-hole assembly. The compliant mechanism in the gripper introduces flexibility that may cause oscillation in the grasped object. Such a flexible gripper–object system can be considered an under-actuated object held by the gripper, and the oscillations can be attributed to transient disturbance of the robot itself. Commercially available robots do not have a control mechanism to reduce such induced vibration. Thus, this paper aims to propose a contactless vision-based approach for vibration suppression that uses a predictive vibrational amplitude error-based second-stage controller.

Design/methodology/approach

The proposed predictive vibrational amplitude error-based second-stage controller is a real-time vibration control strategy that uses the predicted error to estimate the second-stage controller output. Based on the controller output, input trajectories are estimated for the internal controller of the robot. The control strategy efficiently handles the system delay so that the control input trajectories are executed when the oscillating object is at an extreme position.
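The abstract states that the delay is handled so the correction is executed at an extreme of the oscillation, without giving the scheme; the sketch below shows one plausible way to do that, assuming the grasped object is approximated as an undamped oscillator and the total camera/processing/robot delay is known, with all numbers illustrative.

```python
import numpy as np

def time_to_next_extreme(y, v, wn):
    """Time (s) until the object next reaches an extreme position (zero velocity),
    treating the oscillation as y(t) = A*sin(wn*t + phi) with measured
    deflection y and velocity v."""
    phase = np.arctan2(y, v / wn)                  # current phase of the oscillation
    return ((np.pi / 2 - phase) % np.pi) / wn

def schedule_correction(y, v, wn, total_delay):
    """Return how long to wait before sending the correction trajectory so that,
    after the known total system delay, it is executed at an extreme position."""
    t_ext = time_to_next_extreme(y, v, wn)
    half_period = np.pi / wn
    while t_ext < total_delay:                     # too late for this extreme: target the next one
        t_ext += half_period
    return t_ext - total_delay

# Example: 2 Hz oscillation, 80 ms combined camera + processing + robot delay
print(schedule_correction(y=0.003, v=0.02, wn=2 * np.pi * 2.0, total_delay=0.08))
```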

Findings

The present controller works along with the internal controller of the robot, without any interruption, to suppress the residual vibration of the object. To demonstrate the robustness of the proposed controller, an experimental implementation on an Asea Brown Boveri (ABB) IRB 1410 industrial robot with a low frame rate camera has been carried out. In this experiment, two objects with a low (<2.38 Hz) and a high (>2.38 Hz) natural frequency have been considered. The proposed controller can suppress 95% of the vibration amplitude in less than 3 s and reduce the stability time by 90% for a peg-in-hole assembly task.

Originality/value

The present vibration control strategy uses a camera with a low frame rate (25 fps), and the delays are handled intelligently to favour suppression of high-frequency vibration. The mathematical model and the implemented second-stage controller suppress vibration without modifying the robot's dynamical model or its internal controller.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Abstract

Details

Rewriting Leadership with Narrative Intelligence: How Leaders Can Thrive in Complex, Confusing and Contradictory Times
Type: Book
ISBN: 978-1-78756-776-4

Article
Publication date: 28 March 2023

Cengiz Deniz

The aim of this study is to create a robust and simple collision avoidance approach based on quaternion algebra for vision-based pick and place applications in manufacturing…

Abstract

Purpose

The aim of this study is to create a robust and simple collision avoidance approach based on quaternion algebra for vision-based pick and place applications in manufacturing industries, specifically for use with industrial robots and collaborative robots (cobots).

Design/methodology/approach

In this study, an approach based on quaternion algebra is developed to prevent any collision or breakdown during the movements of industrial robots or cobots in pick and place applications that include a vision system. The algorithm, integrated into the control system, checks for collisions before the robot moves its end effector to the target position during the process flow. In addition, a hand–eye calibration method is presented to easily calibrate the camera and define the geometric relationships between the camera and the robot coordinate systems.
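The collision-check algorithm itself is not given in the abstract; the sketch below is a hypothetical pre-motion clearance test in the same spirit, rotating the gripper offset by the target orientation quaternion and checking the resulting tool centre point against obstacles modelled as spheres. All poses, offsets and radii are invented for the example.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def pose_is_clear(target_pos, target_quat, tool_offset, tool_radius, obstacles):
    """Check that the tool centre point at the commanded pose keeps a clearance
    from every obstacle sphere (centre, radius) before the move is issued."""
    tcp = np.asarray(target_pos) + quat_rotate(target_quat, np.asarray(tool_offset))
    return all(np.linalg.norm(tcp - np.asarray(c)) > r + tool_radius
               for c, r in obstacles)

# Example: check a pick pose against one fixture modelled as an 8 cm sphere
obstacles = [((0.50, 0.10, 0.20), 0.08)]
ok = pose_is_clear(target_pos=(0.45, 0.00, 0.30),
                   target_quat=(1.0, 0.0, 0.0, 0.0),   # identity orientation
                   tool_offset=(0.0, 0.0, 0.12), tool_radius=0.04,
                   obstacles=obstacles)
print("move allowed" if ok else "collision risk: abort")
```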

Findings

This approach, specifically designed for vision-based robot/cobot applications, can be used by developers and robot integrator companies to significantly reduce the application costs and project timeline of a pick and place robotic system installation. Furthermore, the approach ensures a safe, robust and highly efficient solution for vision-based robotics applications, making it suitable for various industries.

Originality/value

The algorithm for this approach, which can run on a robot controller or a programmable logic controller, has been tested in real time in vision-based robotics applications. It can be applied to both existing and new vision-based pick and place projects with industrial robots or collaborative robots with minimal effort, making it a cost-effective and efficient solution for various industries.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 26 October 2018

Biao Mei, Weidong Zhu and Yinglin Ke

Aircraft assembly demands high position accuracy of drilled fastener holes. Automated drilling is a key technology to fulfill the requirement. The purpose of the paper is to…


Abstract

Purpose

Aircraft assembly demands high position accuracy of drilled fastener holes, and automated drilling is a key technology for fulfilling this requirement. The purpose of the paper is to conduct positioning variation analysis and control for automated drilling to achieve high positioning accuracy.

Design/methodology/approach

The nominal and varied connective models of automated drilling are constructed for positioning variation analysis. The principle of a strategy for reducing positioning variation in drilling, which shortens the positioning variation chain with the aid of an industrial camera-based vision system, is explored. Moreover, other strategies for positioning variation control are developed based on mathematical analysis to further reduce the position errors of the drilled fastener holes.
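The paper's connective variation models are not reproduced in the abstract; the toy calculation below only illustrates why shortening the variation chain with a camera helps, using a root-sum-square stack-up of independent error sources with made-up 1-sigma values.

```python
import numpy as np

def accumulated_variation(sigmas_mm):
    """3-sigma positioning variation accumulated along a chain of independent
    error sources, each given as a 1-sigma value in mm (root-sum-square)."""
    return 3.0 * np.sqrt(np.sum(np.square(sigmas_mm)))

# Illustrative (made-up) 1-sigma contributions, in mm
full_chain  = [0.05, 0.04, 0.03, 0.03, 0.02]   # machine, fixture, part, datum, spindle
with_vision = [0.02, 0.03, 0.02]               # chain shortened: camera measures near the hole

print(round(accumulated_variation(full_chain), 3))    # ~0.238 mm
print(round(accumulated_variation(with_vision), 3))   # ~0.124 mm
```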

Findings

The propagation and accumulation of an automated drilling system's positioning variation are explored. The principle of reducing positioning variation in automated drilling using a monocular vision system is discussed from the viewpoint of the variation chain.

Practical implications

The strategies for reducing positioning variation, rooted in the constructed positioning variation models, have been applied to a machine-tool based automated drilling system. The system is developed for a wing assembly of an aircraft in the Aviation Industry Corporation of China.

Originality/value

The propagation, accumulation and control of positioning variation in automated drilling are comprehensively explored. On this basis, the positioning accuracy in automated drilling is controlled to below 0.13 mm, which meets the requirement for the assembly of the aircraft.

Article
Publication date: 21 August 2017

Yassine Bouteraa and Ismail Ben Abdallah

The idea is to exploit the natural stability and performance of the human arm during movement, execution and manipulation. The purpose of this paper is to remotely control a…

Abstract

Purpose

The idea is to exploit the natural stability and performance of the human arm during movement, execution and manipulation. The purpose of this paper is to remotely control a handling robot with a low cost but effective solution.

Design/methodology/approach

The developed approach is based on three different techniques to ensure movement and pattern recognition of the operator's arm as well as effective control of the object manipulation task. First, the methodology relies on Kinect-based gesture recognition of the operator's arm. However, a vision-based approach alone is not a suitable solution for hand posture recognition, mainly when the hand is occluded. The proposed approach therefore supplements the vision-based system with an electromyography (EMG)-based biofeedback system for posture recognition. Moreover, the approach adds force feedback to the vision-based gesture control and the EMG-based posture recognition to inform the operator of the real grasping state.

Findings

The main finding is a robust method for gesture-based control of a robot manipulator during movement, manipulation and grasping. The proposed approach uses a real-time gesture control technique based on a Kinect camera that can provide the exact position of each joint of the operator's arm. The developed solution also integrates EMG biofeedback and force feedback in its control loop. In addition, the authors propose a user-friendly human–machine interface (HMI) that allows the user to control a robotic arm in real time. The robust trajectory-tracking challenge has been solved by implementing a sliding mode controller, and a fuzzy logic controller has been implemented to manage the grasping task based on the EMG signal. Experimental results have shown the high efficiency of the proposed approach.
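The abstract mentions a fuzzy logic controller that manages grasping from the EMG signal but gives no rule base; the sketch below is a minimal, hypothetical version with three triangular membership functions and centre-of-gravity defuzzification, whose breakpoints and outputs are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grasp_command(emg_norm):
    """Map a normalised EMG activation level (0..1) to a gripper closing command
    (0 = open, 1 = fully closed) using three fuzzy rules."""
    rules = [
        (tri(emg_norm, -0.4, 0.0, 0.4), 0.0),   # low activation  -> keep open
        (tri(emg_norm,  0.2, 0.5, 0.8), 0.5),   # medium          -> half close
        (tri(emg_norm,  0.6, 1.0, 1.4), 1.0),   # high            -> full close
    ]
    weight_sum = sum(w for w, _ in rules) or 1.0
    return sum(w * out for w, out in rules) / weight_sum

print(grasp_command(0.65))   # ~0.6: firm but not fully closed grasp
```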

Research limitations/implications

There are some constraints when applying the proposed method, such as the sensitivity of the desired trajectory generated by the human arm to random and unwanted movements, which can damage the manipulated object during the teleoperation process. In such cases, operator skill is highly required.

Practical implications

The developed control approach can be used in all applications that require real-time human–robot cooperation.

Originality/value

The main advantage of the developed approach is that it benefits simultaneously from three different techniques: EMG biofeedback, a vision-based system and haptic feedback. Using only vision-based approaches for hand posture recognition is not effective, in particular when the hand is occluded; the recognition should therefore also rely on the biofeedback naturally generated by the muscles responsible for each posture. Moreover, using a force sensor in a closed-loop control scheme without operator intervention is ineffective in the special cases in which the manipulated objects vary over a wide range with different metallic characteristics. Therefore, the human-in-the-loop technique can imitate the natural human postures in the grasping task.

Details

Industrial Robot: An International Journal, vol. 44 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 26 October 2010

Alexander Kaiser and Birgit Fordinal

The purpose of this paper is to introduce a new type of ba, called “vocation ba” and to describe the main aspects of this type of ba as well as its methods.


Abstract

Purpose

The purpose of this paper is to introduce a new type of ba, called “vocation ba” and to describe the main aspects of this type of ba as well as its methods.

Design/methodology/approach

The paper reviews the literature in the field of self‐transcending knowledge and the concept of ba and shows the main aspects for the design of a new methodology and framework. Additionally it analyzes experiences with the new method from several case studies.

Findings

First, the concept of vocation ba describes a space at the individual level as well as at the collective level for the generation of self‐transcending knowledge. Second, the method of Vocation‐coachingWaVe is a helpful method within the vocation ba. The experiences with these two new concepts from several case studies are very encouraging.

Research limitations/implications

The number of case studies at the collective level is still limited, as the authors have been working with the method of Vocation‐coachingWaVe at the collective level for two years. At the moment, further research is being done in larger systems.

Practical implications

This study gives insight and information about the method of Vocation‐coachingWaVe and the concept of vocation ba.

Originality/value

The paper presents one of the few studies that theoretically and practically deal with the aspect of self‐transcending knowledge in the context of vision development processes and knowledge‐based management at the individual as well as the collective level. The method of Vocation‐coachingWaVe at the collective level is a continuous approach to a bottom‐up vision development process.

Details

Journal of Knowledge Management, vol. 14 no. 6
Type: Research Article
ISSN: 1367-3270

Keywords
