Search results

1 – 10 of over 4000
Article
Publication date: 23 August 2011

Shanchun Wei, Meng Kong, Tao Lin and Shanben Chen

Abstract

Purpose

This paper aims to develop a method to achieve automatic robotic welding and seam tracking, so that a three-dimensional weld seam could be tracked without teaching and good weld formation could be achieved.

Design/methodology/approach

An adaptive image processing method was used for various types of weld seams, and the relationship between welding height and arc signal was calibrated. Through decomposition and synthesis, a three-dimensional weld seam could be extracted and tracked well. Using a fuzzy controller, the workpiece was finally tracked precisely and in a timely manner without teaching.
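The abstract does not give the fuzzy rules; as a minimal sketch of how a fuzzy controller can map a seam-tracking error to a torch correction (the membership functions and output values below are hypothetical, not the paper's):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(error_mm):
    """Map lateral seam-tracking error (mm) to a correction step (mm)
    via three hypothetical fuzzy sets: Negative, Zero, Positive."""
    sets = {            # (a, b, c) of each triangular set
        "N": (-4.0, -2.0, 0.0),
        "Z": (-2.0,  0.0, 2.0),
        "P": ( 0.0,  2.0, 4.0),
    }
    outputs = {"N": -1.0, "Z": 0.0, "P": 1.0}   # crisp consequents (mm)
    weights = {k: tri(error_mm, *abc) for k, abc in sets.items()}
    total = sum(weights.values())
    if total == 0.0:                 # outside the universe: saturate
        return -1.0 if error_mm < 0 else 1.0
    # weighted-average (Sugeno-style) defuzzification
    return sum(weights[k] * outputs[k] for k in weights) / total
```

A mid-range error such as 1 mm blends the Zero and Positive sets and yields a proportionally smaller correction.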

Findings

Composite sensing technology, including arc and visual sensing, had obvious advantages. The image processing method could track a planar weld seam efficiently, while arc sensing could characterize welding height. Through the coupled control algorithm, arc sensing and visual sensing could be fused effectively.

Research limitations/implications

Coupling the sensor information more accurately and quickly was still one of the most important problems in composite sensing technology.

Practical implications

Composite sensing technology could reduce the cost of weld seam tracking by replacing expensive devices such as laser sensors. In this project, simulated scalloped-segment parts of a rocket bottom board were tracked. Once more adaptive algorithms are developed, more complicated practical workpieces could be handled in robotic welding, which promotes the application of industrial robots.

Originality/value

A useful method for tracking three-dimensional weld seams without teaching was developed. The whole adaptive image processing procedure was simple but efficient and robust. The coupled control strategy described could accomplish seam tracking through composite sensing technology.

Details

Industrial Robot: An International Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 7 March 2008

Benjamin T. Schmidt, Joseph M. Feduska, Ashley M. Witt and Bridget M. Deasy

Abstract

Purpose

The purpose of this paper is to focus on the advantages of a robotic time‐lapsed microscopic imaging system for tracking stem cells in in vitro biological assays which measure stem cell activities.

Design/methodology/approach

The unique aspects of the system include robotic movement of stem cell culture flasks which enables selection of a large number of regions of interest for data collection. Numerous locations of a cell culture flask can be explored and selected for time‐lapsed analysis. The system includes an environmentally controlled chamber to maintain experimental conditions including temperature, gas levels, and humidity, such that stem cells can be tracked by visible and epifluorescence imaging over extended periods of time.
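The system tracks individual cells across time-lapsed frames; a minimal sketch of frame-to-frame centroid association (the greedy nearest-neighbour matching and the `max_step` bound are illustrative assumptions, not the system's actual tracker):

```python
import math

def track_cells(prev_positions, next_positions, max_step=20.0):
    """Greedy nearest-neighbour association of cell centroids between
    two consecutive time-lapse frames. Positions are (x, y) pixel tuples;
    max_step is a hypothetical upper bound on per-frame cell movement."""
    links, used = {}, set()
    for i, p in enumerate(prev_positions):
        best, best_d = None, max_step
        for j, q in enumerate(next_positions):
            if j in used:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            used.add(best)
    return links   # maps index in frame t -> index in frame t+1
```

Cells that move farther than `max_step` between frames are left unmatched rather than mislinked.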

Findings

This is a unique system for both individual cell tracking and cell population tracking in real time with high-throughput experimental capability. In comparison to a conventional manual cell culture and assay approach, this system provides stem cell biologists with the ability to quantify numerous temporal changes in stem cell populations; it drastically reduces man-hours, consumes fewer laboratory resources and brings standardization to biological assays.

Research limitations/implications

Fundamental basic biology questions can be addressed using this approach.

Practical implications

Stem cells are often available only in small numbers – due both to their inherent low frequency in the post‐natal tissue as compared to somatic cells, and their slow growth rates. The unique capabilities of this robotic cell culture system allow for the study of cell populations which are few in number.

Originality/value

The robotic time‐lapsed imaging system is a novel approach to stem cell research.

Details

Industrial Robot: An International Journal, vol. 35 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 23 January 2009

Meng Kong and Shanben Chen

Abstract

Purpose

The purpose of this paper is to describe work aimed to control the Al alloy welding penetration through the passive vision for welding robot.

Design/methodology/approach

First, a passive vision system that can capture Al alloy welding images was established. Based on an analysis of the characteristics of the welding image, composite edge detectors were developed to recognize the shape of the weld seam and the weld pool. To realize automatic control of the Al alloy welding process, the relationship between the welding parameters and the quality of the weld appearance was established through randomized welding experiments. Wire feed was regulated by a PID controller that adjusts the wire feed rate according to the weld gap variation.
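As a sketch of the wire-feed control step described above, a textbook discrete PID loop might look like this (the gains, sampling period and variable names are illustrative, not the paper's calibrated values):

```python
class PID:
    """Discrete PID controller; gains here are illustrative, not the
    values used in the paper."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# wire feed rate correction from the measured gap deviation (mm)
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
gap_error = 0.5                       # measured gap minus nominal gap
correction = pid.update(gap_error)    # added to the nominal feed rate
```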

Findings

This paper finds that the passive vision system can capture clear weld seam and weld pool images simultaneously. The composite edge detector method can effectively and accurately recognize the weld seam edges. The wire feed rate controller enabled the welding robot to adjust the wire feed rate according to the gap variation.

Research limitations/implications

This system has been applied to the industrial welding robot production.

Originality/value

The weld seam and weld pool image can be simultaneously captured by the passive vision system. The composite edge detectors have been developed for the passive vision method. The controller has been set up for Al alloy welding process based on the neural network.

Details

Sensor Review, vol. 29 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 30 May 2008

V. Giuliani, B. de Witt, M. Salluzzi, R.J. Hugo and P. Gu

Abstract

Purpose

Particle velocity is a critical factor that can affect the deposition quality in manufacturing processes involving the use of a laser source and a powder‐particle delivery nozzle. The purpose of this paper is to propose a method to detect the speed and trajectory of particles during a laser deposition process.

Design/methodology/approach

A low‐power laser light sheet technique is used to illuminate particles emerging from a custom designed powder delivery nozzle. Light scattered by the particles is detected by a high‐speed camera. Image processing on the acquired images was performed using both edge detection and Hough transform algorithms.
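The Hough transform step can be illustrated with a minimal accumulator over (rho, theta) line parameters; this is a generic sketch of the standard algorithm applied to streak pixels, not the authors' implementation:

```python
import numpy as np

def hough_lines(points, img_shape, n_theta=180):
    """Minimal Hough transform for straight particle streaks.
    points: (row, col) coordinates of edge pixels; returns the
    (rho, theta) of the strongest line in the accumulator."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for r, c in points:
        # rho = x*cos(theta) + y*sin(theta), one vote per theta bin
        rhos = np.round(c * np.cos(thetas) + r * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[theta_idx]
```

Because each streak pixel votes along a sinusoid in (rho, theta) space, collinear pixels produce a sharp peak even when the streak is broken.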

Findings

The experimental data were analyzed and used to estimate particle velocity, trajectory and the velocity profile at the nozzle exit. The results have demonstrated that the particle trajectory remains linear between the nozzle exit and the deposition plate and that the particle velocity can be considered a constant.

Originality/value

The use of low-power laser light sheet illumination facilitates the detection of isolated particle streaks even under high powder flow rate conditions. Identification of particle streaks in three subsequent images ensures that particle velocity vectors are in the plane of illumination, and also offers the potential to evaluate both velocity and particle size in a single measurement based on the observed scattering characteristics. The method provides a simple, useful tool to investigate particle dynamics in rapid prototyping applications as well as other research fields involving powder delivery nozzles.

Details

Rapid Prototyping Journal, vol. 14 no. 3
Type: Research Article
ISSN: 1355-2546

Keywords

Article
Publication date: 16 April 2024

Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang

Abstract

Purpose

The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, the traditional means of monitoring using Electronic Total Station tools cannot realize real-time monitoring, and monitoring using motion sensors or GPS is cumbersome to use.

Design/methodology/approach

This study proposes a monitoring method based on a series of computer vision (CV) technologies, which can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, three CV algorithms were developed in a laboratory environment. Experimental tests were carried out on a bridge scale model to select the best-performing algorithms for rotation, velocity and inclination monitoring, respectively, as the final monitoring method. Then, the selected method was implemented to monitor an actual bridge during its swivel construction to verify its applicability.
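As a sketch of the rotation-angle step: given the pivot centre and one tracked marker in two frames, the rotation angle and average velocity follow from simple geometry (the coordinates and function names are illustrative, not the paper's algorithm):

```python
import math

def rotation_angle(center, marker_t0, marker_t1):
    """Rotation angle (degrees) of the bridge span between two frames,
    from the pivot centre and one tracked marker point, all in image
    pixel coordinates."""
    a0 = math.atan2(marker_t0[1] - center[1], marker_t0[0] - center[0])
    a1 = math.atan2(marker_t1[1] - center[1], marker_t1[0] - center[0])
    deg = math.degrees(a1 - a0)
    return (deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

def angular_velocity(angle_deg, dt_s):
    """Average rotation velocity in degrees per second."""
    return angle_deg / dt_s
```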

Findings

In the laboratory study, the monitoring data measured with the selected algorithms were compared with those measured by an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, validating the accuracy of the proposed method. In the pilot application, the method was shown to be feasible in a real construction setting.

Originality/value

The optimal algorithms for bridge swivel construction were identified in a well-controlled laboratory, and the proposed method was verified in an actual project. The proposed CV method is complementary to the use of Electronic Total Station tools, motion sensors and GPS for safety monitoring of bridge swivel construction, and it requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 23 November 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Abstract

Purpose

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.

Design/methodology/approach

A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and obtain the dimension, tip deflection as well as velocity of the same. To model the dynamic behaviour of the flexible beam, finite element method (FEM) is used. The measured dimensions, tip deflection and velocity of a flexible beam are fed to the FEM model to predict the maximum deflection. The difference between the maximum deflection and static deflection of the beam is used to compute the maximum error. Subsequently, the maximum error is used in the proposed predictive maximum error-based second-stage controller to send the control signal for vibration suppression. The control signal in form of trajectory is communicated to the industrial robot controller that accommodates various types of delays present in the system.
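The tip deflection and velocity measurements fed to the FEM model can be sketched as a pixel-to-millimetre conversion plus a finite difference (the scale factor and variable names are assumptions for illustration, not the paper's code):

```python
def tip_state(tip_px, base_px, px_per_mm, prev_deflection_mm, dt_s):
    """Tip deflection (mm) and velocity (mm/s) of the flexible beam from
    the virtual-marker pixel position of the tip and the undeflected
    base line, using the camera's pixel-to-millimetre scale."""
    deflection_mm = (tip_px - base_px) / px_per_mm
    # backward finite difference between consecutive frames
    velocity_mm_s = (deflection_mm - prev_deflection_mm) / dt_s
    return deflection_mm, velocity_mm_s
```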

Findings

The effectiveness and robustness of the proposed controller have been validated through simulation and experimental implementation on an Asea Brown Boveri IRB 1410 industrial robot with a standard low frame rate camera sensor. In this experiment, two metallic flexible beams of different dimensions with the same material properties were considered. The robot vision method measures the dimensions within an acceptable error limit, i.e. ±3%. The controller can suppress vibration amplitude by up to approximately 97% in an average time of 4.2 s and reduces the stabilization time by up to approximately 93% compared with the uncontrolled case. The vibration suppression performance is also compared with the results of a classical control method and with recent results from the literature.

Originality/value

The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any reference object such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and to measure its dimensions while accommodating the perspective view; the developed robot vision-based controller works with the FEM model of the flexible beam to predict the tip position, which helps in handling different dimensions and material types; an approach is proposed to handle the different types of delays involved in implementation for effective vibration suppression; and the proposed method uses a low frame rate, low-cost camera for the second-stage controller, which does not interfere with the internal controller of the industrial robot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 8 February 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Abstract

Purpose

Industrial robots are extensively deployed to perform repetitive and simple tasks at high speed to reduce production time and improve productivity. In most cases, a compliant gripper is used for assembly tasks such as peg-in-hole assembly. A compliant mechanism in the gripper introduces flexibility that may cause oscillation in the grasped object. Such a flexible gripper–object system can be considered as an under-actuated object held by the gripper and the oscillations can be attributed to transient disturbance of the robot itself. The commercially available robots do not have a control mechanism to reduce such induced vibration. Thus, this paper aims to propose a contactless vision-based approach for vibration suppression which uses a predictive vibrational amplitude error-based second-stage controller.

Design/methodology/approach

The proposed predictive vibrational amplitude error-based second-stage controller is a real-time vibration control strategy that uses predicted error to estimate the second-stage controller output. Based on controller output, input trajectories were estimated for the internal controller of the robot. The control strategy efficiently handles the system delay to execute the control input trajectories when the oscillating object is at an extreme position.

Findings

The present controller works alongside the internal controller of the robot, without interruption, to suppress the residual vibration of the object. To demonstrate the robustness of the proposed controller, an experimental implementation was carried out on an Asea Brown Boveri IRB 1410 industrial robot with a low frame rate camera. In this experiment, two objects were considered, one with a low (<2.38 Hz) and one with a high (>2.38 Hz) natural frequency. The proposed controller can suppress 95% of the vibration amplitude in less than 3 s and reduce the stabilization time by 90% for a peg-in-hole assembly task.

Originality/value

The present vibration control strategy uses a camera with a low frame rate (25 fps) and the delays are handled intelligently to favour suppression of high-frequency vibration. The mathematical model and the second-stage controller implemented suppress vibration without modifying the robot dynamical model and the internal controller.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 25 October 2021

Venkata Dasu Marri, Veera Narayana Reddy P. and Chandra Mohan Reddy S.

Abstract

Purpose

Image classification is a fundamental form of digital image processing in which pixels are labeled into one of the object classes present in the image. Multispectral image classification is a challenging task due to the complexities associated with images captured by satellites. Accurate image classification is highly essential in remote sensing applications; however, existing machine learning and deep learning-based classification methods have not provided the desired accuracy. The purpose of this paper is to classify the objects in satellite images with greater accuracy.

Design/methodology/approach

This paper proposes a deep learning-based automated method for classifying multispectral images. Data sets collected from public databases are first divided into a number of patches and their features are extracted. The features extracted from the patches are then concatenated before a classification method is used to classify the objects in the image.
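The divide-into-patches, extract-features, concatenate flow can be sketched as follows; the per-band patch mean is a toy stand-in for the paper's actual feature extractor:

```python
import numpy as np

def extract_patch_features(image, patch=8):
    """Split a multispectral image (H, W, bands) into non-overlapping
    patches, take the per-band mean of each patch as a toy feature, and
    concatenate the patch features into one vector for a classifier."""
    h, w, bands = image.shape
    feats = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            block = image[r:r + patch, c:c + patch, :]
            feats.append(block.mean(axis=(0, 1)))   # one value per band
    return np.concatenate(feats)
```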

Findings

The performance of the proposed modified velocity-based colliding bodies optimization method is compared with existing methods in terms of type-1 measures such as sensitivity, specificity, accuracy, negative predictive value, F1 score and Matthews correlation coefficient, and type-2 measures such as false discovery rate and false positive rate. The statistical results obtained from the proposed method show better performance than existing methods.
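The listed confusion-matrix measures can be computed directly from true/false positives and negatives; NPV is taken here as negative predictive value:

```python
import math

def classification_measures(tp, tn, fp, fn):
    """Confusion-matrix measures named in the abstract."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    npv = tn / (tn + fn)                     # negative predictive value
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    fdr = fp / (fp + tp)                     # false discovery rate
    fpr = fp / (fp + tn)                     # false positive rate
    return dict(sensitivity=sensitivity, specificity=specificity,
                accuracy=accuracy, npv=npv, f1=f1, mcc=mcc,
                fdr=fdr, fpr=fpr)
```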

Originality/value

In this work, multispectral image classification accuracy is improved with an optimization algorithm called modified velocity-based colliding bodies optimization.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 3 January 2017

Iryna Borshchova and Siu O’Young

Abstract

Purpose

The purpose of this paper is to develop a method for vision-based automatic landing of a multi-rotor unmanned aerial vehicle (UAV) on a moving platform. The landing system must be highly accurate and meet the size, weight and power restrictions of a small UAV.

Design/methodology/approach

The vision-based landing system consists of a pattern of red markers placed on a moving target, an image processing algorithm for pattern detection, and a servo control for tracking. The suggested approach uses color-based object detection and image-based visual servoing.
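A minimal sketch of the color-based detection step: threshold for strongly red pixels and take the centroid of the mask (the threshold values are illustrative, not the paper's calibrated ones):

```python
import numpy as np

def red_marker_centroid(rgb, r_min=150, dominance=50):
    """Detect red markers in an RGB frame by simple colour thresholding
    and return the centroid of the red mask in (row, col), or None if
    no red pixels are found."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # red channel strong AND clearly dominant over green and blue
    mask = (r > r_min) & (r - g > dominance) & (r - b > dominance)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()
```

The centroid error relative to the image centre can then drive the image-based visual servoing loop.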

Findings

The developed prototype system has demonstrated the capability of landing within 25 cm of the desired point of touchdown. This auto-landing system is small (100×100 mm), light-weight (100 g), and consumes little power (under 2 W).

Originality/value

The novelty and the main contribution of the suggested approach lie in the creative combination of work in two fields, image processing and controls, as applied to UAV landing. The developed image processing algorithm has low complexity compared to other known methods, which allows its implementation on general-purpose low-cost hardware. The theoretical design has been verified systematically via simulations and then via outdoor field tests.

Details

International Journal of Intelligent Unmanned Systems, vol. 5 no. 1
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 23 January 2009

Ruzairi Abdul Rahim, Chiam Kok Thiam, Jaysuman Pusppanathan and Yvette Shaan‐Li Susiapan

Abstract

Purpose

The purpose of this paper is to view the flow concentration of the flowing material in a pipeline conveyor.

Design/methodology/approach

Optical tomography provides a method to view the cross-sectional image of flowing materials in a pipeline conveyor. Important flow information such as the flow concentration profile, flow velocity and mass flow rate can be obtained without the need to invade the process vessel. Using a powerful computer together with an expensive data acquisition system (DAQ) as the processing device in optical tomography systems has long been the norm. However, advancements in silicon fabrication technology now allow powerful digital signal processors (DSPs) to be fabricated at reasonable cost, so the technology can be applied in an optical tomography system to reduce or even eliminate the need for a personal computer and the DAQ. The DSP system was customized to control the data acquisition of 16 × 16 optical sensors (arranged in orthogonal projection) and 23 × 23 optical sensors (arranged in rectilinear projections). The data collected were used to reconstruct the cross-sectional image of flowing materials inside the pipeline. In the developed system, the accuracy of the image reconstruction was increased by 12.5 per cent using a new hybrid image reconstruction algorithm.
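The principle of reconstructing a cross-sectional image from two orthogonal projections can be sketched with plain linear back projection; the paper's hybrid algorithm is more sophisticated than this toy version:

```python
import numpy as np

def backproject(row_sums, col_sums):
    """Linear back projection from two orthogonal projections (row and
    column sums of the true concentration map). Each pixel receives the
    sum of the two projections passing through it; the result is then
    normalized so the map sums to one."""
    img = np.add.outer(row_sums, col_sums).astype(float)
    return img / img.sum() if img.sum() else img
```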

Findings

The results proved that the data acquisition system and image reconstruction algorithm are capable of acquiring accurate data and reconstructing cross-sectional images with only small errors compared to the expected measurements.

Originality/value

The DSP system was customized to control the data acquisition of 16 × 16 optical sensors (arranged in orthogonal projection) and 23 × 23 optical sensors (arranged in rectilinear projections).

Details

Sensor Review, vol. 29 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords
