Search results

1 – 10 of 858
Article
Publication date: 16 April 2018

Hanieh Deilamsalehy and Timothy C. Havens

Abstract

Purpose

Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method to fuse the information from several available sensors in order to improve the estimated pose from any individual sensor and calculate a more accurate pose for the moving platform.

Design/methodology/approach

Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its location is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, have been widely used for 3D localization. Yet other sensors, like 2D Light Detection And Ranging (LiDAR), can give a very precise estimation in a 2D plane but are not employed for 3D estimation, since the sensor cannot observe the full 6DOF. However, in some applications the robot moves almost on a plane for a considerable part of the interval between two sensor readings; e.g., a ground vehicle moving on a flat surface or a drone flying at an almost constant altitude to collect visual data. In this paper a novel method using a “fuzzy inference system” is proposed that employs a 2D LiDAR in a 3D localization algorithm in order to improve the pose estimation accuracy.
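The core idea of weighting a 2D sensor inside an EKF can be sketched as follows. This is a toy illustration, not the authors' actual rule base or filter: every function name, membership function and threshold below is hypothetical.

```python
import math

def fuzzy_lidar_weight(roll_rate, pitch_rate, dz):
    """Toy fuzzy rule: the flatter the motion between two readings,
    the more the 2D LiDAR pose estimate is trusted (hypothetical
    membership functions, not the paper's rule base)."""
    # Membership of "planar motion" in [0, 1]: high when the platform
    # barely rotates out of plane and barely changes altitude.
    tilt = math.hypot(roll_rate, pitch_rate)
    m_tilt = max(0.0, 1.0 - tilt / 0.2)      # planar when tilt rate is small
    m_alt = max(0.0, 1.0 - abs(dz) / 0.05)   # planar when altitude drift < 5 cm
    return min(m_tilt, m_alt)                # fuzzy AND (min t-norm)

def lidar_measurement_variance(base_var, weight, inflate=1e3):
    """Scale the EKF measurement noise for the 2D LiDAR: near-planar
    motion keeps the variance small (sensor trusted); otherwise it is
    inflated so the fused pose falls back on the IMU/camera."""
    return base_var * (1.0 + (1.0 - weight) * inflate)
```

In a full EKF the scalar variance would be one block of the measurement covariance matrix; inflating it makes the filter effectively ignore the LiDAR whenever the planar-motion assumption breaks down.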

Findings

The method determines the trajectory of the robot and the sensor reliability between two readings and, based on this information, defines the weight of the 2D sensor in the final fused pose by adjusting the “extended Kalman filter” parameters. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Originality/value

To the best of the authors’ knowledge, this is the first time that a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any prior knowledge. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Details

International Journal of Intelligent Unmanned Systems, vol. 6 no. 2
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 1 March 2006

Mingjun Zhang, Weimin Tao, William Fisher and Tzyh‐Jong Tarn

Abstract

Purpose

For semiconductor and gene‐chip microarray fabrication, robots are widely used to handle workpieces. It is critical that robots can calibrate themselves regularly and estimate workpiece pose automatically. This paper proposes an industrial method for automatic robot calibration and workpiece pose estimation.

Design/methodology/approach

The methods have been implemented using an air‐pressure sensor and a laser sensor.

Findings

Experiments conducted in an industrial manufacturing environment show the efficiency of the methods.

Originality/value

The contribution of this paper consists of an industrial solution to automatic robot calibration and workpiece pose estimation for automatic semiconductor and gene‐chip microarray fabrication.

Details

Industrial Robot: An International Journal, vol. 33 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 September 2020

Farhad Shamsfakhr and Bahram Sadeghi Bigham

Abstract

Purpose

In this paper, an attempt has been made to develop an algorithm equipped with geometric pattern registration techniques to perform exact, robust and fast robot localization purely based on laser range data.

Design/methodology/approach

The expected pose of the robot on a pre-calculated map is represented as simulated sensor readings. To obtain the exact pose of the robot, both the real and the simulated laser range readings are segmented. Critical points on the two scan sets are extracted from the segmented range data, and the pose difference is then computed by matching similar parts of the scans and calculating the relative translation.
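The registration step described above, recovering the relative transform from matched critical points of two scans, can be illustrated with a closed-form 2D least-squares alignment. This is a generic sketch under the assumption of already-known point correspondences, not the paper's algorithm.

```python
import math

def relative_pose_2d(src, dst):
    """Estimate the rigid transform (dx, dy, dtheta) mapping matched
    critical points `src` onto `dst` (closed-form 2D least squares,
    a simplified stand-in for the scan-registration step)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Accumulate the 2D cross-covariance terms around the centroids.
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cx_s, ys - cy_s
        bx, by = xd - cx_d, yd - cy_d
        sxx += ax * bx + ay * by      # "cos" accumulator
        sxy += ax * by - ay * bx      # "sin" accumulator
    theta = math.atan2(sxy, sxx)
    # Translation that maps the rotated source centroid onto the target.
    dx = cx_d - (cx_s * math.cos(theta) - cy_s * math.sin(theta))
    dy = cy_d - (cx_s * math.sin(theta) + cy_s * math.cos(theta))
    return dx, dy, theta
```

Because the estimate is closed-form, its cost is linear in the number of matched points, which is the property the Findings section emphasizes.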

Findings

In contrast to other self-localization algorithms based on particle filters and scan matching, the proposed method, in common positioning scenarios, has a cost that is linear in the number of sensor particles, making it applicable to real-time, resource-limited embedded robots. The proposed method obtains a sensibly accurate estimate of the relative pose of the robot even when segments are non-occluded but only partially visible.

Originality/value

A comparison with state-of-the-art localization techniques has shown that the geometric scan-registration algorithm is superior to other localization methods based on scan matching in accuracy, processing speed and robustness to large positioning errors. The effectiveness of the proposed method has been demonstrated through a series of real-world experiments.

Details

Assembly Automation, vol. 40 no. 6
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 18 January 2019

Farhad Shamsfakhr, Bahram Sadeghi Bigham and Amirreza Mohammadi

Abstract

Purpose

Robot localization in dynamic, cluttered environments is a challenging problem because it is impractical to have enough knowledge to model the robot’s environment accurately. This study aims to develop a novel probabilistic method, equipped with function approximation techniques, that appropriately models the data distribution in Markov localization by using the maximum statistical power, thereby making a sensibly accurate estimation of the robot’s pose in extremely dynamic, cluttered indoor environments.

Design/methodology/approach

The parameter vector of the statistical model is in the form of positions of easily detectable artificial landmarks in omnidirectional images. First, using probabilistic principal component analysis, the most likely set of parameters of the environmental model is extracted from the sensor data set, which contains missing values. Next, these parameters are used to approximate a probability density function, via support vector regression, that can calculate the robot’s pose vector in each state of the Markov localization. Finally, using this density function, a good approximation of the conditional density associated with the observation model is made, which leads to a sensibly accurate estimation of the robot’s pose in extremely dynamic, cluttered indoor environments.
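The Markov-localization loop into which such an approximated observation density plugs can be sketched for a discrete state space. Here `obs_density` is a hypothetical stand-in for the SVR-approximated observation model; the function names are illustrative only.

```python
def markov_update(belief, motion_model, obs_density, observation):
    """One predict-correct step of discrete Markov localization.
    `motion_model(j, i)` gives the transition probability from state j
    to state i; `obs_density(i, z)` is the (approximated) likelihood of
    observation z in state i."""
    n = len(belief)
    # Predict: push the belief through the motion model.
    predicted = [sum(belief[j] * motion_model(j, i) for j in range(n))
                 for i in range(n)]
    # Correct: weight by the observation density, then normalize.
    posterior = [p * obs_density(i, observation) for i, p in enumerate(predicted)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

The paper's contribution sits inside `obs_density`: approximating it well (here, via SVR over landmark positions) is what sharpens the posterior at each step.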

Findings

The authors validate their method in an indoor office environment with 34 unique artificial landmarks. Further, they show that the accuracy remains high even when the dynamics of the environment increase significantly. They also show that, compared to appearance-based localization methods that rely on image pixels, the proposed localization strategy is superior in terms of accuracy and speed of convergence to the global minimum.

Originality/value

By using easily detectable, rotation- and scale-invariant artificial landmarks, together with the maximum statistical power provided through the concept of missing data, the authors have succeeded in determining precise pose updates without requiring excessive computational resources to analyze the omnidirectional images. In addition, the proposed approach significantly reduces the risk of getting stuck in a local minimum by eliminating the possibility of having similar states.

Details

Engineering Computations, vol. 36 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 17 May 2022

Lin Li, Xi Chen and Tie Zhang

Abstract

Purpose

Many metal workpieces have little texture and are symmetric and reflective, which presents a challenge to existing pose estimation methods. The purpose of this paper is to propose a pose estimation method for grasping metal workpieces by industrial robots.

Design/methodology/approach

A dual-hypothesis robust point matching registration network (RPM-Net) is proposed to estimate pose from point clouds. The proposed method uses the Point Cloud Library (PCL) to segment the workpiece point cloud from the scene and a well-trained robust point matching registration network to estimate pose through dual-hypothesis point cloud registration.
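The dual-hypothesis selection itself reduces to running the registration from each initial hypothesis and keeping the better fit. A minimal sketch, with a hypothetical `register` callback standing in for RPM-Net inference:

```python
def best_hypothesis(register, cloud, hypotheses):
    """Dual-hypothesis selection: run the registration routine from each
    initial pose hypothesis and keep the result with the lowest residual.
    `register(cloud, hypothesis)` is assumed to return a (pose, residual)
    pair; this is a generic sketch, not the paper's network."""
    results = [register(cloud, h) for h in hypotheses]
    return min(results, key=lambda r: r[1])
```

For symmetric workpieces the two hypotheses would typically be the nominal pose and its symmetry-flipped counterpart, so the registration cannot silently converge to the wrong branch.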

Findings

In the experiments, a platform is built that contains a six-axis industrial robot and a binocular structured-light sensor. A data set containing three subsets is collected on this platform. After training on the simulated data set, the dual-hypothesis RPM-Net is tested on the experimental data set, and the success rates on the three real data sets are 94.0%, 92.0% and 96.0%, respectively.

Originality/value

The contributions are as follows: first, a dual-hypothesis RPM-Net is proposed that can estimate the pose of discrete, less-textured metal workpieces from point clouds; second, a method of generating training data sets using only CAD models and the visualization algorithm of the PCL is proposed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 June 2017

Chen-Chien Hsu, Cheng-Kai Yang, Yi-Hsing Chien, Yin-Tien Wang, Wei-Yen Wang and Chiang-Heng Chien

Abstract

Purpose

FastSLAM is a popular method to solve the problem of simultaneous localization and mapping (SLAM). However, when the number of landmarks present in real environments increases, there are excessive comparisons of the measurement with all the existing landmarks in each particle. As a result, the execution speed will be too slow to achieve the objective of real-time navigation. Thus, this paper aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.

Design/methodology/approach

To solve this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm in which odometer information is used to update the robot’s pose in each particle. When a measurement is most likely associated with a known landmark in the particle, the particle state is updated before the landmark estimates are updated.
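The update order described above can be sketched as one particle step. All callback signatures are hypothetical stand-ins, not the CESLAM implementation:

```python
def ceslam_particle_step(particle, odometry, measurement, landmarks,
                         likelihood, propose, update_landmark):
    """One CESLAM-style step for a single particle: refine the pose from
    odometry first, then update only the landmark the measurement most
    likely belongs to. `propose`, `likelihood` and `update_landmark` are
    hypothetical callbacks supplied by the surrounding filter."""
    particle = propose(particle, odometry)   # odometry-based pose update
    if landmarks:
        # Associate the measurement with its maximum-likelihood landmark.
        best = max(landmarks, key=lambda lm: likelihood(particle, lm, measurement))
        update_landmark(particle, best, measurement)
    return particle
```

The point of the ordering is that the landmark update sees the already-refined pose, rather than the stale pre-odometry one.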

Findings

Simulation results show that the proposed CESLAM can overcome the problem of heavy computational burden while improving the accuracy of localization and map building. To practically evaluate the performance of the proposed method, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) based on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system is capable of successfully estimating the robot pose and building the map with satisfactory accuracy.

Originality/value

The proposed CESLAM algorithm overcomes the time-consuming processing caused by unnecessary comparisons in existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by the CESLAM. Combining CESLAM and SURF, the authors establish a CEVSLAM that significantly improves estimation accuracy and computational efficiency. Practical experiments using a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.

Details

Engineering Computations, vol. 34 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 19 June 2017

Ramazan Havangi

Abstract

Purpose

Simultaneous localization and mapping (SLAM) is the problem of determining the pose (position and orientation) of an autonomous robot moving through an unknown environment. The classical FastSLAM is a well-known solution to SLAM. In FastSLAM, a particle filter is used for the robot pose estimation, and the Kalman filter (KF) is used for estimating the feature locations. However, the performance of the conventional FastSLAM is inconsistent. To tackle this problem, this study aims to propose a mutated FastSLAM (MFastSLAM) using soft computing.

Design/methodology/approach

The proposed method uses soft computing. In this approach, a particle swarm optimization (PSO) estimator is used for the robot’s pose estimation and an adaptive neuro-fuzzy unscented Kalman filter (ANFUKF) is used for estimating the feature locations. In the ANFUKF, an adaptive neuro-fuzzy inference system (ANFIS) supervises the performance of the unscented Kalman filter (UKF), with the aim of reducing the mismatch between the theoretical and actual covariance of the residual sequences to achieve better consistency.
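A generic particle swarm optimizer of the kind used for the pose estimation step might look like this. The coefficients and structure are textbook PSO, not the paper's tuned estimator, and the fitness function is supplied by the caller (e.g., a measurement-consistency score for a candidate pose).

```python
import random

def pso_pose(fitness, dim=3, n_particles=30, iters=100, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer minimizing `fitness` over a pose
    vector such as (x, y, theta). Standard inertia/cognitive/social
    coefficients; a generic sketch only."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (g[d] - pos[i][d]))        # social
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
                if f < fitness(g):
                    g = pos[i][:]
    return g
```

Unlike a particle filter, the swarm shares a global best, which is what lets PSO concentrate its particles on the most consistent pose.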

Findings

The simulation and experimental results indicate that the consistency and estimation accuracy of the proposed algorithm are superior to those of FastSLAM.

Originality/value

The main contribution of this paper is the introduction of MFastSLAM to solve the problems of FastSLAM.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 19 June 2017

Janusz Marian Bedkowski and Timo Röhling

Abstract

Purpose

This paper focuses on real-world mobile systems and thus proposes a relevant contribution to the special issue on “Real-world mobile robot systems”. This work on 3D laser semantic mobile mapping and particle filter localization, dedicated to robots patrolling urban sites, is elaborated with a focus on parallel computing for semantic mapping and particle filter localization. The real robotic application of patrolling urban sites is the goal; thus, it is shown that the crucial robotic components have reached a high Technology Readiness Level (TRL).

Design/methodology/approach

Three different robotic platforms equipped with different 3D laser measurement systems were compared. Each system provides different data in terms of measured distance, density of points and noise; thus, the influence of the data on the final semantic maps was compared. The practical problem is to use these semantic maps for robot localization; thus, the influence of the different maps on particle filter localization was elaborated. A new approach is proposed for particle filter localization based on 3D semantic information, and the behavior of the particle filter under different realistic conditions is examined. The process of using the proposed robotic components for patrolling an urban site, such as the robot checking for geometrical changes in the environment, is detailed.

Findings

The focus on real-world mobile systems requires a different point of view for scientific work. This study concentrates on robust and reliable solutions that can be integrated with real applications; thus, a new parallel computing approach for semantic mapping and particle filter localization is proposed. Based on the literature, semantic 3D particle filter localization has not yet been elaborated, so innovative solutions for this issue are proposed. The work builds on a semantic mapping framework the authors have recently published; for this reason, the authors claim that their applied studies during real-world trials with this mapping system are an added value relevant to this special issue.

Research limitations/implications

The main problem is the compromise between computing power and the energy consumed by heavy calculations; thus, the main focus is the use of the modern GPGPU NVIDIA Pascal parallel processor architecture. Recent advances in GPGPUs show great potential for mobile robotic applications, so this study focuses on increasing mapping and localization capabilities by improving the algorithms. The current limitation is the number of particles that can be processed by a single processor: 500 particles in real time. The implication is that multi-GPU architectures can be used to increase the number of processed particles; thus, further studies are required.

Practical implications

The research focus is on real-world mobile systems; thus, practical aspects of the work are crucial. The main practical application is semantic mapping, which can be used in many robotic applications. The authors claim that their particle filter localization is ready to be integrated with real robotic platforms using a modern 3D laser measurement system; for this reason, they claim that their system can improve existing autonomous robotic platforms. The proposed components can be used to detect geometrical changes in the scene, enabling many practical functionalities such as detecting cars or detecting opened/closed gates. […] These functionalities are crucial elements of the safety and security domain.

Social implications

Improving the safety and security domain is a crucial concern of modern society. Protecting critical infrastructure plays an important role; thus, introducing autonomous mobile platforms capable of supporting the human operators of safety and security systems could have a positive impact from many points of view.

Originality/value

This study elaborates a novel approach to particle filter localization based on 3D data and semantic mapping. This original work could have a great impact on the mobile robotics domain, and the authors claim that many algorithmic and implementation issues were solved in real-task experiments. The originality of this work is supported by the use of modern advanced robotic systems, a relevant set of technologies for properly evaluating the proposed approach. Such a combination of experimental hardware with original algorithms and implementation is definitely an added value.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 October 2016

Heber Sobreira, A. Paulo Moreira, Paulo Costa and José Lima

Abstract

Purpose

This paper aims to address a mobile robot localization system that avoids using a dedicated laser scanner, making it possible to reduce implementation costs and the robot’s size. The system has enough precision and robustness to meet the requirements of industrial environments.

Design/methodology/approach

Using an algorithm for artificial beacon detection combined with a Kalman filter and an outlier rejection method, it was possible to enhance the precision and robustness of the overall localization system.
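The outlier-rejection idea, gating a beacon measurement on its normalized innovation before the Kalman update, can be sketched in the scalar case. This is a generic illustration of innovation gating, not the authors' implementation; the gate value is a hypothetical chi-square-style threshold.

```python
def kf_update_with_gate(x, p, z, r, gate=9.0):
    """Scalar Kalman update with an innovation gate: a beacon reading
    whose squared innovation, normalized by the innovation variance,
    exceeds `gate` is rejected as an outlier and the state is left
    unchanged."""
    innovation = z - x
    s = p + r                      # innovation variance
    if innovation * innovation / s > gate:
        return x, p                # outlier: skip the update
    k = p / s                      # Kalman gain
    return x + k * innovation, (1.0 - k) * p
```

In the full system the same test runs per detected beacon with vector innovations and the Mahalanobis distance playing the role of the normalized innovation, so a single spurious reflection cannot corrupt the pose estimate.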

Findings

Usually, industrial automated guided vehicles feature two kinds of lasers: one for navigation, placed on top of the robot, and another for obstacle detection (security lasers). Recently, security lasers have extended their output data with obstacle distance (contours) and reflectivity. These new features made it possible to develop a novel localization system based on a security laser.

Research limitations/implications

Once the proposed methodology is completely validated, in the future, a scheme for global localization and failure detection should be addressed.

Practical implications

This paper presents a comparison between the presented approach and a commercial localization system for industry. The proposed algorithms were tested in an industrial application under realistic working conditions.

Social implications

The presented methodology represents a gain in the effective cost of the mobile robot platform, as it discards the need for a dedicated laser for localization purposes.

Originality/value

This paper presents a novel approach that benefits from the presence of a security laser on mobile robots (mandatory sensor when considering industrial applications), using it simultaneously with other sensors, not only to guarantee safety conditions during operation but also to locate the robot in the environment. This paper is also valuable because of the comparison made with a commercialized system, as well as the tests conducted in real industrial environments, which prove that the approach presented is suitable for working under these demanding conditions.

Details

Industrial Robot: An International Journal, vol. 43 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 April 2021

Boyoung Kim, Minyong Choi, Seung-Woo Son, Deokwon Yun and Sukjune Yoon

Abstract

Purpose

Many manufacturing sites require precision assembly. In particular, for products such as cell phones, assembly at the sub-mm scale is not easy, even for humans. In addition, the system should assemble each part with adequate force and avoid breaking the circuits with excessive force. The purpose of this study is to assemble high-precision components with relatively affordable vision devices compared to previous studies.

Design/methodology/approach

This paper presents a vision-force guided precise assembly system using a force sensor and two charge coupled device (CCD) cameras without an expensive 3-dimensional (3D) sensor or computer-aided design model. The system accurately estimates 6 degrees-of-freedom (DOF) poses from a 2D image in real time and assembles parts with the proper force.
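The force-guarded part of such an assembly loop might be sketched as follows. The robot-interface callbacks and the thresholds are hypothetical; the point is the control structure of descending with proper force while protecting the circuit from excessive force.

```python
def insert_until_seated(read_force, step_down, max_force, seat_force,
                        max_steps=1000):
    """Force-guarded insertion sketch: descend in small steps, abort if
    the contact force spikes (protecting the circuit), and report
    success once the part is seated with adequate force.
    `read_force` and `step_down` are hypothetical robot callbacks."""
    for _ in range(max_steps):
        f = read_force()
        if f > max_force:
            return False           # abort: excessive force would damage the part
        if f >= seat_force:
            return True            # seated with adequate contact force
        step_down()                # small downward motion step
    return False                   # never seated within the step budget
```

In the actual system the vision side continuously corrects the lateral pose from the two CCD images while this force loop governs the vertical insertion.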

Findings

In this experiment, three connectors are assembled on a printed circuit board. The system achieves accuracy within 1 mm and 1 degree of error, which shows that it is effective.

Originality/value

This is a new method for sub-mm assembly using only two CCD cameras and one force sensor.

Details

Assembly Automation, vol. 41 no. 2
Type: Research Article
ISSN: 0144-5154
