Search results

1 – 10 of 317
Article
Publication date: 27 April 2012

Qingxiao Yu, Can Yuan, Z. Fu and Yanzheng Zhao


Abstract

Purpose

Recently, service robots have been widely used in various fields. The purpose of this paper is to design a restaurant service robot that can provide basic services, such as ordering, fetching and delivering food, and settlement, for customers in a robot restaurant.

Design/methodology/approach

Based on the characteristics of wheeled mobile robots, a service robot with two manipulators is designed. Constrained by the degrees of freedom (DOF), a final positioning accuracy within ±3 cm must be guaranteed for the robot to successfully grasp the plate. A segmented positioning method is applied, balancing positioning cost against the accuracy required at each stage, and a shape‐based matching tracking method is adopted to navigate the robot to the object.
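
The staged switch described above can be sketched in a few lines; the thresholds and function names here are illustrative assumptions, not the authors' implementation:

```python
COARSE_TOL = 0.30   # metres: acceptable error while approaching (assumed)
FINE_TOL   = 0.03   # metres: +/-3 cm required for grasping (from the abstract)

def positioning_stage(distance_to_target):
    """Pick the positioning strategy for the current stage."""
    if distance_to_target > COARSE_TOL:
        return "coarse"   # cheap odometry/landmark-based navigation
    return "fine"         # shape-based visual tracking of the plate

def at_grasp_accuracy(error):
    """True when the remaining error permits grasping."""
    return abs(error) <= FINE_TOL

print(positioning_stage(2.0))   # far from the plate -> coarse
print(positioning_stage(0.1))   # near the plate -> fine
print(at_grasp_accuracy(0.02))  # within +/-3 cm -> True
```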

Findings

Experiments indicate that the service robot can successfully grasp the plate regardless of its initial position, and that the proposed algorithms estimate the robot's pose well and accurately evaluate localization performance.

Research limitations/implications

At present, the service robot can only operate in indoor environments with steady illumination.

Practical implications

The service robot is applicable to provide basic service for the customers in the robot restaurant.

Originality/value

The paper presents the concept of a restaurant service robot together with its localization and navigation algorithms. The robot provides its real‐time coordinates and reaches the object with ±2 cm positioning precision regardless of its initial position.

Details

Industrial Robot: An International Journal, vol. 39 no. 3
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 26 April 2013

Dominik Belter and Piotr Skrzypczynski


Abstract

Purpose

The purpose of this paper is to describe a novel application of the recently introduced concept from computer vision to self‐localization of a walking robot in unstructured environments. The technique described in this paper enables a walking robot with a monocular vision system (single camera) to obtain precise estimates of its pose with regard to the six degrees of freedom. This capability is essential in search and rescue missions in collapsed buildings, polluted industrial plants, etc.

Design/methodology/approach

The Parallel Tracking and Mapping (PTAM) algorithm and an Inertial Measurement Unit (IMU) are used to determine the 6‐d.o.f. pose of a walking robot. Bundle‐adjustment‐based tracking and structure reconstruction are applied to obtain precise camera poses from the monocular vision data. The inclination of the robot's platform is determined using the IMU. The self‐localization system is used together with an RRT‐based motion planner, which allows the robot to walk autonomously on rough, previously unknown terrain. The presented system operates on‐line on a real hexapod robot; the efficiency and precision of the proposed solution are demonstrated with experimental data.
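
A plausible reading of the vision/IMU split is a complementary fusion: trust the IMU for inclination (roll and pitch) and the PTAM pose for everything else. The gain and function names in this sketch are assumptions, not the authors' code:

```python
ALPHA = 0.98  # assumed weight on the drift-free IMU inclination estimate

def fuse_pose(ptam_pose, imu_roll, imu_pitch, alpha=ALPHA):
    """ptam_pose = (x, y, z, roll, pitch, yaw) from bundle adjustment.

    Position and yaw come from vision; roll and pitch are pulled
    towards the IMU reading by a complementary filter.
    """
    x, y, z, v_roll, v_pitch, yaw = ptam_pose
    roll = alpha * imu_roll + (1 - alpha) * v_roll
    pitch = alpha * imu_pitch + (1 - alpha) * v_pitch
    return (x, y, z, roll, pitch, yaw)

pose = fuse_pose((1.0, 0.5, 0.2, 0.10, 0.05, 1.57),
                 imu_roll=0.12, imu_pitch=0.04)
print(round(pose[3], 4))  # fused roll, dominated by the IMU reading
```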

Findings

The PTAM‐based self‐localization system enables the robot to walk autonomously on rough terrain. The software operates on‐line and can be implemented on the robot's on‐board PC. Results of the experiments show that the position error is small enough to allow robust elevation mapping using the laser scanner. In spite of the unavoidable feet slippages, the walking robot which uses PTAM for self‐localization can precisely estimate its position and successfully recover from motion execution errors.

Research limitations/implications

So far, the presented self‐localization system has been tested only in limited‐scale indoor experiments. Experiments with more realistic outdoor scenarios are planned as further work.

Practical implications

Precise self‐localization may be one of the most important factors enabling the use of walking robots in practical USAR missions. The results of research on precise self‐localization in 6‐d.o.f. may also be useful for autonomous robots in other application areas, such as construction, agriculture and the military.

Originality/value

The vision‐based self‐localization algorithm used in the presented research is not new, but the contribution lies in its implementation/integration on a walking robot, and experimental evaluation in the demanding problem of precise self‐localization in rough terrain.

Details

Industrial Robot: An International Journal, vol. 40 no. 3
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 1 April 2014

Annette Mossel, Michael Leichtfried, Christoph Kaltenriner and Hannes Kaufmann


Abstract

Purpose

The authors present a low-cost unmanned aerial vehicle (UAV) for autonomous flight and navigation in GPS-denied environments that uses an off-the-shelf smartphone as its core on-board processing unit. The approach is therefore independent of additional ground hardware, and the UAV's core unit can easily be replaced with more powerful hardware, simplifying setup updates as well as maintenance.

Design/methodology/approach

The UAV is able to map, locate and navigate in an unknown indoor environment fusing vision-based tracking with inertial and attitude measurements. The authors choose an algorithmic approach for mapping and localization that does not require GPS coverage of the target area; therefore autonomous indoor navigation is made possible.

Findings

The authors demonstrate the UAV's capabilities of mapping, localization and navigation in an unknown 2D marker environment. The promising results enable future research on 3D self-localization and dense mapping using mobile hardware as the only on-board processing unit.

Research limitations/implications

The proposed autonomous flight processing pipeline robustly tracks and maps planar markers, but these markers need to be distributed throughout the tracking volume.

Practical implications

Due to the cost-effective platform and the flexibility of the software architecture, the approach can play an important role in areas with poor infrastructure (e.g. developing countries) to autonomously perform tasks for search and rescue, inspection and measurements.

Originality/value

The authors provide a low-cost off-the-shelf flight platform that only requires a commercially available mobile device as core processing unit for autonomous flight in GPS-denied areas.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 1
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 11 January 2023

Yongyao Li, Guanyu Ding, Chao Li, Sen Wang, Qinglei Zhao and Qi Song


Abstract

Purpose

This paper presents a comprehensive pallet-picking approach for forklift robots, comprising a pallet identification and localization algorithm (PILA) to detect and locate the pallet and a vehicle alignment algorithm (VAA) to align the vehicle fork arms with the targeted pallet.

Design/methodology/approach

In contrast to purely vision-based methods or point-cloud-only strategies, a low-cost RGB-D camera is used, so PILA exploits both RGB and depth data to quickly and precisely recognize and localize the pallet. The developed method guarantees a high identification rate from the RGB images and more precise 3D localization than depth data alone. A deep neural network (DNN) is applied to detect and locate the pallet in the RGB images. Specifically, the point cloud data is correlated with the labeled region of interest (RoI) in the RGB images, and the pallet's front-face plane is extracted from the point cloud. Furthermore, PILA introduces a universal geometrical rule that identifies the pallet's center as a “T-shape” without depending on specific pallet types. Finally, VAA is proposed to implement the vehicle-approach and pallet-picking operations as a proof of concept to test PILA's performance.
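
One way to picture the RoI-to-point-cloud correlation is to crop the cloud to the detected region, keep the points at the dominant front-face depth and take their centroid. This is a simplified sketch under assumed data layouts, not PILA itself:

```python
from statistics import median, mean

def pallet_front_face(points, roi, depth_tol=0.05):
    """points: list of (u, v, x, y, z) pixel/world tuples (assumed layout);
    roi: (u0, v0, u1, v1) bounding box in pixels from the detector.

    Returns a rough (x, y, z) centre of the pallet's front face.
    """
    u0, v0, u1, v1 = roi
    # Keep only cloud points whose pixel falls inside the detected RoI.
    in_roi = [p for p in points if u0 <= p[0] <= u1 and v0 <= p[1] <= v1]
    # The dominant depth inside the RoI approximates the front-face plane.
    face_z = median(p[4] for p in in_roi)
    face = [p for p in in_roi if abs(p[4] - face_z) <= depth_tol]
    return (mean(p[2] for p in face), mean(p[3] for p in face), face_z)

pts = [(10, 10, 0.0, 0.0, 2.00), (12, 10, 0.2, 0.0, 2.01),
       (11, 12, 0.1, 0.1, 1.99), (50, 50, 1.0, 1.0, 3.00)]  # last: background
print(pallet_front_face(pts, roi=(5, 5, 20, 20)))
```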

Findings

Experimentally, the orientation angle and centric location of the two kinds of pallets are investigated without any artificial marking. The results show that the pallet could be located with a three-dimensional localization accuracy of 1 cm and an angle resolution of 0.4 degrees at a distance of 3 m with the vehicle control algorithm.

Research limitations/implications

PILA’s performance is limited by the current depth camera’s range (≤3 m); this is expected to improve with a better depth measurement device in the future.

Originality/value

The results demonstrate that pallets can be located with an accuracy of 1 cm along the x, y and z directions and an angular resolution of 0.4 degrees at a distance of 3 m, within 700 ms.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 13 December 2017

Huiyu Sun, Guangming Song, Zhong Wei and Ying Zhang


Abstract

Purpose

This paper aims to tele-operate the movement of an unmanned aerial vehicle (UAV) in an obstructed environment with asymmetric time-varying delays. A simple passive proportional velocity-error plus damping injection (P-like) controller is proposed to deal with the asymmetric time-varying delays in the aerial teleoperation system.

Design/methodology/approach

This paper presents both theoretical and real-time experimental results of the bilateral teleoperation system of a UAV for collision avoidance over the wireless network. First, a position-velocity workspace mapping is used to solve the master-slave kinematic/dynamic dissimilarity. Second, a P-like controller is proposed to ensure the stability of the time-delayed bilateral teleoperation system with asymmetric time-varying delays. The stability is analyzed by the Lyapunov–Krasovskii function and the delay-dependent stability criteria are obtained under linear-matrix-inequalities conditions. Third, a vision-based localization is presented to calibrate the UAV’s pose and provide the relative distance for obstacle avoidance with a high accuracy. Finally, the performance of the teleoperation scheme is evaluated by both human-in-the-loop simulations and real-time experiments where a single UAV flies through the obstructed environment.
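
The P-like control law itself is simple to state: a proportional term on the master-slave velocity error plus injected damping. The gains and signal names below are illustrative assumptions, not the paper's tuned values:

```python
KP = 2.0   # assumed proportional gain on the velocity error
KD = 0.5   # assumed damping injection gain

def p_like_control(v_master, v_slave):
    """Command = Kp * (master velocity - slave velocity) - Kd * slave velocity.

    The damping term dissipates energy, which is what lets passivity
    arguments (Lyapunov-Krasovskii) tolerate the time-varying delays.
    """
    return KP * (v_master - v_slave) - KD * v_slave

print(p_like_control(1.0, 0.5))  # -> 0.75
```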

Findings

Experimental results demonstrate that the teleoperation system maintains passivity and that collision avoidance is achieved with high accuracy under asymmetric time-varying delays. Moreover, the operator can tele-sense force reflection, improving maneuverability in aerial teleoperation.

Originality/value

A real-time bilateral teleoperation system of a UAV for collision avoidance is performed in the laboratory. A force and visual interface is designed to provide force and visual feedback of the slave environment to the operator.

Details

Industrial Robot: An International Journal, vol. 45 no. 1
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 12 June 2017

Chen-Chien Hsu, Cheng-Kai Yang, Yi-Hsing Chien, Yin-Tien Wang, Wei-Yen Wang and Chiang-Heng Chien


Abstract

Purpose

FastSLAM is a popular method to solve the problem of simultaneous localization and mapping (SLAM). However, as the number of landmarks in real environments increases, each particle must compare the measurement against all existing landmarks, and execution becomes too slow for real-time navigation. Thus, this paper aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.

Design/methodology/approach

As an attempt to solve this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm, where odometer information is considered for updating the robot’s pose in particles. When a measurement has a maximum likelihood with the known landmark in the particle, the particle state is updated before updating the landmark estimates.
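
The association step can be sketched as follows; the Gaussian likelihood and the function names are assumptions for illustration, not the authors' implementation:

```python
import math

def likelihood(measurement, landmark, sigma=0.5):
    """Gaussian likelihood of observing `measurement` from `landmark`."""
    d = math.dist(measurement, landmark)
    return math.exp(-d * d / (2 * sigma * sigma))

def best_landmark(measurement, landmarks):
    """Index of the maximum-likelihood landmark for this particle.

    In the CESLAM idea, once this landmark is found, the particle's own
    pose is refined before the landmark estimate itself is updated.
    """
    return max(range(len(landmarks)),
               key=lambda i: likelihood(measurement, landmarks[i]))

landmarks = [(0.0, 0.0), (5.0, 5.0), (2.0, 1.0)]
print(best_landmark((2.1, 0.9), landmarks))  # -> 2 (the closest landmark)
```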

Findings

Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To practically evaluate the performance of the proposed method, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) based on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.

Originality/value

The proposed CESLAM algorithm avoids the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by CESLAM. Combining CESLAM and SURF, the authors establish CEVSLAM, which significantly improves estimation accuracy and computational efficiency. Practical experiments using a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.

Details

Engineering Computations, vol. 34 no. 4
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 11 January 2008

O. Reinoso, A. Gil, L. Payá and M. Juliá


Abstract

Purpose

This paper aims to present a teleoperation system that allows one to control a group of mobile robots in a collaborative manner. To show the capabilities of the collaborative teleoperation system, a task is presented in which the operator collaborates with a robot team to explore a remote environment in a coordinated manner. The system implements human‐robot interaction by means of natural language interfaces, allowing one to teleoperate multiple mobile robots in an unknown, unstructured environment. With the supervision of the operator, the robot team builds a map of the environment with a vision‐based simultaneous localization and mapping (SLAM) technique. The approach is well suited for search and rescue tasks and other applications where the operator may guide the exploration of the robots to certain areas in the map.

Design/methodology/approach

In contrast to a master‐slave teleoperation scheme, an exploration mechanism is proposed that integrates the commands expressed by a human operator into an exploration task, where the operator's actions are taken as advice. Consequently, the robots in the remote environment choose movements that improve the estimation of the map and best suit the operator's requirements.
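
The advice-as-a-soft-constraint idea can be illustrated by scoring each candidate exploration target on both map improvement and agreement with the operator; the weights and names below are hypothetical, not the paper's mechanism:

```python
W_MAP = 0.6      # assumed weight on expected map improvement
W_OPERATOR = 0.4 # assumed weight on agreement with the operator's advice

def choose_target(candidates):
    """candidates: list of (name, map_gain, operator_preference),
    each score normalized to [0, 1]. Returns the best target's name."""
    return max(candidates,
               key=lambda c: W_MAP * c[1] + W_OPERATOR * c[2])[0]

candidates = [("corridor", 0.9, 0.1),   # informative but not requested
              ("doorway",  0.6, 0.9)]   # explicitly requested by the operator
print(choose_target(candidates))  # -> doorway
```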

Findings

It is shown that the presented collaborative mechanism is suitable for controlling a robot team that explores an unstructured environment. Experimental results demonstrate the validity of the approach.

Practical implications

The system implements human‐robot interaction by means of natural language interfaces. The robots are equipped with stereo heads and are able to find stable visual landmarks in the environment. Based on these visual landmarks, the robot team builds a map of the environment using a vision‐based SLAM technique. Sonar proximity sensors are used to avoid collisions and find traversable paths. The robot control architecture is based on Common Object Request Broker Architecture (CORBA) technology and allows one to operate a group of robots with dissimilar features.

Originality/value

This work extends the concept of collaborative teleoperation to the exploration of a remote environment using a team of mobile robots.

Details

Industrial Robot: An International Journal, vol. 35 no. 1
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 1 April 2003

Paolo Pirjanian, Niklas Karlsson, Luis Goncalves and Enrico Di Bernardo


Abstract

One difficult problem in robotics is localization: the ability of a mobile robot to determine its position in the environment. Roboticists around the globe have been working on localization for more than 20 years; however, only in the past 4‐5 years have we seen some promising results. In this work, we describe a first‐of‐a‐kind, breakthrough technology for localization that requires only one low‐cost camera (less than USD 50) and odometry. Because of its low cost and robust performance in realistic environments, this technology is particularly well suited for consumer and commercial applications.

Details

Industrial Robot: An International Journal, vol. 30 no. 2
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 3 May 2010

S. Hamidreza Kasaei, S. Mohammadreza Kasaei, S. Alireza Kasaei, S. Amir Hassan Monadjemi and Mohsen Taheri


Abstract

Purpose

The purpose of this paper is to design and implement a team of middle‐size soccer robots that conform to the rules of the RoboCup Middle Size League.

Design/methodology/approach

First, a middle‐size soccer robot was designed according to the RoboCup rules. The proposed autonomous soccer robot comprises a mechanical platform, a motion control module, an omni‐directional vision module, a front vision module, an image processing and recognition module, target object positioning and real‐coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The robot carries a laptop computer and interface circuits to make decisions.

Findings

The omni‐directional vision sensor of the vision system handles image processing and positioning for obstacle avoidance and target tracking. A boundary‐following algorithm is applied to find the important features of the field. Sensor data fusion is utilized for the control system parameters, self‐localization and world modeling: vision‐based self‐localization and a conventional odometry system are fused for robust self‐localization. The localization algorithm includes filtering, sharing and integration of the data for the different types of objects recognized in the environment.
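
The fusion of vision-based self-localization with odometry can be illustrated with a simple complementary filter; the gain and function names are assumptions for illustration, not the team's code:

```python
BETA = 0.8  # assumed trust in the vision fix when one is available

def fuse_localization(odom_pose, vision_pose=None, beta=BETA):
    """Each pose is (x, y). When no landmark is seen (vision_pose is
    None), the robot coasts on odometry; otherwise the vision fix
    corrects the accumulated odometric drift."""
    if vision_pose is None:
        return odom_pose
    return tuple(beta * v + (1 - beta) * o
                 for o, v in zip(odom_pose, vision_pose))

print(fuse_localization((1.0, 2.0), (1.2, 2.1)))  # pulled towards the fix
print(fuse_localization((1.0, 2.0)))              # odometry only
```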

Originality/value

This paper presents the results of research work on an autonomous middle‐size soccer robot, supported by the IAU‐Khorasgan Branch (Isfahan).

Details

Industrial Robot: An International Journal, vol. 37 no. 3
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 19 October 2018

Mariusz Oszust, Tomasz Kapuscinski, Dawid Warchol, Marian Wysocki, Tomasz Rogalski, Jacek Pieniazek, Grzegorz Henryk Kopecki, Piotr Ciecinski and Pawel Rzucidlo


Abstract

Purpose

This paper aims to present a vision-based method for determination of the position of a fixed-wing aircraft that is approaching a runway.

Design/methodology/approach

The method determines the location of an aircraft based on positions of precision approach path indicator lights and approach light system with sequenced flashing lights in the image captured by an on-board camera.

Findings

As the relation of the lighting systems to the touchdown area on the considered runway is known in advance, the detected lights, seen as glowing lines or highlighted areas in the image, can be mapped onto real-world coordinates and then used to estimate the position of the aircraft. Furthermore, the colours of the lights are detected and can be used as auxiliary information.
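
Since the real-world spacing of the lights is known in advance, even a single pair of detected lights constrains the aircraft's distance via the pinhole model. The numbers below are hypothetical, for illustration only:

```python
FOCAL_PX = 1000.0      # assumed camera focal length in pixels
LIGHT_SPACING_M = 9.0  # assumed real-world spacing between two PAPI lights

def distance_to_lights(pixel_separation):
    """Similar triangles / pinhole model: Z = f * L / l, where l is the
    apparent separation of the two lights in the image (pixels)."""
    return FOCAL_PX * LIGHT_SPACING_M / pixel_separation

print(distance_to_lights(30.0))  # -> 300.0 metres from the lights
```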

Practical implications

The presented method can be considered as a potential source of flight data for autonomous approach and for augmentation of manual approach.

Originality/value

In this paper, a feasibility study of this concept is presented and initially validated.

Details

Aircraft Engineering and Aerospace Technology, vol. 90 no. 6
Type: Research Article
ISSN: 1748-8842

