Search results
1–10 of 701
Yingpeng Dai, Jiehao Li, Junzheng Wang, Jing Li and Xu Liu
Abstract
Purpose
This paper focuses on lane detection for unmanned mobile robots, for which long detection times are undesirable. Quickly detecting the lane in complex environments with poor illumination and shadows is therefore a challenge.
Design/methodology/approach
A new learning framework integrating the extreme learning machine (ELM) with an inception structure, named multiscale ELM, is proposed. It exploits the fast convergence of ELM and the ability of convolutional neural networks to extract local features at different scales. The proposed architecture has two main components: self-taught feature extraction by ELM with a convolutional layer, and bottom-up information classification based on the feature constraint. To overcome poor performance under complex conditions such as shadows and variable illumination, the paper addresses three problems: local feature learning, where the fully connected layer is replaced by a convolutional layer to extract local features; feature extraction at different scales, where integrating ELM with the inception structure speeds up parameter learning while achieving spatial interactivity across scales; and the validity of the training database, for which a method of constructing a training data set is proposed.
Findings
Experimental results on various data sets reveal that the proposed algorithm effectively improves performance under complex conditions. Experiments in a real environment on the BIT-NAZA robot platform show that the proposed algorithm achieves better performance and reliability.
Originality/value
This research can provide a theoretical and engineering basis for lane detection on unmanned robots.
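The abstract gives no implementation details, but the fast ELM training it relies on follows a standard closed-form recipe: fix random hidden-layer weights, then solve for the output weights by least squares. A minimal sketch of plain ELM only (not the authors' multiscale convolutional variant; all names and sizes here are illustrative):

```python
import numpy as np

def train_elm(X, T, n_hidden=64, rng=np.random.default_rng(0)):
    """Train a basic extreme learning machine.

    Hidden-layer weights are random and fixed; only the output
    weights are solved for, via a least-squares fit, which is
    what makes ELM training fast.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden activations
    beta = np.linalg.pinv(H) @ T                 # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit XOR-like labels in one shot, no iterative training.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W, b, beta = train_elm(X, T)
pred = predict_elm(X, W, b, beta)  # pred closely matches T
```

With more random features than training samples, the pseudo-inverse fit is exact on the training data; the paper's contribution is to feed convolutional multiscale features into this scheme rather than raw inputs.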
Halim Merabti and Khaled Belarbi
Abstract
Purpose
Rapid solution methods are still a challenge for difficult optimization problems, among them those arising in nonlinear model predictive control (NMPC). The particle swarm optimization (PSO) algorithm has shown its potential for the solution of some problems with an acceptable computation time. In this paper, we use an accelerated version of PSO (AµPSO) to solve single- and multi-objective nonlinear MPC problems for unmanned vehicles (mobile robots and a quadcopter), for trajectory tracking and obstacle avoidance. The AµPSO-NMPC was applied to control a LEGO mobile robot tracking a trajectory both with and without obstacle avoidance.
Design/methodology/approach
The accelerated PSO and the NMPC are used to control unmanned vehicles for tracking trajectories and obstacle avoidance.
Findings
The experimental results are very promising and show that AµPSO can be considered an alternative to classical solution methods.
Originality/value
The computation time is less than 0.02 ms using an Intel Core i7 with 8 GB of RAM.
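The abstract does not spell out the AµPSO formulation. As a generic illustration of the underlying idea of solving an NMPC tracking problem with plain PSO, the sketch below optimizes a short control horizon for a unicycle model (the model, horizon, gains and bounds are all invented for illustration, not taken from the paper):

```python
import numpy as np

def nmpc_cost(u_flat, state, ref, dt=0.1):
    """Tracking cost of a control sequence for a unicycle model."""
    x, y, th = state
    cost = 0.0
    for v, w in u_flat.reshape(-1, 2):  # (linear, angular) velocity pairs
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        cost += (x - ref[0]) ** 2 + (y - ref[1]) ** 2
    return cost

def pso_minimize(cost, dim, lo, hi, n=30, iters=80, seed=0):
    """Basic particle swarm optimization over a box-constrained space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # Inertia plus attraction to personal and global bests.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Drive a unicycle at (0, 0, 0) toward the reference point (1, 0)
# over a 5-step horizon; the decision vector is 5 (v, w) pairs.
state, ref = (0.0, 0.0, 0.0), (1.0, 0.0)
u_best, c_best = pso_minimize(lambda u: nmpc_cost(u, state, ref),
                              dim=10, lo=-1.0, hi=1.0)
```

In a receding-horizon loop, only the first control pair of `u_best` would be applied before re-solving; the paper's accelerated variant targets exactly this repeated, time-critical solve.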
Abstract
Purpose
The purpose of this article is to illustrate how sensors impart perceptive capabilities to robots. This is the second part of a two-part article. This second part considers positional awareness and sensing in the external environment, notably but not exclusively by autonomous, mobile robots.
Design/methodology/approach
Following a short introduction, this article first discusses positional sensing and navigation by mobile robots, including self-driving cars, automated guided vehicles, unmanned aerial vehicles (UAVs) and autonomous underwater vehicles (AUVs). It then considers sensing with UAVs and AUVs, and finally discusses robots for hazard detection. Brief concluding comments are drawn.
Findings
The article shows that sensors based on a multitude of techniques, including lidar, radar, sonar, imaging and inertial devices, confer navigational capabilities on mobile robots. UAVs, AUVs and terrestrial mobile robots can be equipped with all manner of sensors to create detailed terrestrial and underwater maps, monitor air and water quality, locate pollution and detect hazards. While existing sensors are widely used, many new devices are being developed to meet specific requirements and to comply with size, weight and cost constraints.
Originality/value
The use of mobile robots is growing rapidly, and this article provides a timely account of how sensors confer positional awareness on them and allow them to act as mobile sensing platforms.
Alejandro Ramirez‐Serrano, Hubert Liu and Giovanni C. Pettinaro
Abstract
Purpose
The purpose of this paper is to address the online localization of mobile (service) robots in real world dynamic environments. Most of the techniques developed so far have been designed for static environments. What is presented here is a novel technique for mobile robot localization in quasi‐dynamic environments.
Design/methodology/approach
The proposed approach employs a probability grid map and Bayes filtering techniques. The former represents the possible changes in the surrounding environment that a robot might face.
Findings
Simulation and experimental results show that this approach has a high degree of robustness by taking into account both sensor and world uncertainty. The methodology has been tested under different environment scenarios where diverse complex objects having different sizes and shapes were used to represent movable and non‐movable entities.
Practical implications
The results can be applied to diverse robotic systems that need to move in changing indoor environments such as hospitals and places where people might require assistance from autonomous robotic devices. The methodology is fast, efficient and can be used in fast‐moving robots, allowing them to perform complex operations such as path planning and navigation in real time.
Originality/value
What is proposed here is a novel mobile robot localization approach that enables unmanned vehicles to move effectively in real time and know their current location in dynamic environments. The approach consists of two steps: generation of the probability grid map; and a recursive position estimation methodology employing a variant of the Bayes filter.
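The recursive estimation step named above follows the standard Bayes-filter predict/correct pattern over a probability grid. A one-dimensional toy sketch (the grid world, motion model and measurement likelihood below are invented for illustration, not the paper's quasi-dynamic formulation):

```python
import numpy as np

def bayes_filter_step(belief, move, p_move, likelihood):
    """One predict/correct cycle of a discrete Bayes filter.

    belief     -- probability per grid cell
    move       -- commanded displacement in cells
    p_move     -- probability the move actually succeeded
    likelihood -- P(measurement | robot in cell) per cell
    """
    # Prediction: shift belief by the commanded move, keeping
    # some mass in place to model motion failure.
    pred = p_move * np.roll(belief, move) + (1 - p_move) * belief
    # Correction: weight by the measurement likelihood, then normalize.
    post = pred * likelihood
    return post / post.sum()

# Toy world: 5 cells in a ring, a landmark at cell 3 that the
# sensor detects with high probability.
belief = np.full(5, 0.2)  # uniform prior: position unknown
landmark_likelihood = np.array([0.1, 0.1, 0.1, 0.8, 0.1])
belief = bayes_filter_step(belief, move=1, p_move=0.9,
                           likelihood=landmark_likelihood)
# The belief now peaks at the landmark cell.
```

The paper's contribution sits in how the grid's cell probabilities are built and updated to represent movable objects; the recursion itself is this same two-step cycle applied at each time step.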
Ravinder Singh and Kuldeep Singh Nagla
Abstract
Purpose
The purpose of this research is to provide the necessary information regarding range sensors to select the best-fit sensor for robust autonomous navigation. Autonomous navigation is an emerging segment in the field of mobile robots, in which the robot navigates its environment with a high level of autonomy and little human interaction. Sensor-based perception is a prevailing aspect of autonomous mobile robot navigation, along with localization and path planning. Various range sensors are used to obtain an efficient perception of the environment, but selecting the best-fit sensor for a given navigation problem remains a vital task.
Design/methodology/approach
Autonomous navigation relies on the sensory information of various sensors, each of which depends on various operational parameters and characteristics for reliable functioning. This study presents a simple strategy for selecting the best-fit sensor based on parameters such as the environment, 2D/3D navigation, accuracy, speed and environmental conditions, for reliable autonomous navigation of a mobile robot.
Findings
This paper provides a comparative analysis of the diverse range sensors used in mobile robotics with respect to aspects such as accuracy, computational load, 2D/3D navigation and environmental conditions, to select the best-fit sensors for achieving robust navigation of an autonomous mobile robot.
Originality/value
This paper provides a straightforward platform for researchers to select the best range sensor for diverse robotics applications.
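The paper's comparative tables are not reproduced in the abstract, but the selection idea can be sketched as a weighted score over sensor attributes. In the sketch below, the attribute values, sensor names and weights are invented placeholders, not the paper's data:

```python
# Hypothetical attribute table: each value is a normalized score in [0, 1]
# (higher is better; "cost" is scored so cheap sensors score high).
SENSORS = {
    "lidar_2d":   {"accuracy": 0.90, "outdoor": 0.7, "cost": 0.4, "3d": 0.0},
    "lidar_3d":   {"accuracy": 0.95, "outdoor": 0.8, "cost": 0.1, "3d": 1.0},
    "sonar":      {"accuracy": 0.40, "outdoor": 0.5, "cost": 0.9, "3d": 0.0},
    "stereo_cam": {"accuracy": 0.60, "outdoor": 0.6, "cost": 0.8, "3d": 1.0},
}

def best_fit_sensor(weights):
    """Rank candidate sensors by a weighted sum of their attributes."""
    def score(attrs):
        return sum(weights.get(k, 0.0) * v for k, v in attrs.items())
    return max(SENSORS, key=lambda name: score(SENSORS[name]))

# A 3D-mapping task that weights accuracy and 3D capability highly
# and cares little about cost:
choice = best_fit_sensor({"accuracy": 0.4, "3d": 0.5, "cost": 0.1})
```

Changing the weights to favor cost over 3D capability would steer the choice toward cheaper 2D sensors, which is the trade-off the paper's comparison is meant to make explicit.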
Abstract
Purpose
The purpose of this paper is to investigate the effect on time to complete a task depending on how a human operator interacts with a mobile‐robot. Interaction is investigated using two tele‐operated mobile‐robot systems, three different ways of interacting with robots and several different environments. The speed of a tele‐operator in completing progressively more complicated driving tasks is also investigated.
Design/methodology/approach
Tele‐operators are timed completing a series of tasks using a joystick to control a mobile‐robot. They either watch the robot while operating it, or sit at a computer and view scenes remotely on a screen. Cameras are either mounted on the robot or positioned so that they view both the environment and the robot. Tele‐operators complete tests both with and without sensors. One robot system uses an umbilical cable and one uses a radio link.
Findings
In simple environments, a tele‐operator may perform better without a sensor system to assist them, but in more complicated environments a tele‐operator may perform better with one. Tele‐operators may also tend to perform better with a radio link than with an umbilical connection. Tele‐operators sometimes perform better with a camera mounted on the robot than with pre‐mounted cameras observing the environment (depending on the tasks being performed).
Research limitations/implications
Tele‐operated systems rely heavily on visual feedback and experienced operators. This paper investigates how to make tasks easier.
Practical implications
The paper suggests that the amount of sensor support should be varied depending on circumstances.
Originality/value
Results show that human tele‐operators perform better without the assistance of sensor systems in simple environments.
Abstract
Purpose
The following article is a “Q&A interview” conducted by Joanne Pransky of Industrial Robot Journal as a method to impart the combined technological, business and personal experience of a prominent, robotic industry engineer-turned entrepreneur regarding the evolution, commercialization and challenges of bringing a technological invention to market.
Design/methodology/approach
The interviewee is innovator Helen Greiner, Founder and CEO of CyPhy Works. Ms Greiner describes her technical and business experiences delivering ground robots into the industrial, consumer and military markets, which led to her pioneering flying robot solutions.
Findings
Helen Greiner received a bachelor’s degree in mechanical engineering and a master’s degree in computer science, both from MIT. She also holds an honorary doctor of engineering degree from Worcester Polytechnic Institute. Greiner is one of the three co-founders of iRobot Corp (Nasdaq: IRBT) and served as iRobot’s Vice President of Engineering (1990-1994), President (1994-2008), and Chairman (2004-2008). She founded CyPhy Works in 2008. Greiner has also served as the President, Board Member for the Robotics Technology Consortium; a Trustee for MIT; and is currently a Trustee for the Boston Museum of Science.
Originality/value
Inspired as a child by the movie Star Wars, Greiner’s life goal has been to create robots. Greiner was one of the three people who founded iRobot Corporation and developed a culture of innovation that led to the Roomba Autonomous Vacuuming Robot. There are now more than 12 million Roombas worldwide. She also led iRobot’s entry into the military marketplace with the creation and deployment of over 6,000 PackBot robots. Greiner has received many awards and honors for her contributions to technology innovation and business leadership. She was named by the Kennedy School at Harvard, in conjunction with US News and World Report, as one of America’s Best Leaders and was honored by the Association for Unmanned Vehicle Systems International with the prestigious Pioneer Award. She has also been honored as a Technology Review Magazine “Innovator for the Next Century” and has been awarded the DEMO God Award and DEMO Lifetime Achievement Award. She was named one of the Ernst & Young New England Entrepreneurs of the Year, invited to the World Economic Forum as a Global Leader of Tomorrow and Young Global Leader, and has been inducted into the Women in Technology International Hall of Fame.
Junlin Cheng, Peiyu Ma, Qiang Ruan, Yezhuo Li and Qianqian Zhang
Abstract
Purpose
The purpose of this paper is to propose an overall deformation rolling mechanism based on a double four-link mechanism. The double quadrilateral mobile mechanism (DQMM) has two switchable working modes which can be used to traverse different terrains or climb over obstacles.
Design/methodology/approach
The main body of the DQMM is composed of a double four-link mechanism that shares a common link, and two symmetrical steering platforms placed at the two ends of the four-link mechanism. The steering platforms give the DQMM not only steering ability but also reconnaissance ability, achieved by carrying sensors such as cameras on the platforms. By controlling its deformation, the DQMM can switch between two working modes (tracked rolling mode and obstacle-climbing mode) to achieve the functions of rolling and obstacle-climbing. A dynamic simulation model was established to verify feasibility.
Findings
Based on the kinematics analysis and simulation results of the DQMM, its moving function is realized by the tracked rolling mode, and the obstacle-climbing mode is used to climb over obstacles in structured terrains such as continuous stairs. The feasibility of the two working modes is verified on a physical prototype.
Originality/value
The work of this paper is a new exploration of applying an “overall closed moving linkage mechanism” to the area of small mobile mechanisms. Adaptability to different terrains and obstacle-climbing ability are improved by the combination of multiple modes.
Abstract
Purpose
The purpose of this paper is to investigate the effect on completion of mobile‐robot tasks depending on how a human tele‐operator interacts with a sensor system and a mobile‐robot.
Design/methodology/approach
Interaction is investigated using two mobile‐robot systems, three different ways of interacting with the robots and several different environments of increasing complexity. In each case, operation is investigated with and without sensor systems that assist an operator in moving a robot through narrower and narrower gaps and in completing progressively more complicated driving tasks. Tele‐operators used a joystick and either watched the robot while operating it, or sat at a computer and viewed scenes remotely on a screen. Cameras were either mounted on the robot to view the space ahead of it, or mounted remotely so that they viewed both the environment and the robot. Every test was compared with the sensor systems engaged and with them disconnected.
Findings
A main conclusion is that human tele‐operators perform better without the assistance of sensor systems in simple environments, and in those cases it may be better to switch off the sensor systems or reduce their effect. In addition, tele‐operators sometimes performed better with a camera mounted on the robot than with pre‐mounted cameras observing the environment (depending on the tasks being performed).
Research limitations/implications
Tele‐operators completed tests both with and without sensors. One robot system used an umbilical cable and one used a radio link.
Practical implications
The paper quantifies the difference between tele‐operation control and sensor‐assisted control when a robot passes through narrow passages. This could be useful information when system designers decide whether a system should be tele‐operated, automatic or sensor‐assisted. The paper suggests that in simple environments the amount of sensor support should be small, but in more complicated environments more sensor support needs to be provided.
Originality/value
The paper investigates the effect on completion of mobile‐robot tasks of whether and how a human tele‐operator interacts with a sensor system and a mobile‐robot. Results are presented from investigations using two mobile‐robot systems, three different ways of interacting with the robots and several different environments of increasing complexity. The change in the ability of a human operator to complete progressively more complicated driving tasks with and without a sensor system is presented; the tele‐operators performed better without the assistance of sensor systems in simple environments.
Abstract
Purpose
The purpose of this paper is to investigate the effect of time delay on the ability of a human operator to complete a task with a teleoperated mobile‐robot using two systems, two different ways of interacting with the mobile‐robots and several different environments.
Design/methodology/approach
Teleoperators are observed completing a series of tasks using a joystick to control a mobile‐robot while time delays are introduced to the system. They sit at a computer and view scenes remotely on a screen. Cameras are either mounted on the robot or mounted externally so that they view both the environment and robot. Teleoperators complete the tests both with and without sensors. One robot system uses an umbilical cable and one uses a radio link.
Findings
In simple environments, a teleoperator may perform better without a sensor system to assist them, but as time delays are introduced there are more failures. In more complicated environments, or when time delays are longer, teleoperators perform better with a sensor system to assist. Teleoperators may also tend to perform better with a radio link than with an umbilical connection.
Research limitations/implications
Teleoperated systems rely heavily on visual feedback and experienced operators. This paper investigates the effect of introducing a delay to the delivery of that visual feedback.
Practical implications
The paper suggests that in simple environments with short time delays the amount of sensor support should be small, but in more complicated environments or with longer delays more sensor support needs to be provided.
Originality/value
Results from imposing time delays on a teleoperated mobile‐robot are presented. Effects on the task of different ways of viewing activity on a computer display are presented, that is with cameras mounted on the robot or cameras mounted externally to view both the environment and robot. Results from using sensors to assist teleoperators are presented. The paper suggests that the amount of sensor support should be varied depending on circumstances.