Yipeng Zhu, Tao Wang and Shiqiang Zhu
Abstract
Purpose
This paper aims to develop a robust person tracking method for human following robots. The tracking system adopts the multimodal fusion results of millimeter wave (MMW) radars and monocular cameras for perception. A prototype of human following robot is developed and evaluated by using the proposed tracking system.
Design/methodology/approach
Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information for objects in view, but cannot provide spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed for the robot to follow the target person with expected distance.
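The radar–camera association described above can be sketched minimally: project each sparse MMW radar return into the image with a pinhole model and attach it to any detected person bounding box it falls inside, taking a robust range estimate from the associated points. This is an illustrative sketch, not the paper's implementation; the camera intrinsics, box format and median-range choice are all assumptions.

```python
import numpy as np

def project_to_image(point_xyz, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Pinhole projection of a radar point (camera frame, metres) to pixels.

    The intrinsics are illustrative defaults, not a real calibration.
    """
    x, y, z = point_xyz
    return np.array([fx * x / z + cx, fy * y / z + cy])

def fuse(radar_points, person_boxes):
    """Associate radar points with camera person detections.

    radar_points: list of (x, y, z) in the camera frame.
    person_boxes: list of (u_min, v_min, u_max, v_max) pixel boxes.
    Returns one (box_index, range_m) pair per matched box.
    """
    matches = []
    for i, (u0, v0, u1, v1) in enumerate(person_boxes):
        # Collect radar returns whose projection falls inside the box.
        in_box = [p for p in radar_points
                  if u0 <= project_to_image(p)[0] <= u1
                  and v0 <= project_to_image(p)[1] <= v1]
        if in_box:
            # Person range = median range of associated points, which is
            # robust against the sparse, noisy MMW point cloud.
            ranges = [np.linalg.norm(p) for p in in_box]
            matches.append((i, float(np.median(ranges))))
    return matches
```

The resulting (box, range) pairs give the camera's semantic label a spatial location, which is the complementarity the paragraph above describes.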
Findings
A series of experiments under different circumstances was carried out to validate the fusion-based tracking method. Experimental results show that the average tracking error is around 0.1 m. It is also found that the robot can handle different situations, overcome short-term interference, and continually track and follow the target person.
Originality/value
This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlapping is handled well with the help of velocity information from the radars. Compared with other state-of-the-art approaches, the sensor fusion method is cost-effective and requires no additional tags to be worn by people. Its stable performance shows good application prospects for human following robots.
Myagmarbayar Nergui, Yuki Yoshida, Nevrez Imamoglu, Jose Gonzalez, Masashi Sekine and Wenwei Yu
Abstract
Purpose
The aim of this paper is to develop autonomous mobile home healthcare robots, which are capable of observing patients' motions, recognizing the patients' behaviours based on observation data, and automatically calling for medical personnel in emergency situations. The robots to be developed will bring about cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs).
Design/methodology/approach
The paper develops the following programs/control algorithms: control algorithms for a mobile robot to track and follow human motions, to measure human joint trajectories and to calculate angles of lower-limb joints; and algorithms for recognizing human gait behaviours based on the calculated joint angle data.
Findings
A Hidden Markov Model (HMM)-based human gait behaviour recognition method taking lower-limb joint angles and body angle as input was proposed. The proposed HMM-based gait behaviour recognition is compared with Nearest Neighbour (NN) classification methods. Experimental results showed that human gait behaviour recognition using an HMM can be achieved from the lower-limb joint trajectory with higher accuracy than the compared classification methods.
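Likelihood-based HMM classification of the kind compared here can be sketched with the scaled forward algorithm: keep one HMM per gait class and pick the class whose model assigns the sequence the highest likelihood. This is a generic sketch, not the paper's trained models; the discrete observation symbols stand in for quantised joint-angle data, and the toy parameters in the test are invented for illustration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi: (S,) initial state probabilities; A: (S, S) transition matrix;
    B: (S, O) emission probabilities; obs: sequence of symbol indices.
    Uses the scaled forward algorithm to avoid numerical underflow.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                      # rescale to keep alpha a distribution
    return loglik

def classify(obs, models):
    """Pick the gait class whose HMM gives the highest sequence likelihood."""
    scores = {name: forward_loglik(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get)
```

An NN baseline, by contrast, would compare raw angle sequences directly, which is why the HMM's explicit temporal model tends to win on gait data.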
Originality/value
The research addresses human motion tracking and recognition by a mobile robot. Human gait behaviour recognition is HMM-based, using lower-limb joint and body angle data extracted from a Kinect sensor mounted on the mobile robot.
K. Satya Sujith and G. Sasikala
Abstract
Purpose
Object detection models have gained considerable popularity, as they aid many applications, such as monitoring and video surveillance. Object detection through video tracking faces many challenges, as most videos obtained as real-time streams are affected by environmental factors.
Design/methodology/approach
This research develops a system for crowd tracking and crowd behaviour recognition using a hybrid tracking model. The input to the proposed crowd tracking system is high-density crowd videos containing hundreds of people. The first step is to detect humans through a visual recognition algorithm. Here, a priori knowledge of the location point is given as input to the visual recognition algorithm, which identifies the human through constraints defined within a Minimum Bounding Rectangle (MBR). Then, a spatial tracking model tracks the path of the human object's movement in the video frame, with tracking carried out by extracting colour histogram and texture features. In addition, a temporal tracking model based on a NARX neural network is applied, which is effectively utilized to detect the locations of moving objects. Once the path of a person is tracked, the behaviour of every human object is identified using an Optimal Support Vector Machine (OSVM), newly developed by combining an SVM with an optimization algorithm, namely MBSO. The proposed MBSO algorithm is developed through the integration of the existing techniques BSA and MBO.
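The colour-histogram part of the spatial tracking step can be sketched as template matching: describe the tracked person by a normalised colour histogram and, in each new frame, pick the candidate patch whose histogram is most similar. This is a minimal sketch of the general technique, not the paper's pipeline; the bin count and the Bhattacharyya similarity are assumptions, and the texture features and NARX temporal model are omitted.

```python
import numpy as np

def colour_hist(patch, bins=16):
    """Normalised per-channel colour histogram of an image patch (H, W, 3)."""
    h = np.concatenate([np.histogram(patch[..., c], bins=bins,
                                     range=(0, 256))[0] for c in range(3)])
    return h / h.sum()

def bhattacharyya(h1, h2):
    """Similarity in [0, 1] between two normalised histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

def best_candidate(template_hist, candidates):
    """Index of the candidate patch most similar to the tracked template."""
    scores = [bhattacharyya(template_hist, colour_hist(c)) for c in candidates]
    return int(np.argmax(scores))
```

In a full tracker the winning patch updates the person's path, and the temporal model predicts where to place the candidate patches in the next frame.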
Findings
The dataset for object tracking is taken from the Tracking in High Crowd Density dataset. The proposed OSVM classifier attained improved performance, with an accuracy of 0.95.
Originality/value
This paper presents a hybrid high-density video tracking model and a behaviour recognition model. The proposed hybrid tracking model tracks the path of an object in the video through temporal tracking and spatial tracking. The extracted features train the proposed OSVM classifier based on the weights selected by the proposed MBSO algorithm. The proposed MBSO algorithm can be regarded as a modified version of the BSO algorithm.
Laura Duarte, Mohammad Safeea and Pedro Neto
Abstract
Purpose
This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, measuring motion with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms reporting three-dimensional (3D) hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human–robot interactions and for obstacle avoidance in human–robot safety applications.
Design/methodology/approach
Event data are pre-processed into intensity frames. Regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are extracted for use in depth perception.
Findings
Event-based tracking of the human hand is demonstrated to be feasible, in real time and at a low computational cost. The proposed ROI-finding method reduces noise from intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), evaluated using dynamic time warping with a single event camera, is from 15 to 30 mm, depending on the plane in which it is measured.
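The dynamic time warping (DTW) comparison used for the depth-error evaluation can be sketched in a few lines: DTW aligns two sequences that may be locally shifted in time and sums the per-sample costs along the best alignment. This is the textbook algorithm, not the paper's evaluation code; the absolute-difference cost is an assumption.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1D sequences.

    D[i, j] holds the minimum cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three neighbouring alignments.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match
    return float(D[n, m])
```

Because DTW tolerates timing offsets, it compares the estimated depth trajectory against the wearable ground truth without penalising small latency differences between the sensors.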
Originality/value
Tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features (hand tracking in space).
M. Fatih Talu, Servet Soyguder and Ömür Aydogmus
Abstract
Purpose
The purpose of the paper is to present an approach to detect and isolate sensor failures using a bank of extended Kalman filters (EKFs) with an innovative initialization of the covariance matrix based on system dynamics.
Design/methodology/approach
An EKF is developed for nonlinear flight dynamic estimation of a spacecraft, and the effect of sensor failures using a bank of Kalman filters is investigated. The approach is to develop a fast-convergence Kalman filter algorithm based on covariance matrix computation for rapid sensor fault detection. The proposed nonlinear filter has been tested and compared with classical Kalman filter schemes via simulations performed on the model of a space vehicle; this simulation activity has shown the benefits of the novel approach.
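The filter-bank fault-detection idea can be sketched with a much simpler stand-in: run one filter per sensor on the same scalar state and flag any sensor whose normalised innovations are consistently larger than a chi-square-style gate. This is a linear scalar sketch of the general technique, not the paper's EKF bank; the noise parameters and the gate value are illustrative assumptions.

```python
import numpy as np

def detect_faulty_sensor(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.01,
                         gate=9.0):
    """Flag sensors whose Kalman innovations fail a chi-square-style gate.

    measurements: (n_sensors, n_steps) readings of the same scalar state.
    Runs a scalar constant-state Kalman filter per sensor and reports the
    indices whose mean normalised innovation squared exceeds `gate`.
    """
    faulty = []
    for i, zs in enumerate(measurements):
        x, p, nis = x0, p0, []
        for z in zs:
            p += q                      # predict (state assumed constant)
            s = p + r                   # innovation covariance
            nu = z - x                  # innovation
            nis.append(nu * nu / s)     # normalised innovation squared
            k = p / s                   # Kalman gain
            x += k * nu                 # state update
            p *= (1 - k)                # covariance update
        if np.mean(nis) > gate:
            faulty.append(i)
    return faulty
```

In the full scheme each EKF in the bank is built to be insensitive to one sensor, so the pattern of which filters' residuals blow up both detects and isolates the fault.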
Findings
In the simulations, the rotational dynamics of a spacecraft dynamic model are considered, and the sensor failures are detected and isolated.
Research limitations/implications
A novel fast-convergence Kalman filter for detection and isolation of faulty sensors, applied to the three-axis spacecraft attitude control problem, is examined, and an effective approach to isolate the faulty sensor measurements is proposed. Advantages of the innovative initialization of the covariance matrix are presented in the paper. The proposed scheme improves estimation accuracy. The proposed method takes advantage of both fast convergence capability and the robustness of numerical stability. Quaternion-based initialization of the covariance matrix is not considered in this paper.
Originality/value
A new fast-converging Kalman filter for sensor fault detection and isolation, based on an innovative initialization of the covariance matrix and applied to a nonlinear spacecraft dynamic model, is examined, and an effective approach to isolate the measurements from failed sensors is proposed. An EKF has been developed for the nonlinear dynamic estimation of an orbiting spacecraft. The proposed methodology detects and decides if and where a sensor fault has occurred, isolates the faulty sensor, and outputs the corresponding healthy sensor measurement.
Gilbert Tang, Seemal Asif and Phil Webb
Abstract
Purpose
The purpose of this paper is to describe the integration of a gesture control system for industrial collaborative robots. Human–robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely.
Design/methodology/approach
The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system has been tested in isolation before integration and demonstration of a sample task.
Findings
It is demonstrated that the combination of multiple gesture control methods can broaden the potential applications of industrial robots.
Originality/value
The novelty of the system is the combination of dual gesture control methods, which allows operators to command an industrial robot by posing hand gestures as well as to control the robot motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and also allows assignment of permission levels to different users.
Toan Van Nguyen, Minh Hoang Do and Jaewon Jo
Abstract
Purpose
To follow and maintain an appropriate distance to the selected target person (STP), a mobile robot requires two capabilities: human detection and tracking, and an efficient following strategy that behaves smoothly and does not appear threatening to the STP and the surroundings. The efficient following strategy must integrate the STP position and obstacle information to achieve smooth and safe human-following behaviours, especially in unknown environments of which the robot has no prior understanding. The purpose of this study is to propose a robust-adaptive-behaviour strategy for mobile robots.
Design/methodology/approach
This paper presents a robust-adaptive-behaviour strategy (RABS) based on a fuzzy inference mechanism to help the robot follow the STP effectively in various unknown environments, both indoor and outdoor and on different robot platforms, with real-time obstacle avoidance. The traversability of the robot's unknown surrounding environment is analysed using the STP position and the obstacle information obtained from a two-dimensional laser scan, in order to choose the highest-traversability-score direction (HTSD) and an adaptive safe-following distance (ASFD). Then, the HTSD, the ASFD and the current velocity of the robot are taken as inputs of the fuzzy system to adjust its velocity smoothly.
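A fuzzy inference step of the kind described can be sketched with triangular memberships and weighted-average defuzzification, using distance to the person as the single input for brevity. The rule base and all breakpoints below are invented for illustration and are not the paper's fuzzy system, which also takes the HTSD and current velocity as inputs; only the 1.5 m/s ceiling comes from the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def following_speed(distance):
    """Fuzzy speed command (m/s) from the distance to the person (m).

    Illustrative rule base: near -> stop, medium -> cruise,
    far -> maximum speed (1.5 m/s, the limit quoted in the abstract).
    Defuzzified by the weighted average of the rule outputs.
    """
    rules = [
        (tri(distance, -1.0, 0.0, 1.5), 0.0),   # near   -> stop
        (tri(distance, 0.5, 1.5, 3.0), 0.8),    # medium -> cruise
        (tri(distance, 1.5, 3.5, 6.0), 1.5),    # far    -> max speed
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because overlapping memberships blend adjacent rules, the commanded speed varies continuously with distance, which is what produces the smooth, non-threatening following behaviour.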
Findings
The proposed RABS is verified by a set of experiments using a real big-heavy autonomous mobile robot (BH-AMR), with dimensions of 0.8 × 1.2 m, a weight of 150 kg and a full load of 500 kg, serving smart factories. The results show that the proposed RABS equips the BH-AMR with the ability to follow the STP smoothly and safely, even when the robot is moving at its maximum speed of 1.5 m/s.
Research limitations/implications
In this paper, the autonomous mobile robot treats all environments as unknown, even when it is working in mapped environments. This limitation is discussed in the future works section.
Practical implications
The proposed method can be used to help autonomous mobile robots support people in factories, hospitals, restaurants, supermarkets or at airports.
Originality/value
This paper presents a RABS with three new features: a fuzzy-based solution that helps human-following robots maintain an appropriate distance to the STP safely and smoothly at velocities up to 1.5 m/s; a combination of the proposed fuzzy-based solution, an adaptive vector field histogram and a new approach to STP tracking, which follows the STP and avoids collisions simultaneously in unknown indoor and outdoor environments; and validation of the proposed RABS on BH-AMRs (dimensions 0.8 × 1.2 m, weight 150 kg, full load 500 kg) serving real tasks in smart factories.
Ruifeng Li and Wei Wu
Abstract
Purpose
In corridor environments, human-following robots encounter difficulties when the target turns around at corridor intersections, as walls may cause complete occlusion. This paper aims to propose a collision-free following system that enables a robot to track humans in corridors without a prior map.
Design/methodology/approach
In addition to following a target and avoiding collisions robustly, the proposed system calculates the positions of walls in the environment in real time. This allows the system to maintain stable tracking of the target even when it is obscured after turning. The proposed solution is integrated into a four-wheeled differential-drive mobile robot to follow a target in a real-world corridor environment.
Findings
The experimental results demonstrate that the robot equipped with the proposed system is capable of avoiding obstacles and following a human target robustly in corridors. Moreover, the robot achieves a 90% success rate in maintaining stable tracking of the target after the target turns around a corner at high speed.
Originality/value
This paper proposes a human target following system incorporating the following novel features: a path planning method based on wall positions is introduced to ensure stable tracking of the target even when it is obscured by target turns; and improvements are made to the random sample consensus (RANSAC) algorithm, enhancing its accuracy in calculating wall positions. The system is integrated into a four-wheeled differential-drive mobile robot and effectively demonstrates remarkable robustness and real-time performance.
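The wall-estimation step rests on RANSAC line fitting over 2D laser points, which can be sketched in its standard form: repeatedly fit a line through two random points and keep the hypothesis with the most inliers. This is the baseline algorithm only; the paper's accuracy improvements to RANSAC are not reproduced, and the iteration count and inlier tolerance are illustrative assumptions.

```python
import random
import numpy as np

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit a wall line ax + by + c = 0 (with a^2 + b^2 = 1) to 2D points.

    Standard RANSAC: sample two points, form the line through them,
    and keep the hypothesis with the most inliers within `tol` metres.
    Returns ((a, b, c), inlier_count).
    """
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_line, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = pts[rng.sample(range(len(pts)), 2)]
        a, b = y2 - y1, x1 - x2          # normal of the line through the pair
        norm = np.hypot(a, b)
        if norm == 0:                    # degenerate sample, skip
            continue
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = int(np.sum(np.abs(a * pts[:, 0] + b * pts[:, 1] + c) < tol))
        if inliers > best_inliers:
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

Running this on each laser scan yields the wall lines whose positions the path planner uses to keep tracking a target occluded behind a corner.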
Ping Zhang, Guanglong Du and Di Li
Abstract
Purpose
The aim of this paper is to present a novel methodology which incorporates Camshift, Kalman filters (KFs) and adaptive multi-space transformation (AMT) for a human–robot interface that leverages human intelligence in teleoperation.
Design/methodology/approach
In the proposed method, an inertial measurement unit is used to measure the orientation of the human hand, and a Camshift algorithm is used to track the human hand using a three-dimensional camera. Although the location and orientation of the hand can be obtained from the two sensors, the measurement error increases over time due to device noise and tracking errors. KFs are therefore used to estimate the location and orientation of the human hand. Moreover, owing to perceptive and motor limitations, it is difficult for a human operator to carry out high-precision operations. An AMT method is proposed to assist the operator in improving accuracy and reliability when determining the pose of the robot.
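The KF smoothing of the Camshift output can be sketched with a constant-velocity filter over 2D pixel positions. This is a generic sketch, not the paper's filter, which also fuses IMU orientation; the state layout, frame rate and noise magnitudes are illustrative assumptions.

```python
import numpy as np

def smooth_track(measurements, dt=1/30, q=1.0, r=25.0):
    """Constant-velocity Kalman filter over noisy 2D hand positions.

    measurements: (T, 2) pixel positions from a tracker such as Camshift.
    Returns the (T, 2) filtered positions. q and r set the assumed
    process and measurement noise magnitudes.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                    # state: [x, y, vx, vy]
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                   # we only observe position
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4) * 100.0
    out = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)    # update with the measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

The velocity states let the filter bridge short tracker dropouts by prediction, which is what counteracts the drift described above.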
Findings
The experimental results show that this method does not hinder most natural human-limb motion and allows the operator to concentrate on his/her own task. Compared with the non-contacting marker-less method (Kofman et al., 2007), this method proves more accurate and stable.
Originality/value
The human-robot interface system was experimentally verified in a laboratory environment, and the results indicate that such a system can complete high-precision manipulation efficiently.
Abstract
Purpose
This paper aims to provide a technical insight into a selection of robotic people detection technologies and applications.
Design/methodology/approach
Following an introduction, this paper first discusses people-sensing technologies which seek to extend the capabilities of human–robot collaboration by allowing humans to operate alongside conventional industrial robots. It then provides examples of developments in people detection and tracking in unstructured, dynamic environments. Developments in people sensing and monitoring by assistive robots are then considered and, finally, brief concluding comments are drawn.
Findings
Robotic people detection technologies are the topic of an extensive research effort and are becoming increasingly important, as growing numbers of robots interact directly with humans. These are being deployed in industry, in public places and in the home. The sensing requirements vary according to the application and range from simple person detection and avoidance to human motion tracking, behaviour and safety monitoring, individual recognition and gesture sensing. Sensing technologies include cameras, lasers and ultrasonics, and low cost RGB-D cameras are having a major impact.
Originality/value
This article provides details of a range of developments involving people sensing in the important and rapidly developing field of human-robot interactions.