Search results

Article
Publication date: 20 October 2014

Haitao Yang, Minghe Jin, Zongwu Xie, Kui Sun and Hong Liu

Abstract

Purpose

The purpose of this paper is to develop a ground verification and test method for a space robot system capturing a target satellite based on visual servoing with time delay in three-dimensional space, before the space robot is launched.

Design/methodology/approach

To implement the approaching and capturing task, a motion planning method for visual servoing of the space manipulator to capture a moving target is presented. It is mainly used to address the time-delay problem of the visual servoing control system and the motion uncertainty of the target satellite. To verify and test the feasibility and reliability of the method in three-dimensional (3D) operating space, a ground hardware-in-the-loop simulation verification system is developed, which adopts end-tip kinematics equivalence and a dynamics simulation method.
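
To illustrate the core delay-compensation idea (not the authors' exact planner), here is a minimal Python sketch that extrapolates the delayed target pose forward under an assumed constant-velocity target model; the function name and the pose representation are hypothetical:

```python
import numpy as np

def compensate_delay(pose_history, timestamps, delay):
    """Predict the target pose at the current instant from delayed visual
    measurements, assuming a constant-velocity target model (an assumption
    for illustration, not necessarily the paper's model).

    pose_history : (N, 6) array of [x, y, z, roll, pitch, yaw] samples
    timestamps   : (N,) array of measurement times (seconds)
    delay        : total vision + communication latency (seconds)
    """
    # Estimate target velocity from the two most recent (delayed) poses.
    dt = timestamps[-1] - timestamps[-2]
    velocity = (pose_history[-1] - pose_history[-2]) / dt
    # Extrapolate the delayed pose forward over the latency interval.
    return pose_history[-1] + velocity * delay

# Usage: feed the predicted pose, rather than the raw delayed measurement,
# to the manipulator's visual-servoing motion planner.
```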

Findings

The results of the ground hardware-in-the-loop simulation experiments validate the reliability of the eye-in-hand visual system in the 3D operating space and prove the validity of the visual servoing motion planning method with time-delay compensation. Moreover, because a dynamics simulator of the space robot is included in the ground hardware-in-the-loop verification system, the base disturbance can be considered during the approaching and capturing procedure, which makes the ground verification system realistic and credible.

Originality/value

The ground verification experiment system includes the real controller of the space manipulator, the eye-in-hand camera and the dynamics simulator, so it can faithfully simulate the visual-servoing-based capture process in space while accounting for the effects of time delay and the disturbance of the free-floating base.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 March 2011

Matthew Field, Zengxi Pan, David Stirling and Fazel Naghdy

Abstract

Purpose

The purpose of this paper is to provide a review of various motion capture technologies and discuss the methods for handling the captured data in applications related to robotics.

Design/methodology/approach

The approach taken in the paper is to compare the features and limitations of motion trackers in common use. After introducing the technology, a summary is given of robotics-related work undertaken with the sensors, and the strengths of different approaches to handling the data are discussed. Each comparison is presented in a table. Results from the authors' experimentation with an inertial motion capture system are discussed based on clustering and segmentation techniques.

Findings

The trend in methodology is towards stochastic machine learning techniques such as the hidden Markov model and the Gaussian mixture model, their hierarchical extensions, and non-linear dimension reduction. The resulting empirical models tend to handle uncertainty well and are suitable for incremental updating. Today's challenges in human-robot interaction include generalising beyond captured motions to understand motion planning and decision making, and ultimately building context-aware systems.
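
As a sketch of the kind of clustering-based segmentation the review discusses (not the authors' exact pipeline), the following Python fragment fits a Gaussian mixture model to mocap frames with scikit-learn and cuts the sequence at cluster-label changes; the function name and the choice of five primitives are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_motion(joint_angles, n_primitives=5, seed=0):
    """Cluster frames of a mocap stream into motion primitives with a
    Gaussian mixture model, then cut the sequence where the label changes.

    joint_angles : (T, D) array, one row of D joint angles per frame
    n_primitives : assumed number of motion primitives (illustrative)
    """
    gmm = GaussianMixture(n_components=n_primitives, random_state=seed)
    labels = gmm.fit_predict(joint_angles)
    # Segment boundaries are the frames where the cluster label switches.
    boundaries = np.flatnonzero(np.diff(labels)) + 1
    return labels, boundaries
```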

Originality/value

Reviews that describe motion trackers together with recent methodologies for analysing the data they capture are uncommon. Some exist, as pointed out in the paper, but this review concentrates more on applications in the robotics field. There is value in regularly surveying the research areas considered in this paper, given the rapid progress in sensors and especially in data modelling.

Details

Industrial Robot: An International Journal, vol. 38 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 March 2017

Bin Fang, Fuchun Sun, Huaping Liu and Di Guo

Abstract

Purpose

The purpose of this paper is to present a novel data glove that captures the motion of the arm and hand with inertial and magnetic sensors. The proposed data glove is used to provide gesture information and to teleoperate a robotic arm-hand.

Design/methodology/approach

The data glove comprises 18 low-cost inertial and magnetic measurement units (IMMUs), which not only overcome the drawback of traditional data gloves that capture only incomplete gesture information but also enable a novel scheme for robotic arm-hand teleoperation. The IMMUs are compact and small enough to wear on the upper arm, forearm, palm and fingers. A calibration method is proposed to improve the measurement accuracy of the units, and the orientation of each IMMU is estimated by a two-step optimal filter. The kinematic models of the arm, hand and fingers are integrated into the entire system to capture the motion gesture. A position algorithm is also derived to compute the positions of the fingertips. With the proposed data glove, the robotic arm-hand can be teleoperated by the human arm, palm and fingers, establishing a novel robotic arm-hand teleoperation scheme.
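
As an illustration of how per-segment orientations can yield a fingertip position (a generic kinematic-chain sketch, not the paper's position algorithm), the following Python fragment accumulates rotated segment vectors; the world-frame quaternion convention and the local x-axis segment direction are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def fingertip_position(segment_quats, segment_lengths, origin=np.zeros(3)):
    """Accumulate a kinematic chain: each IMMU is assumed to report its
    segment's orientation as a world-frame quaternion [x, y, z, w]; summing
    the rotated segment vectors yields the fingertip position.

    segment_quats   : list of 4-element quaternions, shoulder -> fingertip
    segment_lengths : list of segment lengths (metres), same order
    """
    position = origin.astype(float)
    for quat, length in zip(segment_quats, segment_lengths):
        # Assume each segment points along its local x-axis; rotate it
        # into the world frame and step along the chain.
        direction = Rotation.from_quat(quat).apply([length, 0.0, 0.0])
        position = position + direction
    return position
```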

Findings

Experimental results show that the proposed data glove can accurately and fully capture fine gestures. Using the proposed data glove as the input device has also proved suitable for teleoperating a robotic arm-hand system.

Originality/value

Integrated with 18 low-cost and miniature IMMUs, the proposed data glove provides more gesture information than existing devices, and the proposed motion capture algorithms yield superior results. Furthermore, the accurately captured gestures efficiently enable a novel scheme for teleoperating the robotic arm-hand.

Details

Industrial Robot: An International Journal, vol. 44 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 July 2016

Meiyin Liu, SangUk Han and SangHyun Lee

Abstract

Purpose

As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of capturing human motion data for on-site motion sensing and analysis.

Design/methodology/approach

This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints in each image, the proposed method tracks the locations of the body joints over successive frames by learning from the initialized body posture. The corresponding body joints are then identified and matched in the image sequences from the other lens and reconstructed in 3D space through triangulation to build 3D skeleton models. For validation, a lab test is conducted to evaluate the accuracy and working ranges of the proposed method.
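
As a sketch of the triangulation step (standard linear triangulation, not necessarily the authors' exact formulation), the following Python fragment reconstructs one joint from a calibrated stereo pair; the function name and input layout are assumptions:

```python
import numpy as np

def triangulate_joint(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one body joint from a calibrated
    stereo pair.

    P_left, P_right   : (3, 4) camera projection matrices
    uv_left, uv_right : (u, v) pixel coordinates of the tracked joint
    """
    u1, v1 = uv_left
    u2, v2 = uv_right
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise to 3D coordinates
```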

Findings

Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can potentially be used for site data collection. Thus, the proposed approach has potential for various field analyses of construction workers' safety and ergonomics.

Originality/value

Recently, motion capture technologies have been rapidly developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores two smartphones used as stereo cameras as a potentially suitable means of data collection in construction because of their lighter operational constraints (e.g. no on-body sensor required, less sensitivity to sunlight and flexible ranges of operation).

Details

Construction Innovation, vol. 16 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 11 June 2019

Muhammad Yahya, Jawad Ali Shah, Kushsairy Abdul Kadir, Zulkhairi M. Yusof, Sheroz Khan and Arif Warsi

Abstract

Purpose

Motion capture (MoCap) systems have been used to measure human body segments in several applications, including film special effects, health care, outer-space and under-water navigation systems, sea-water exploration, human-machine interaction and learning software that helps teachers of sign language. The purpose of this paper is to help researchers select a specific MoCap system for various applications and develop new algorithms related to upper limb motion.

Design/methodology/approach

This paper provides an overview of different sensors used in MoCap and techniques used for estimating human upper limb motion.

Findings

Existing MoCap systems suffer from several issues depending on their type. These issues include drift and placement of inertial sensors, occlusion and jitter in Kinect, noise in electromyography signals, and the requirement for a well-structured, calibrated environment and the time-consuming task of placing markers in multiple-camera systems.

Originality/value

This paper outlines the issues and challenges of MoCap systems for measuring human upper limb motion and provides an overview of the techniques to overcome them.

Details

Sensor Review, vol. 39 no. 4
Type: Research Article
ISSN: 0260-2288

Open Access
Article
Publication date: 22 July 2019

Wenbin Xu, Xudong Li, Liang Gong, Yixiang Huang, Zeyuan Zheng, Zelin Zhao, Lujie Zhao, Binhao Chen, Haozhe Yang, Li Cao and Chengliang Liu

Abstract

Purpose

This paper aims to present a human-in-the-loop natural teaching paradigm based on scene-motion cross-modal perception, which facilitates manipulation intelligence and robot teleoperation.

Design/methodology/approach

The proposed natural teaching paradigm is used to telemanipulate a life-size humanoid robot in response to a complicated working scenario. First, a vision sensor projects mission scenes onto virtual reality glasses for human-in-the-loop reactions. Second, a motion capture system is established to retarget eye-body synergic movements to a skeletal model. Third, real-time data transfer is realized through the publish-subscribe messaging mechanism of the robot operating system (ROS). Next, joint angles are computed through a fast mapping algorithm and sent to a slave controller through a serial port. Finally, visualization terminals make it convenient to compare the two motion systems.
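
As a minimal sketch of the ROS publish-subscribe step (using the standard rospy API; the topic name, node name and 50 Hz rate are assumptions, not details from the paper), the master side could publish retargeted joint angles like this:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

def publish_retargeted_joints(joint_names, angle_source, rate_hz=50):
    """Publish retargeted joint angles on a ROS topic so a slave-side node
    can subscribe and forward them to the robot controller.

    angle_source : callable returning the latest list of joint angles (rad)
    """
    # Topic and node names below are illustrative placeholders.
    pub = rospy.Publisher('/teleop/joint_targets', JointState, queue_size=10)
    rospy.init_node('natural_teaching_master')
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = joint_names
        msg.position = angle_source()  # angles from the mapping algorithm
        pub.publish(msg)
        rate.sleep()
```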

Findings

Experiments in various industrial mission scenes, such as approaching flanges, show the numerous advantages brought by natural teaching, including real-time operation, high accuracy, repeatability and dexterity.

Originality/value

The proposed paradigm realizes the natural cross-modal combination of perception information and enhances the working capacity and flexibility of industrial robots, paving a new way for effective robot teaching and autonomous learning.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 February 2013

Myagmarbayar Nergui, Yuki Yoshida, Nevrez Imamoglu, Jose Gonzalez, Masashi Sekine and Wenwei Yu

Abstract

Purpose

The aim of this paper is to develop autonomous mobile home healthcare robots that are capable of observing patients' motions, recognizing the patients' behaviours based on observation data, and automatically calling for medical personnel in emergency situations. The robots to be developed will bring cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs).

Design/methodology/approach

The paper develops the following control algorithms: algorithms for a mobile robot to track and follow human motions, to measure human joint trajectories and to calculate the angles of lower limb joints; and algorithms for recognizing human gait behaviours based on the calculated joint angle data.

Findings

A hidden Markov model (HMM)-based human gait behaviour recognition method taking lower limb joint angles and body angle as input was proposed. The proposed HMM-based gait behaviour recognition is compared with nearest neighbour (NN) classification methods. Experimental results showed that human gait behaviour recognition using an HMM can be achieved from the lower limb joint trajectories with higher accuracy than the compared classification methods.
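
A common recipe for this kind of classification (one HMM per behaviour, decided by maximum log-likelihood) can be sketched in Python with the hmmlearn library; the four hidden states and the input layout are assumptions, not the paper's reported configuration:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gait_models(sequences_by_class, n_states=4):
    """Fit one Gaussian HMM per gait behaviour; each training sequence is a
    (T, D) array of lower-limb joint angles plus body angle per frame."""
    models = {}
    for label, sequences in sequences_by_class.items():
        X = np.vstack(sequences)              # concatenated observations
        lengths = [len(s) for s in sequences]  # per-sequence frame counts
        model = GaussianHMM(n_components=n_states)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify_gait(models, sequence):
    """Assign the behaviour whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```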

Originality/value

The research addresses human motion tracking and recognition by a mobile robot. Gait behaviour recognition is HMM-based, using lower limb joint and body angle data extracted from a Kinect sensor on the mobile robot.

Details

International Journal of Intelligent Unmanned Systems, vol. 1 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 29 March 2022

Edwin Chng, Mohamed Raouf Seyam, William Yao and Bertrand Schneider

Abstract

Purpose

This study aims to uncover divergent collaboration in makerspaces, using social network analysis to examine ongoing social relations and sequential data pattern mining to investigate temporal changes in social activities.

Design/methodology/approach

While there is a significant body of qualitative work on makerspaces, there is a lack of quantitative research identifying productive interactions in open-ended learning environments. This study explores the use of high-frequency sensor data to capture divergent collaboration in a semester-long makerspace course, where students support each other while working on different projects.
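
As a sketch of the social-network side of such an analysis (a generic illustration, not the authors' method or metrics), the following Python fragment builds a graph from sensor-detected proximity events with networkx and uses node degree as a simple mixing-diversity proxy; the input format is an assumption:

```python
import networkx as nx

def interaction_diversity(proximity_events):
    """Build a social graph from proximity events and score each student's
    mixing diversity by the number of distinct partners.

    proximity_events : iterable of (student_a, student_b) pairs
    """
    G = nx.Graph()
    for a, b in proximity_events:
        # Accumulate repeated encounters as an edge weight.
        if G.has_edge(a, b):
            G[a][b]['weight'] += 1
        else:
            G.add_edge(a, b, weight=1)
    # Degree = number of distinct collaborators; a simple diversity proxy.
    return dict(G.degree())
```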

Findings

The main finding indicates that students who mix diversely with others performed better in the semester-long course. Additional results suggest that a certain balance of working individually, collaborating with other students and interacting with instructors maximizes performance, provided that sufficient alone time is committed to developing individual technical skills.

Research limitations/implications

These discoveries provide insight into how productive makerspace collaboration can occur within the framework of Divergent Collaboration Learning Mechanisms (Tissenbaum et al., 2017).

Practical implications

Identifying the diversity and sequence of social interactions could also increase instructor awareness of struggling students, and having this data in real time opens new doors for identifying (un)productive behaviors.

Originality/value

The contribution of this study is to explore the use of a sensor-based, data-driven, longitudinal approach in an ecologically valid setting to understand divergent collaboration in makerspaces. Finally, this study discusses how this work represents an initial step toward quantifying and supporting productive interactions in project-based learning environments.

Details

Information and Learning Sciences, vol. 123 no. 5/6
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 14 June 2013

Jie Liu

Abstract

Purpose

The purpose of this paper is to develop a robotic tooth brushing simulator mimicking realistic tooth brushing motions, thereby facilitating greater understanding of the generation of realistic tooth brushing motion for optimal design of toothbrushes.

Design/methodology/approach

Tooth brushing motions were measured with a motion capture system, and different brushing motion patterns were analysed. A series of elliptical motion segments was generated by interpolating ellipse-like trajectories, and a path generation algorithm for brushing simulation was proposed. A path planning system incorporating robot motion control was developed to simulate realistic tooth brushing. The generality and efficiency of the proposed algorithm were demonstrated through simulation and experimental results.
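
As an illustration of how ellipse-like motion segments can be chained along a brushing path (a minimal sketch, not the paper's interpolation algorithm; the semi-axis values and function names are assumptions):

```python
import numpy as np

def elliptical_segment(center, a, b, n_points=50, cycles=1):
    """Sample one ellipse-like brushing stroke in the tooth-surface plane.

    center : (x, y) centre of the stroke on the brushing path
    a, b   : semi-axes (stroke length and width, in mm; illustrative)
    """
    t = np.linspace(0.0, 2.0 * np.pi * cycles, n_points)
    x = center[0] + a * np.cos(t)
    y = center[1] + b * np.sin(t)
    return np.column_stack([x, y])

def brushing_path(waypoints, a=5.0, b=2.0):
    """Chain elliptical segments along waypoints sampled from the tooth
    surface, approximating a circular brushing stroke."""
    return np.vstack([elliptical_segment(w, a, b) for w in waypoints])
```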

Findings

The interpolation of ellipse-like trajectories can generate elliptical motion segments, and realistic tooth brushing can be achieved by integrating these segments into the path generated from the tooth surfaces. The brushing simulator demonstrated good reproducibility of clinically standardized tooth brushing.

Practical implications

A robotic toothbrush assessment system is a potential application of the robotic tooth brushing simulator, incorporating control of brushing variables including pressure, speed and temperature.

Originality/value

This study demonstrates the feasibility of using robotic simulation techniques for more realistic simulation of human tooth brushing motions, towards the optimal design of toothbrushes.

Details

Industrial Robot: An International Journal, vol. 40 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 June 2009

Atsushi Shimada, Madoka Kanouchi, Daisaku Arita and Rin‐Ichiro Taniguchi

Abstract

Purpose

The purpose of this paper is to present an approach that improves the accuracy of estimating feature points of the human body in a vision-based motion capture system (MCS) by using a variable-density self-organizing map (VDSOM).

Design/methodology/approach

The VDSOM is a kind of self-organizing map (SOM) that can learn training samples incrementally. The authors let the VDSOM learn the 3D feature points of the human body whenever the MCS estimates them correctly. When one or more 3D feature points cannot be estimated correctly, the VDSOM is used for a different purpose: a SOM, including the VDSOM, can recall part of a weight vector learned during training, and this ability is used to recall correct patterns and replace the incorrect feature points with the recalled ones.
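
As a sketch of the recall-and-complete step (using a plain SOM weight matrix for illustration rather than the VDSOM itself; the input layout is an assumption):

```python
import numpy as np

def recall_missing_points(weights, feature_vec, valid_mask):
    """Complete a partially estimated feature-point vector from a trained
    self-organizing map.

    weights     : (n_units, D) SOM weight vectors learned from correct postures
    feature_vec : (D,) flattened 3D feature points, with unreliable entries
    valid_mask  : (D,) boolean, True where the estimate is trusted
    """
    # Find the best-matching unit using only the trusted components.
    dists = np.linalg.norm(weights[:, valid_mask] - feature_vec[valid_mask],
                           axis=1)
    bmu = weights[np.argmin(dists)]
    # Replace unreliable components with the recalled weight-vector entries.
    completed = feature_vec.copy()
    completed[~valid_mask] = bmu[~valid_mask]
    return completed
```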

Findings

Experimental results show that the approach estimates human posture more robustly than the other methods.

Originality/value

The proposed approach is interesting for its combination of an MCS with incremental learning.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 2
Type: Research Article
ISSN: 1756-378X
