Search results
1 – 10 of over 2000
Darryl Charles, Katy Pedlow, Suzanne McDonough, Ka Shek and Therese Charles
Abstract
Purpose
The Leap Motion represents a new generation of depth sensing cameras designed for close range tracking of hands and fingers, operating with minimal latency and high spatial precision (0.01 mm). The purpose of this paper is to develop virtual reality (VR) simulations of three well-known hand-based rehabilitation tasks using a commercial game engine and utilising a Leap camera as the primary mode of interaction. The authors present results from an initial evaluation by professional clinicians of these VR simulations for use in their hand and finger physical therapy practice.
Design/methodology/approach
A cross-disciplinary team of researchers collaborated with a local software company to create three-dimensional interactive simulations of three hand-focused rehabilitation tasks: Cotton Balls, Stacking Blocks, and the Nine Hole Peg Test. These simulations were presented for evaluation to a group of eight physiotherapists and occupational therapists (n=8) based in the Regional Acquired Brain Injury Unit, Belfast Health and Social Care Trust. After induction, the clinicians attempted the tasks presented and provided feedback by filling out a questionnaire.
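The abstract does not describe the implementation, but the interaction loop for a task such as Stacking Blocks might look roughly like the Python sketch below. It is illustrative only: the frame fields (pinch_strength, palm_position), the thresholds and the Block class are hypothetical stand-ins, not names taken from the paper or from the Leap Motion SDK.

# Minimal sketch (not from the paper): driving a "Stacking Blocks"-style task
# from hand-tracking frames. The frame attributes and Block are hypothetical
# stand-ins for whatever the Leap SDK / game engine actually provides.
from dataclasses import dataclass

@dataclass
class Block:
    position: tuple          # (x, y, z) in mm
    held: bool = False

PINCH_THRESHOLD = 0.8        # assumed normalised pinch strength for a "grab"
GRAB_RADIUS_MM = 30.0        # assumed distance within which a block can be grabbed

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def update_task(frame, blocks):
    """Advance the simulation by one tracking frame.

    `frame` is assumed to expose frame.pinch_strength in [0, 1] and
    frame.palm_position as an (x, y, z) tuple in millimetres.
    """
    for block in blocks:
        if block.held:
            if frame.pinch_strength < PINCH_THRESHOLD:
                block.held = False                        # release the block
            else:
                block.position = frame.palm_position      # carry it with the hand
        elif (frame.pinch_strength >= PINCH_THRESHOLD
              and distance(frame.palm_position, block.position) < GRAB_RADIUS_MM):
            block.held = True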
Findings
Results from questionnaires (using a Likert scale 1-7, where 1 was the most favourable response) revealed a positive response to the simulations with an overall mean score across all questions equal to 2.59. Clinicians indicated that the system contained tasks that were easy to understand (mean score 1.88), and though it took several attempts to become competent, they predicted that they would improve with practice (mean score 2.25). In general, clinicians thought the prototypes provided a good illustration of the tasks required in their practice (mean score 2.38) and that patients would likely be motivated to use the system (mean score 2.38), especially young patients (mean score 1.63), and in the home environment (mean score 2.5).
Originality/value
Cameras offer an unobtrusive and low maintenance approach to tracking user motion in VR therapy in comparison to methods based on wearable technologies. This paper presents positive results from an evaluation of the new Leap Motion camera for input control of VR simulations or games. This mode of interaction provides a low cost, easy to use, high-resolution system for tracking fingers and hands, and has great potential for home-based physical therapies, particularly for young people.
Abstract
Purpose
This paper aims to provide a technical insight into a selection of robotic people detection technologies and applications.
Design/methodology/approach
Following an introduction, this paper first discusses people-sensing technologies which seek to extend the capabilities of human-robot collaboration by allowing humans to operate alongside conventional, industrial robots. It then provides examples of developments in people detection and tracking in unstructured, dynamic environments. Developments in people sensing and monitoring by assistive robots are then considered and finally, brief concluding comments are drawn.
Findings
Robotic people detection technologies are the topic of an extensive research effort and are becoming increasingly important, as growing numbers of robots interact directly with humans. These are being deployed in industry, in public places and in the home. The sensing requirements vary according to the application and range from simple person detection and avoidance to human motion tracking, behaviour and safety monitoring, individual recognition and gesture sensing. Sensing technologies include cameras, lasers and ultrasonics, and low cost RGB-D cameras are having a major impact.
Originality/value
This article provides details of a range of developments involving people sensing in the important and rapidly developing field of human-robot interactions.
Quentin Kevin Gautier, Thomas G. Garrison, Ferrill Rushton, Nicholas Bouck, Eric Lo, Peter Tueller, Curt Schurgers and Ryan Kastner
Abstract
Purpose
Digital documentation techniques of tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), are able to create precise three-dimensional models of excavations to complement traditional forms of documentation with millimeter to centimeter accuracy. However, these techniques require either expensive pieces of equipment or a long processing time that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms to create digital scans of archaeological excavations.
Design/methodology/approach
The authors used a class of algorithms called SLAM (Simultaneous Localization and Mapping) along with depth-sensing cameras. While these algorithms have improved considerably over recent years, the accuracy of the results still depends on the scanning conditions. The authors developed a prototype of a scanning device, collected 3D data at a Maya archaeological site and then refined the instrument in a system of natural caves. This article presents an analysis of the resulting 3D models to determine the effectiveness of the various sensors and algorithms employed.
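The authors' software is not included in the abstract; the Python/OpenCV sketch below illustrates the kind of frame-to-frame tracking step a feature-based RGB-D front end performs (detect features, back-project them through the depth image, estimate the new pose with PnP and RANSAC). The intrinsics, feature counts and overall structure are assumptions for illustration, not the authors' pipeline.

# Hedged sketch of one tracking step of a feature-based RGB-D front end
# (not the authors' implementation). Requires OpenCV and NumPy.
import cv2
import numpy as np

K = np.array([[525.0, 0, 319.5],          # placeholder pinhole intrinsics
              [0, 525.0, 239.5],
              [0, 0, 1.0]])

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track(prev_gray, prev_depth, cur_gray):
    """Estimate camera motion between two frames.

    prev_depth is a float array of depth in metres aligned with prev_gray.
    Returns (R, t) of the current camera relative to the previous one.
    """
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = matcher.match(des1, des2)

    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = prev_depth[int(v), int(u)]
        if z <= 0:                        # skip pixels with no depth reading
            continue
        # back-project the previous-frame keypoint into 3D
        x = (u - K[0, 2]) * z / K[0, 0]
        y = (v - K[1, 2]) * z / K[1, 1]
        pts3d.append([x, y, z])
        pts2d.append(kp2[m.trainIdx].pt)

    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec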
Findings
While not as accurate as commercial LiDAR systems, the prototype presented, employing a time-of-flight depth sensor and using a feature-based SLAM algorithm, is a rapid and effective way to document archaeological contexts at a fraction of the cost.
Practical implications
The proposed system is easy to deploy, provides real-time results and would be particularly useful in salvage operations as well as in high-risk areas where cultural heritage is threatened.
Originality/value
This article compares many different low-cost scanning solutions for underground excavations, along with presenting a prototype that can be easily replicated for documentation purposes.
Abstract
Purpose
The paper aims to describe the sensors used for interfacing with consumer electronic devices.
Design/methodology/approach
The paper describes the types of sensors employed in user‐interface devices such as trackballs, mice, touch pads, touch screens and gesture‐based systems. It concludes with a brief consideration of brain‐computer interface technology.
Findings
It is shown that a diverse range of sensors is used to interface with consumer electronics. They are based on optical, electrical, acoustic and solid‐state (MEMS) technologies. In the longer term, many may ultimately be replaced by sensors that interpret thought by detecting brain waves.
Originality/value
The paper provides a timely review of the sensors used to interface with consumer electronics. These constitute a very large and rapidly growing market.
Meiyin Liu, SangUk Han and SangHyun Lee
Abstract
Purpose
As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of human motion data capture for on-site motion sensing and analysis.
Design/methodology/approach
This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints on each image, the proposed method tracks the locations of the body joints over all successive frames by learning from the initialized body posture. The body joints corresponding to the tracked ones are then identified and matched on the image sequences from the other lens and reconstructed in 3D space through triangulation to build 3D skeleton models. For validation, a lab test was conducted to evaluate the accuracy and working ranges of the proposed method.
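As an illustration of the triangulation step described above, the following Python/OpenCV sketch reconstructs 3D joints from matched 2D joint locations in the two smartphone views. The projection matrices are assumed to come from a prior stereo calibration, and all names are placeholders rather than the authors' code.

# Illustrative triangulation of matched 2D joints from two calibrated views.
import cv2
import numpy as np

def triangulate_joints(P_left, P_right, joints_left, joints_right):
    """joints_* are (N, 2) arrays of pixel coordinates for the same N joints;
    P_left, P_right are 3x4 projection matrices. Returns an (N, 3) array."""
    pts_l = np.asarray(joints_left, dtype=np.float64).T    # shape (2, N)
    pts_r = np.asarray(joints_right, dtype=np.float64).T
    homog = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # (4, N)
    return (homog[:3] / homog[3]).T                         # dehomogenise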
Findings
Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can potentially be used for site data collection. Thus, the proposed approach has potential for various field analyses of construction workers’ safety and ergonomics.
Originality/value
Recently, motion capture technologies have rapidly been developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores the use of two smartphones as stereo cameras as a potentially suitable means of data collection in construction, owing to their fewer operational constraints (e.g. no on-body sensor required, less sensitivity to sunlight and flexible operating ranges).
Abstract
Purpose
In precision robotic assembly, visual sensing techniques have been widely used since they can detect large misalignments and also a part's shape at a distance. This paper develops two novel visual sensing methodologies.
Design/methodology/approach
Both systems consist of four components: an inside mirror, an outside mirror, a pair of plane mirrors and a camera with a collecting lens. The difference between the two is that system A adopts a pyramidal mirror configuration, while system B employs a conic one. Owing to this configuration difference, system A can obtain three-dimensional measurements of objects with only one image capture, while in addition to this functionality, system B is shown to be capable of capturing two omni-directional images. The measurement principles are described in detail and compared with each other.
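The abstract does not reproduce the measurement equations, but one hedged way to see why a single capture can yield three-dimensional data is to treat each mirror facet as a virtual camera viewing the part from a different direction. Under the usual rectified-stereo assumption (not stated in the paper), depth then follows from triangulation:

\[
Z \;=\; \frac{f\,b}{d},
\]

where \(f\) is the focal length of the collecting lens, \(b\) the effective baseline between two (virtual) viewpoints created by the mirrors and \(d\) the disparity between corresponding image points of the part. The exact geometry of the pyramidal and conic configurations will modify this relation, so the formula should be read only as an illustration of the principle.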
Findings
The image acquiring process is shown to easily detect the in situ status of each assembly action, while the recognition method is found to be effective to identify instantaneous misalignment between the peg and the hole. The results obtained from a series of experiments show that the proposed visual sensing methods are an effective means of detecting misalignment between mating parts even in the presence of self‐occlusion.
Practical implications
The proposed sensing methods should dramatically increase the rate of success when utilized in actual assembly processes.
Originality/value
Describes the development of two novel visual sensing methodologies.
Satoshi Saga, Hiroyuki Kajimoto and Susumu Tachi
Abstract
Purpose
The aim of this paper is to create a sensor that can measure contact status with higher resolution than ever before.
Design/methodology/approach
This paper proposes a new type of optical tactile sensor that can detect surface deformation with high precision by using the principle of the optical lever. A tactile sensor is constructed that utilizes the resolution of the camera to the maximum by using transparent silicone rubber as a deformable mirror surface and taking advantage of the reflection image.
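The amplification offered by the optical lever can be summarised with the standard relation, stated here as general background rather than quoted from the paper: a local tilt \(\theta\) of the reflecting silicone surface rotates the reflected ray by \(2\theta\), so after an optical path of length \(L\) the reflected feature shifts on the image plane by

\[
\Delta x \;=\; L\tan(2\theta) \;\approx\; 2L\theta \quad (\theta \ll 1),
\]

which converts a very small deformation-induced tilt into a displacement of many camera pixels, and is why the sensor can exploit the full resolution of the camera.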
Findings
It has been found in simulation that the sensor can sense the deformation caused by an object with a 1 percent error rate. In the current implementation, the error rate is 10 percent.
Research limitations/implications
This sensor can be used in a broad range of applications by combining it with other devices. As future work, the zero method will be applied, using active patterns, to obtain more accurate information.
Practical implications
By using transparent silicone rubber, the sensor enables a very simple, low-cost and high-resolution detection method. In addition, the simplicity of the sensor lends itself to various applications. For example, its transparency makes the sensor a light pathway, so it can also act as a contactless sensor or an interactive device.
Originality/value
The concept of a tactile sensing method is introduced which can utilize the resolution of a camera to the maximum possible extent and can detect surface deformation by using the principle of optical lever.
George Stockman, Jayson Payne, Jermil Sadler and Dirk Colbry
Abstract
Purpose
To report on the evaluation of error of a face matching system consisting of a 3D sensor for obtaining the surface of the face, and a two‐stage matching algorithm that matches the sensed surface to a model surface.
Design/methodology/approach
A rigid mannikin face that was otherwise fairly realistic was obtained, and several sensing and matching experiments were performed. Pose, position, lighting and face color were controlled.
Findings
The combined sensor-matching system typically reported correct face surface matches with a trimmed RMS error of 0.5 mm or less over a generous volume of parameter settings, including roll, pitch, yaw, position, lighting and face color. Error grew rapidly beyond this “approximately frontal” set of parameters. Mannikin results are compared to results with thousands of cases of real faces. The sensor accuracy is not a limiting component of the system, but supports the application well.
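The trimmed RMS figure can be read as follows: residuals between the sensed surface and the model surface are computed, the worst fraction is discarded and the RMS of the remainder is reported. A hedged Python sketch of such a computation is given below; the 90 percent trim fraction and the use of a k-d tree are assumptions, not details taken from the paper.

# Hedged sketch of a trimmed RMS surface-match error (not the authors' code).
# sensed and model are (N, 3) and (M, 3) arrays of 3D points in millimetres,
# assumed to be already aligned by the matching algorithm.
import numpy as np
from scipy.spatial import cKDTree

def trimmed_rms(sensed, model, keep=0.9):
    """RMS of the best `keep` fraction of point-to-nearest-point distances."""
    dists, _ = cKDTree(model).query(sensed)   # nearest model point per sensed point
    dists = np.sort(dists)[: int(keep * len(dists))]  # drop the worst residuals
    return float(np.sqrt(np.mean(dists ** 2)))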
Practical implications
The sensor supports the application well (except for the current cost). Equal error rates achieved appear to be practical for face verification.
Originality/value
No similar report is known for sensing faces.
J. Paul Siebert and Stephen J. Marshall
Abstract
Describes a non-contact optical sensing technology called C3D that is based on speckle texture projection photogrammetry. C3D has been applied to capturing all-round 3D models of the human body with high dimensional accuracy and photorealistic appearance. The essential strengths and limitations of the C3D approach are presented and the basic principles of this stereo-imaging approach are outlined, from image capture and basic 3D model construction to multi-view capture and all-round 3D model integration. A number of law enforcement, medical and commercial applications are described briefly, including prisoner 3D face models, maxillofacial and orofacial cleft assessment, breast imaging and foot scanning. Ongoing research in real-time capture and processing, and in model construction from naturally illuminated image sources, is also outlined.
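The abstract does not give the matching algorithm, but the role of the projected speckle texture is to give smooth skin enough local texture for dense stereo correspondence. The Python/OpenCV sketch below applies a generic dense matcher to a rectified, speckle-textured image pair; it is illustrative only, not the C3D implementation, and the matcher parameters are assumptions.

# Hedged sketch (not the C3D implementation): dense stereo matching of a
# rectified, speckle-textured image pair with OpenCV's semi-global matcher.
import cv2

def disparity_map(rect_left_gray, rect_right_gray):
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # assumed search range, must be divisible by 16
        blockSize=5)
    # compute() returns fixed-point disparities scaled by 16
    return matcher.compute(rect_left_gray, rect_right_gray).astype("float32") / 16.0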