
Search results

1 – 10 of over 2000
Article
Publication date: 9 September 2014

Close range depth sensing cameras for virtual reality based hand rehabilitation

Darryl Charles, Katy Pedlow, Suzanne McDonough, Ka Shek and Therese Charles

Abstract

Purpose

The Leap Motion represents a new generation of depth sensing cameras designed for close range tracking of hands and fingers, operating with minimal latency and high spatial precision (0.01 mm). The purpose of this paper is to develop virtual reality (VR) simulations of three well-known hand-based rehabilitation tasks using a commercial game engine and utilising a Leap camera as the primary mode of interaction. The authors present results from an initial evaluation by professional clinicians of these VR simulations for use in their hand and finger physical therapy practice.

Design/methodology/approach

A cross-disciplinary team of researchers collaborated with a local software company to create three-dimensional interactive simulations of three hand-focused rehabilitation tasks: Cotton Balls, Stacking Blocks, and the Nine Hole Peg Test. These simulations were presented for evaluation to a group of eight physiotherapists and occupational therapists (n=8) based in the Regional Acquired Brain Injury Unit, Belfast Health and Social Care Trust. After induction, the clinicians attempted the tasks presented and provided feedback by filling out a questionnaire.

Findings

Results from questionnaires (using a Likert scale 1-7, where 1 was the most favourable response) revealed a positive response to the simulations with an overall mean score across all questions equal to 2.59. Clinicians indicated that the system contained tasks that were easy to understand (mean score 1.88), and though it took several attempts to become competent, they predicted that they would improve with practice (mean score 2.25). In general, clinicians thought the prototypes provided a good illustration of the tasks required in their practice (mean score 2.38) and that patients would likely be motivated to use the system (mean score 2.38), especially young patients (mean score 1.63), and in the home environment (mean score 2.5).

Originality/value

Cameras offer an unobtrusive and low maintenance approach to tracking user motion in VR therapy in comparison to methods based on wearable technologies. This paper presents positive results from an evaluation of the new Leap Motion camera for input control of VR simulations or games. This mode of interaction provides a low cost, easy to use, high-resolution system for tracking fingers and hands, and has great potential for home-based physical therapies, particularly for young people.

Details

Journal of Assistive Technologies, vol. 8 no. 3
Type: Research Article
DOI: https://doi.org/10.1108/JAT-02-2014-0007
ISSN: 1754-9450

Keywords

  • Rehabilitation
  • Virtual reality
  • Gestures
  • Depth sensing camera
  • Fingers and hands
  • Games

Article
Publication date: 16 October 2017

Detecting humans in the robot workspace

Robert Bogue

Abstract

Purpose

This paper aims to provide a technical insight into a selection of robotic people detection technologies and applications.

Design/methodology/approach

Following an introduction, this paper first discusses people-sensing technologies which seek to extend the capabilities of human-robot collaboration by allowing humans to operate alongside conventional, industrial robots. It then provides examples of developments in people detection and tracking in unstructured, dynamic environments. Developments in people sensing and monitoring by assistive robots are then considered and finally, brief concluding comments are drawn.

Findings

Robotic people detection technologies are the topic of an extensive research effort and are becoming increasingly important, as growing numbers of robots interact directly with humans. These are being deployed in industry, in public places and in the home. The sensing requirements vary according to the application and range from simple person detection and avoidance to human motion tracking, behaviour and safety monitoring, individual recognition and gesture sensing. Sensing technologies include cameras, lasers and ultrasonics, and low cost RGB-D cameras are having a major impact.
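
As a concrete illustration of the simplest camera-based case mentioned above, detecting a person in a single RGB frame, the sketch below uses OpenCV's stock HOG pedestrian detector. It is a generic example, not the method of any system surveyed in the paper:

```python
# Generic sketch of camera-based person detection using OpenCV's
# built-in HOG + linear-SVM pedestrian detector; illustrative only,
# not a method from the paper.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("workspace.jpg")       # hypothetical input frame
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8))
for (x, y, w, h) in boxes:                # draw a box around each person
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```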

Originality/value

This article provides details of a range of developments involving people sensing in the important and rapidly developing field of human-robot interactions.

Details

Industrial Robot: An International Journal, vol. 44 no. 6
Type: Research Article
DOI: https://doi.org/10.1108/IR-07-2017-0132
ISSN: 0143-991X

Keywords

  • Sensors
  • Robotics
  • Safety
  • Detecting people

Article
Publication date: 27 May 2020

Low-cost 3D scanning systems for cultural heritage documentation

Quentin Kevin Gautier, Thomas G. Garrison, Ferrill Rushton, Nicholas Bouck, Eric Lo, Peter Tueller, Curt Schurgers and Ryan Kastner

Abstract

Purpose

Digital documentation techniques of tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), are able to create precise three-dimensional models of excavations to complement traditional forms of documentation with millimeter to centimeter accuracy. However, these techniques require either expensive pieces of equipment or a long processing time that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms to create digital scans of archaeological excavations.

Design/methodology/approach

The authors used a class of algorithms called SLAM (Simultaneous Localization and Mapping) along with depth-sensing cameras. While these algorithms have improved considerably over recent years, the accuracy of the results still depends on the scanning conditions. The authors developed a prototype of a scanning device, collected 3D data at a Maya archaeological site and refined the instrument in a system of natural caves. This article presents an analysis of the resulting 3D models to determine the effectiveness of the various sensors and algorithms employed.
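
As a rough illustration of the kind of feature-based front end such a SLAM system builds on, the sketch below estimates frame-to-frame camera motion from an RGB-D pair using OpenCV's ORB features and a RANSAC PnP solver. It is a simplified stand-in under assumed pinhole intrinsics, not the authors' pipeline:

```python
# Simplified sketch of one feature-based RGB-D odometry step, the kind
# of front end a feature-based SLAM system builds on. Assumes pinhole
# intrinsics (fx, fy, cx, cy) and depth in metres; illustrative only,
# not the authors' implementation.
import cv2
import numpy as np

def pose_from_rgbd_pair(rgb0, depth0, rgb1, fx, fy, cx, cy):
    orb = cv2.ORB_create(1000)
    kp0, des0 = orb.detectAndCompute(rgb0, None)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)

    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp0[m.queryIdx].pt
        z = float(depth0[int(v), int(u)])
        if z <= 0:                        # skip invalid depth readings
            continue
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp1[m.trainIdx].pt)

    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.array(pts3d), np.array(pts2d), K, None)
    return rvec, tvec                     # relative camera motion
```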

Findings

While not as accurate as commercial LiDAR systems, the prototype presented, employing a time-of-flight depth sensor and using a feature-based SLAM algorithm, is a rapid and effective way to document archaeological contexts at a fraction of the cost.

Practical implications

The proposed system is easy to deploy, provides real-time results and would be particularly useful in salvage operations as well as in high-risk areas where cultural heritage is threatened.

Originality/value

This article compares many different low-cost scanning solutions for underground excavations, along with presenting a prototype that can be easily replicated for documentation purposes.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. 10 no. 4
Type: Research Article
DOI: https://doi.org/10.1108/JCHMSD-03-2020-0032
ISSN: 2044-1266

Keywords

  • Archaeology
  • Cultural heritage
  • Documentation
  • Surveying and recording
  • Mapping

Article
Publication date: 29 March 2011

Sensors for interfacing with consumer electronics

Robert Bogue

Abstract

Purpose

The paper aims to describe the sensors used for interfacing with consumer electronic devices.

Design/methodology/approach

The paper describes the types of sensors employed in user‐interface devices such as trackballs, mice, touch pads, touch screens and gesture‐based systems. It concludes with a brief consideration of brain‐computer interface technology.

Findings

It is shown that a diverse range of sensors is used to interface with consumer electronics. They are based on optical, electrical, acoustic and solid‐state (MEMS) technologies. In the longer term, many may ultimately be replaced by sensors that interpret thought by detecting brain waves.

Originality/value

The paper provides a timely review of the sensors used to interface with consumer electronics. These constitute a very large and rapidly growing market.

Details

Sensor Review, vol. 31 no. 2
Type: Research Article
DOI: https://doi.org/10.1108/02602281111109952
ISSN: 0260-2288

Keywords

  • Sensors
  • MEMS
  • Man machine interface
  • Electrical goods

Article
Publication date: 9 September 2014

Editorial

Chris Abbott

Details

Journal of Assistive Technologies, vol. 8 no. 3
Type: Research Article
DOI: https://doi.org/10.1108/JAT-08-2014-0018
ISSN: 1754-9450

Article
Publication date: 11 July 2016

Tracking-based 3D human skeleton extraction from stereo video camera toward an on-site safety and ergonomic analysis

Meiyin Liu, SangUk Han and SangHyun Lee

Abstract

Purpose

As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of human motion data capture for on-site motion sensing and analysis.

Design/methodology/approach

This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints on each image, the proposed method tracks locations of the body joints over all the successive frames by learning from the initialized body posture. The body joints corresponding to the tracked ones are then identified and matched in the image sequence from the other lens and reconstructed in 3D space through triangulation to build 3D skeleton models. For validation, a lab test is conducted to evaluate the accuracy and working range of the proposed method.
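
The triangulation step described above, lifting a body joint from its matched 2D locations in the two views to a 3D point, can be sketched with OpenCV as follows. The projection matrices are assumed to come from a prior stereo calibration, which the abstract does not detail:

```python
# Sketch of the triangulation step: matched 2D body-joint locations
# from the left and right views are lifted to 3D points. P_left and
# P_right are 3x4 projection matrices from a prior stereo calibration
# (assumed here; the abstract does not give calibration details).
import cv2
import numpy as np

def triangulate_joints(P_left, P_right, joints_left, joints_right):
    # joints_*: float arrays of shape (2, N) holding pixel coordinates
    pts_h = cv2.triangulatePoints(P_left, P_right, joints_left, joints_right)
    return (pts_h[:3] / pts_h[3]).T       # (N, 3) joint positions in 3D
```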

Findings

Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can be potentially used for site data collection. Thus, the proposed approach has a potential for various field analyses for construction workers’ safety and ergonomics.

Originality/value

Recently, motion capture technologies have rapidly been developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores the use of two smartphones as a stereo camera pair, a potentially suitable means of data collection in construction owing to fewer operational constraints (e.g. no on-body sensors required, less sensitivity to sunlight and flexible operating ranges).

Details

Construction Innovation, vol. 16 no. 3
Type: Research Article
DOI: https://doi.org/10.1108/CI-10-2015-0054
ISSN: 1471-4175

Keywords

  • Ergonomics
  • Construction safety
  • Stereo vision
  • Computer vision
  • 3D human skeleton extraction
  • Motion tracking

Article
Publication date: 1 March 2005

Novel visual sensing systems for overcoming occlusion in robotic assembly

J.Y. Kim

Abstract

Purpose

In precision robotic assembly, visual sensing techniques have been widely used since they can detect large misalignments and a part's shape at a distance. This paper develops two novel visual sensing methodologies.

Design/methodology/approach

Both systems consist of four components: an inside mirror, an outside mirror, a pair of plane mirrors and a camera with a collecting lens. The difference between the two is that system A adopts a pyramidal mirror configuration, while system B employs a conic one. Owing to this configuration difference, system A can obtain three‐dimensional measurements of objects with a single image capture, while in addition to this functionality system B is shown to be capable of capturing two omni‐directional images. The measurement principles are described in detail and compared with each other.

Findings

The image acquiring process is shown to easily detect the in situ status of each assembly action, while the recognition method is found to be effective in identifying instantaneous misalignment between the peg and the hole. The results obtained from a series of experiments show that the proposed visual sensing methods are an effective means of detecting misalignment between mating parts, even in the presence of self‐occlusion.

Practical implications

The proposed sensing methods will dramatically increase the rate of success when actually utilized in assembly processes.

Originality/value

Describes the development of two novel visual sensing methodologies.

Details

Assembly Automation, vol. 25 no. 1
Type: Research Article
DOI: https://doi.org/10.1108/01445150510578978
ISSN: 0144-5154

Keywords

  • Sensors
  • Robotics
  • Assembly
  • Optical instruments

Article
Publication date: 30 January 2007

High‐resolution tactile sensor using the deformation of a reflection image

Satoshi Saga, Hiroyuki Kajimoto and Susumu Tachi

Abstract

Purpose

The aim of this paper is to create a sensor that can measure contact status at a higher resolution than ever before.

Design/methodology/approach

This paper proposes a new type of optical tactile sensor that can detect surface deformation with high precision by using the principle of the optical lever. A tactile sensor is constructed that utilizes the resolution of a camera to the maximum by using transparent silicone rubber as a deformable mirror surface and taking advantage of the reflection image.
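
For reference, the optical-lever principle invoked here can be stated in one line: a small tilt of the reflecting silicone surface deflects the reflected ray by twice the tilt angle, so the reflection image seen by the camera shifts by an amount amplified by the optical path length. With tilt \theta and path length L (symbols chosen here for illustration, not taken from the paper):

```latex
% Small-angle optical-lever relation: a surface tilt \theta deflects
% the reflected ray by 2\theta, shifting the image by approximately
s \approx 2 L \theta
```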

Findings

Simulation shows that the sensor can detect the deformation caused by an object with an error rate of 1 percent. In the current implementation, the error rate is 10 percent.

Research limitations/implications

This sensor can be used in a broad range of applications by combining it with other devices. In future work, the zero method will be applied, using active patterns to obtain more accurate information.

Practical implications

Using transparent silicone rubber, the sensor enables a very simple, low-cost and high-resolution detection method. In addition, the simplicity of the sensor opens up various applications. For example, its transparency makes the sensor a light pathway, so it can also act as a contactless sensor or an interactive device.

Originality/value

The concept of a tactile sensing method is introduced which utilizes the resolution of a camera to the maximum possible extent and detects surface deformation by using the principle of the optical lever.

Details

Sensor Review, vol. 27 no. 1
Type: Research Article
DOI: https://doi.org/10.1108/02602280710723451
ISSN: 0260-2288

Keywords

  • Sensors
  • Measurement
  • Tactile sensors
  • Image sensors

Article
Publication date: 1 April 2006

Error measurement and analysis for a 3D face surface matching system

George Stockman, Jayson Payne, Jermil Sadler and Dirk Colbry

Abstract

Purpose

To report on the evaluation of error of a face matching system consisting of a 3D sensor for obtaining the surface of the face, and a two‐stage matching algorithm that matches the sensed surface to a model surface.

Design/methodology/approach

A rigid, but otherwise fairly realistic, mannikin face was obtained, and several sensing and matching experiments were performed. Pose, position, lighting and face color were controlled.

Findings

The combined sensor‐matching system typically reported correct face surface matches with trimmed RMS error of 0.5 mm or less for a generous volume of parameters, including roll, pitch, yaw, position, lighting, and face color. Error grew rapidly beyond this “approximately frontal” set of parameters. Mannikin results are compared to results with thousands of cases of real faces. The sensor accuracy is not a limiting component of the system, but supports the application well.
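
A trimmed RMS error of the kind quoted above discards the worst residuals before averaging, which makes the score robust to a few bad surface correspondences. A minimal sketch of such a metric, with an illustrative trim fraction that is not necessarily the paper's setting:

```python
# Minimal sketch of a trimmed RMS error: drop the worst residuals,
# then take the RMS of the remainder. The 10% trim fraction is an
# illustrative assumption, not necessarily the paper's setting.
import numpy as np

def trimmed_rms(residuals_mm, trim_fraction=0.10):
    r = np.sort(np.abs(np.asarray(residuals_mm, dtype=float)))
    keep = int(len(r) * (1.0 - trim_fraction))
    return float(np.sqrt(np.mean(r[:keep] ** 2)))
```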

Practical implications

The sensor supports the application well (except for its current cost). The equal error rates achieved appear to be practical for face verification.

Originality/value

No similar report is known for sensing faces.

Details

Sensor Review, vol. 26 no. 2
Type: Research Article
DOI: https://doi.org/10.1108/02602280610652703
ISSN: 0260-2288

Keywords

  • Biodata
  • Sensors
  • Error analysis
  • Pattern recognition

Article
Publication date: 1 September 2000

Human body 3D imaging by speckle texture projection photogrammetry

J. Paul Siebert and Stephen J. Marshall

Abstract

Describes a non‐contact optical sensing technology called C3D that is based on speckle texture projection photogrammetry. C3D has been applied to capturing all‐round 3D models of the human body of high dimensional accuracy and photorealistic appearance. The essential strengths and limitations of the C3D approach are presented and the basic principles of this stereo‐imaging approach are outlined, from image capture and basic 3D model construction to multi‐view capture and all‐round 3D model integration. A number of law enforcement, medical and commercial applications are described briefly, including prisoner 3D face models, maxillofacial and orofacial cleft assessment, breast imaging and foot scanning. Ongoing research in real‐time capture and processing, and model construction from naturally illuminated image sources is also outlined.
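
The depth-recovery step underlying this stereo-imaging approach follows the standard rectified-stereo relation: once a projected speckle feature is matched between the two views, its range follows from the disparity. With focal length f, camera baseline B and disparity d (generic symbols, not taken from the paper):

```latex
% Standard rectified-stereo depth relation: range Z from focal
% length f, baseline B and matched-feature disparity d.
Z = \frac{f\,B}{d}
```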

Details

Sensor Review, vol. 20 no. 3
Type: Research Article
DOI: https://doi.org/10.1108/02602280010372368
ISSN: 0260-2288

Keywords

  • Vision
  • Medical applications
  • 3D image processing
  • VR
