Search results
1 – 10 of over 2000

Miyeon Lee, Dong Il Yoo and Sungmin Kim
Abstract
Purpose
The purpose of this paper is to develop a relatively inexpensive and easily movable three-dimensional (3D) body scanner.
Design/methodology/approach
Multiple depth perception cameras and a turntable were used to form the hardware and a client-server computer network was used to control the hardware.
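The turntable design implies a simple geometric alignment step: each scan captured at a known table angle must be rotated back into a common frame before the views are merged. A minimal sketch of that step, not the authors' code; the vertical-axis convention and function names are assumptions:

```python
import numpy as np

def turntable_to_world(points_cam, angle_deg):
    """Undo the turntable rotation (about the vertical y-axis) so a scan
    captured at angle_deg lands in the common angle-0 frame."""
    t = np.deg2rad(-angle_deg)            # inverse of the table rotation
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return points_cam @ R.T

def merge_scans(scans):
    """Merge scans taken at evenly spaced turntable angles into one cloud."""
    step = 360.0 / len(scans)
    return np.vstack([turntable_to_world(p, i * step)
                      for i, p in enumerate(scans)])
```

In the real system this rigid alignment would only be an initial guess; the semi-automatic model alignment mentioned under research limitations then refines it, which is where some of the reported error arises.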
Findings
A portable and inexpensive yet quite accurate body scanner system has been developed.
Research limitations/implications
The turntable mechanism and semi-automatic model alignment caused some error.
Practical implications
This scanner is expected to facilitate easy acquisition of 3D human body or garment data for various research projects.
Social implications
Many researchers may gain easy access to 3D data of large objects such as a whole body or garment.
Originality/value
An inexpensive yet expandable scanning system has been developed using readily available components.
Abstract
A robotics team at NASA’s Johnson Space Center in Houston, Texas, under the direction of Dr Robert Ambrose, is developing a new breed of space robots called Robonaut. Robonaut, designed to be as human‐like as possible, will be controlled by telepresence and will work in extravehicular activity (EVA) environments, allowing astronauts to remain safely inside the spacecraft.
Abstract
Purpose
Increasing demand on rail transport speeds up the introduction of new technical systems to optimize the rail traffic and increase competitiveness. Remote control of trains is seen as a potential layer of resilience in railway operations. It allows for operating and controlling automated trains and communicating and coordinating with other stakeholders of the railway system. This paper aims to present the first results of a multi-phased simulator study on the development and optimization of remote train driving concepts from the operators’ point of view.
Design/methodology/approach
The presented concept was developed by benchmarking good practices. Two phases of iterative user tests were conducted to evaluate the user experience and preferences of the developed human-machine-interface concept. Basic training requirements were identified and evaluated.
Findings
Results indicate positive feedback on the overall system as a fallback solution. The HMI elicited positive emotions regarding pleasure and dominance, but low arousal levels. Train drivers held more conservative views of the system than signalers and students. The training activities increased future operators' awareness and understanding of the system. Including potential users in the development of future systems can improve user acceptance. The iterative user experiments were useful in capturing some of the needs and preferences of different user groups.
Originality/value
Multi-phase user tests were conducted to identify and evaluate the requirements and preferences of remote operators using a simplified HMI. The training analysis highlights important aspects to consider when training future users.
Aleksei Moskvin, Mariia Moskvina and Victor Kuzmichev
Abstract
Purpose
Digital technologies are widely used for the digitization of museum and archival heritage and the creation of digital, multimedia and online exhibitions, especially in costume history. Digital exhibitions require the historical dress forms that were used in the past for costume presentation. The purpose of this paper is to develop a new method for parametric modeling of nineteenth-century dress forms in accordance with the fashionable body shape.
Design/methodology/approach
Due to the limited number of body measurements in historical sizing tables, it is impossible to reconstruct the morphology of the old fashionable body with high accuracy using contemporary CAD alone. The developed method draws on two sources of information: first, historical sizing tables with body measurements; second, historical corsets. By combining both sources and applying virtual try-on technology, a full anthropometric database of the nineteenth-century fashionable body shape was compiled and a parametric model of the historical dress form was generated.
Findings
A digital replica of a deformable parametric dress form was created automatically in accordance with the historical sizing systems and the construction of the corsets. The historical dress form was reproduced with high accuracy thanks to the substantial advantages of contemporary software.
Originality/value
This study shows a new way of generating anthropometric data from the construction of close-fitting and compression undergarments. The developed method and the new database can be applied to every type of dress form used in the second half of the nineteenth century to generate its digital replica in virtual reality. The new approach joins digital technologies with professional knowledge as an important part of cultural heritage for studying, recreating and presenting historical costume.
Abstract
Purpose
This paper aims to propose a new solution for real-time 3D perception with a monocular camera. Most industrial robot solutions use active sensors to acquire 3D structure information, which limits their application to indoor scenarios. Using only a monocular camera, some state-of-the-art methods provide up-to-scale 3D structure information, but the scale of the corresponding objects remains uncertain.
Design/methodology/approach
First, high-accuracy, scale-informed camera poses and a sparse 3D map are obtained by leveraging ORB-SLAM and a marker. Second, for each frame captured by the camera, a specially designed depth estimation pipeline computes the corresponding 3D structure, called a depth map, in real time. Finally, each depth map is integrated into a volumetric scene model. A feedback module lets users visualize the intermediate scene surface in real time.
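Integrating each depth map into a volumetric scene model is commonly done with a truncated signed distance function (TSDF), updated as a running weighted average per voxel. A simplified sketch under assumed conventions (pinhole intrinsics K, camera-to-world pose T_wc); this illustrates the general technique, not the paper's implementation:

```python
import numpy as np

def integrate_depth(tsdf, weights, voxel_centers, depth_map, K, T_wc, trunc=0.05):
    """Fuse one depth map into a TSDF volume via a running weighted average.
    voxel_centers: (N, 3) world coordinates; tsdf, weights: (N,) arrays."""
    T_cw = np.linalg.inv(T_wc)
    pts_c = (T_cw[:3, :3] @ voxel_centers.T + T_cw[:3, 3:4]).T   # world -> camera
    z = pts_c[:, 2]
    uv = (K @ pts_c.T).T
    safe_z = np.where(z == 0, 1.0, z)                # avoid division by zero
    u = np.round(uv[:, 0] / safe_z).astype(int)
    v = np.round(uv[:, 1] / safe_z).astype(int)
    H, W = depth_map.shape
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(ok, depth_map[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], 0.0)
    sdf = d - z                                      # distance in front of the surface
    ok &= (d > 0) & (sdf > -trunc)                   # drop voxels far behind the surface
    s = np.clip(sdf / trunc, -1.0, 1.0)
    tsdf[ok] = (tsdf[ok] * weights[ok] + s[ok]) / (weights[ok] + 1.0)
    weights[ok] += 1.0
```

The zero crossing of the fused field then gives the intermediate scene surface that the feedback module can render.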
Findings
The system provides more robust tracking performance and compelling results. The implementation runs at nearly 25 Hz on a mainstream laptop using parallel computation techniques.
Originality/value
A new solution for 3D perception uses a monocular camera by leveraging the ORB-SLAM system. The results are visually comparable to those of active-sensor systems such as ElasticFusion in small scenes. The system is both efficient and easy to implement, and the algorithms and specific configurations involved are introduced in detail.
William V. Pelfrey Jr and Steven Keener
Abstract
Purpose
The importance of body-worn cameras (BWCs) in policing cannot be overstated. This is not a hyperbolic statement – use-of-force incidents in Ferguson and Baltimore, the ensuing riots, and the critical long-term implications for police–community relations demonstrate the need for BWC data. Few studies have been published on the use of BWCs, and little is known about officer perceptions, administrator decision making and agency use of BWC data. No published studies incorporate qualitative data, which lends important context and depth to the interpretation of officer survey data. The paper aims to discuss these issues.
Design/methodology/approach
The current paper presents a mixed-method study of a large university police agency prior to full BWC implementation. A survey of patrol officers and supervisors, using a census approach with near-full participation, coupled with focus group interviews, produced data on perceptions, concerns and expectations of full BWC implementation.
Findings
Findings point to officer concerns regarding the utilization of BWC data and administrative expectations regarding complaint reduction and officer assessment.
Originality/value
Important implications regarding training and policy are presented. BWC data represent an important tool for agency decision makers but also have numerous potential negative uses. Understanding officer concerns juxtaposed with administrator expectations, through both survey and qualitative data, advances the knowledge on BWCs.
Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu
Abstract
Purpose
For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy is closely related to the performance of the sensors configured on the vehicle. To further enhance sensor performance and improve the accuracy of environmental perception, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which can reduce the false rate resulting from sensor misdetection.
Design/methodology/approach
First, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.
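One plausible reading of such a fusion step is a cross-check in which a radar detection is trusted only when lidar returns corroborate it, so isolated radar ghosts are rejected. The toy sketch below implements that idea; the thresholds and function names are assumptions for illustration, not the paper's model:

```python
import numpy as np

def filter_radar_by_lidar(radar_xy, lidar_xy, radius=1.0, min_points=5):
    """Keep only radar detections with at least min_points lidar returns
    within radius metres; isolated detections are treated as false alarms."""
    kept = []
    for det in radar_xy:
        dist = np.linalg.norm(lidar_xy - det, axis=1)
        if np.count_nonzero(dist < radius) >= min_points:
            kept.append(det)
    return np.array(kept).reshape(-1, 2)
```

The symmetric check, discarding dense lidar clusters (dust or mist) that no radar return supports, would follow the same pattern with the roles swapped.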
Findings
The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with the use of a single sensor, can filter out radar false alarms and the dust or mist point clouds received by lidar. As a result, object detection accuracy is also improved under challenging conditions.
Originality/value
The multi-layer self-calibration method improves calibration accuracy and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by lidar, which improves ICVs' performance in challenging conditions.
Bartłomiej Kulecki, Kamil Młodzikowski, Rafał Staszak and Dominik Belter
Abstract
Purpose
The purpose of this paper is to propose and evaluate the method for grasping a defined set of objects in an unstructured environment. To this end, the authors propose the method of integrating convolutional neural network (CNN)-based object detection and the category-free grasping method. The considered scenario is related to mobile manipulating platforms that move freely between workstations and manipulate defined objects. In this application, the robot is not positioned with respect to the table and manipulated objects. The robot detects objects in the environment and uses grasping methods to determine the reference pose of the gripper.
Design/methodology/approach
The authors implemented the whole pipeline which includes object detection, grasp planning and motion execution on the real robot. The selected grasping method uses raw depth images to find the configuration of the gripper. The authors compared the proposed approach with a representative grasping method that uses a 3D point cloud as an input to determine the grasp for the robotic arm equipped with a two-fingered gripper. To measure and compare the efficiency of these methods, the authors measured the success rate in various scenarios. Additionally, they evaluated the accuracy of object detection and pose estimation modules.
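The pipeline structure described here, detect the target in RGB, crop the corresponding depth region, plan a grasp and execute it, can be sketched with injected stand-ins for each stage. The callables below are placeholders for the CNN detector, the category-free grasp planner and the robot driver, not the authors' actual interfaces:

```python
def pick_object(rgb, depth, target_label, detect, plan_grasp, execute):
    """Run one detect -> crop -> plan -> execute cycle for target_label.

    detect(rgb)      -> list of (label, (x0, y0, x1, y1)) bounding boxes
    plan_grasp(roi)  -> gripper pose computed from a raw depth crop
    execute(pose)    -> True on a successful grasp
    """
    for label, (x0, y0, x1, y1) in detect(rgb):
        if label == target_label:
            roi = depth[y0:y1, x0:x1]        # raw depth crop fed to the planner
            return execute(plan_grasp(roi))
    return False                             # target not seen in this frame
```

Keeping the stages behind plain callables is what makes the comparison in the paper possible: a depth-image-based planner and a point-cloud-based planner can be swapped in without touching the rest of the pipeline.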
Findings
The performed experiments revealed that the CNN-based object detection and the category-free grasping methods can be integrated to obtain the system which allows grasping defined objects in the unstructured environment. The authors also identified the specific limitations of neural-based and point cloud-based methods. They show how the determined properties influence the performance of the whole system.
Research limitations/implications
The authors identified the limitations of the proposed methods and the improvements are envisioned as part of future research.
Practical implications
The evaluation of the grasping and object detection methods on the mobile manipulating robot may be useful for all researchers working on the autonomy of similar platforms in various applications.
Social implications
The proposed method increases the autonomy of robots in small-industry applications involving repetitive tasks in noisy and potentially risky environments. This reduces the human workload in such environments.
Originality/value
The main contribution of this research is the integration of the state-of-the-art methods for grasping objects with object detection methods and evaluation of the whole system on the industrial robot. Moreover, the properties of each subsystem are identified and measured.
Laura Duarte, Mohammad Safeea and Pedro Neto
Abstract
Purpose
This paper proposes a novel method for human hands tracking using data from an event camera. The event camera detects changes in brightness, measuring motion, with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms reporting three-dimensional (3D) hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human–robot interactions and in obstacle avoidance for human–robot safety applications.
Design/methodology/approach
Event data are pre-processed into intensity frames. Regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are then extracted for use in depth perception.
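The pre-processing just described, accumulating events into an intensity-like frame and bounding the region of high event activity, can be sketched as follows. This is a minimal illustration; the event tuple layout and the activity threshold are assumptions:

```python
import numpy as np

def events_to_frame(events, shape):
    """Accumulate (x, y, polarity) events into an intensity-like frame."""
    frame = np.zeros(shape)
    for x, y, polarity in events:
        frame[y, x] += 1.0 if polarity else -1.0
    return frame

def activity_roi(frame, thresh=2):
    """Bounding box (x0, y0, x1, y1) of pixels whose absolute accumulated
    activity reaches thresh; sparse noise events fall below it."""
    ys, xs = np.nonzero(np.abs(frame) >= thresh)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Thresholding on accumulated activity is what yields the large data reduction reported in the findings: pixels that fire only once or twice are dropped before feature extraction.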
Findings
Event-based tracking of the human hand was demonstrated to be feasible in real time and at low computational cost. The proposed ROI-finding method reduces noise from intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), computed using dynamic time warping with a single event camera, ranges from 15 to 30 millimetres, depending on the plane in which it is measured.
Originality/value
Tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features.
Abstract
Purpose
To explore the phenomenon of stereoscopic vision and its exploitation in engineering and other professional applications, and in entertainment.
Design/methodology/approach
Starts with a review of how stereo vision works, and the techniques used in 3D movies to present the illusion of depth and movement at right angles to the screen. Looks at some engineering products that build on these techniques, and then at the development of 3D television, based on a different image separation method. Finally looks at developments in stereo machine vision.
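The geometry underlying both stereo machine vision and the depth illusion in 3D film is triangulation: with stereo baseline B, focal length f (in pixels) and a measured disparity d between the left and right views, depth follows as Z = f·B/d. A minimal sketch, with illustrative names:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate metric depth from stereo disparity: Z = f * B / d.
    Small disparities map to large depths, which is why depth resolution
    degrades with distance."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The same relation explains the viewing fatigue noted in the findings: small disparity errors or crosstalk between the two images translate into large, conflicting depth cues.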
Findings
A variety of techniques exist to present left and right views of a scene to the correct eyes and stimulate 3D perception: for example, light‐filtering, alternate‐frame sequencing and optical separation. Fatigue occurs when there is crosstalk between those images, or when the images are presented at too low a frame rate. Many computer modelling software providers produce programs with 3D‐viewing capability for professional engineers. There are some exciting recent developments, such as add‐on PC stereo systems, and 3D TV.
Originality/value
Makes the general scientist aware of the wide range of professional uses of stereo vision, and of the engineering challenges behind 3D film and television.