Search results
1–10 of over 23,000 results

Akif Hacinecipoglu, Erhan Ilhan Konukseven and Ahmet Bugra Koku
Abstract
Purpose
This study aims to develop a real-time algorithm that can detect people even in arbitrary poses. To cope with poor and changing light conditions, it does not rely on color information. The developed method is expected to run on computers with low computational resources so that it can be deployed on autonomous mobile robots.
Design/methodology/approach
The method is designed as a people-detection pipeline with a series of operations. Efficient point cloud processing steps with a novel head extraction operation provide possible head clusters in the scene. Classifying these clusters with support vector machines yields a high-speed and robust people detector.
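The abstract does not give implementation details; as a rough illustration of the cluster-classification stage, the sketch below extracts simple geometric features from the top slice of a candidate point cluster and applies a hand-set linear decision function standing in for the trained SVM. All thresholds and weights here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def head_candidate_features(cloud):
    """Simple geometric features for a candidate cluster.

    cloud: (N, 3) array of x, y, z points, z pointing up.  The top
    25 cm slice is taken as the head candidate, mimicking a
    height-based head extraction step (the 25 cm cut is an assumption).
    """
    z_max = cloud[:, 2].max()
    head = cloud[cloud[:, 2] > z_max - 0.25]
    width = np.ptp(head[:, 0])   # x extent of the head slice
    depth = np.ptp(head[:, 1])   # y extent of the head slice
    return np.array([width, depth, z_max])

def is_person(cloud, w=np.array([-4.0, -4.0, 1.0]), b=0.0):
    """Linear decision function standing in for the trained SVM:
    penalises wide/deep head candidates, rewards person-like height."""
    return float(w @ head_candidate_features(cloud) + b) > 0.0

# A person-like cluster (narrow head slice around 1.7 m) and a
# box-like cluster (wide, low top surface):
person = np.array([[0.0, 0.0, 0.0], [0.2, 0.2, 1.0],
                   [0.0, 0.0, 1.55], [0.18, 0.2, 1.7], [0.05, 0.1, 1.6]])
box = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0],
                [0.0, 0.0, 0.8], [1.0, 1.0, 0.8]])
```

In the real pipeline the weights would come from SVM training on labeled head clusters rather than being set by hand.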
Findings
The method is implemented on an autonomous mobile robot, and results show that it can detect people at a frame rate of 28 Hz with an equal error rate of 92 per cent. In various non-standard poses, the detector is still able to classify people effectively.
Research limitations/implications
The main limitations are point clouds resembling head shapes, which cause false positives, and disruptive accessories (such as large hats), which cause false negatives. Still, these can be overcome with sufficient training samples.
Practical implications
The method can be used in industrial and social mobile applications because of its robustness, low resource needs and low power consumption.
Originality/value
The paper introduces a novel and efficient technique to detect people in arbitrary poses, under poor light conditions and with low computational resources. Solving all these problems in a single, lightweight method makes the study fulfill an important need for collaborative and autonomous mobile robots.
Charlie D. Frowd, David White, Richard I. Kemp, Rob Jenkins, Kamran Nawaz and Kate Herold
Abstract
Purpose
Research suggests that memory for unfamiliar faces is pictorial in nature, with recognition negatively affected by changes to image-specific information such as head pose, lighting and facial expression. Further, within-person variation causes some images to resemble a subject more than others. Here, the purpose of this paper is to explore the impact of target-image choice on face construction using a modern evolving type of composite system, EvoFIT.
Design/methodology/approach
Participants saw an unfamiliar target identity and then created a single composite of it the following day with EvoFIT by repeatedly selecting from arrays of faces with “breeding”, to “evolve” a face. Targets were images that had been previously categorised as low, medium or high likeness, or a face prototype comprising averaged photographs of the same individual.
Findings
Identification of composites of low likeness targets was inferior but increased as a significant linear trend from low to medium to high likeness. Also, identification scores decreased when targets changed by pose and expression, but not by lighting. Similarly, composite identification from prototypes was more accurate than those from low likeness targets, providing some support that image averages generally produce more robust memory traces.
Practical implications
The results emphasise the potential importance of matching a target's pose and expression at face construction, and of obtaining image-specific information when constructing facial-composite images, a result that should be useful to developers and researchers of composite software.
Originality/value
This project is the first of its kind to formally explore the potential impact of the pictorial properties of a target face on the identifiability of faces created from memory. The design followed forensic practices as far as practicable, to allow good generalisation of results.
Sharanabasappa and Suvarna Nandyal
Abstract
Purpose
In order to prevent accidents during driving, driver drowsiness detection systems have become a hot topic for researchers. Various types of features can be used to detect drowsiness: detection can draw on behavioral data, physiological measurements and vehicle-based data. Existing ensemble approaches based on deep convolutional neural network (CNN) models analyze behavioral data comprising eye, face or head movements captured in camera images or videos. However, these models suffer from high computational cost because they use approximately 140 million parameters.
Design/methodology/approach
The proposed model selects significant features from the feature extraction process using the ReliefF, Infinite, Correlation and Term Variance feature-selection methods. The selected features are then classified using an ensemble classifier.
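The abstract names the filters but not their wiring; below is a minimal sketch of the select-then-vote idea, using only a variance filter (standing in for Term Variance) and a majority vote over base-classifier outputs. Both simplifications are assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def select_by_variance(X, k):
    """Keep the k columns with highest variance -- a stand-in for the
    Term Variance filter named in the abstract."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return np.sort(idx)

def majority_vote(predictions):
    """Combine base-classifier outputs (0 = alert, 1 = drowsy) by
    simple majority voting, one row per base classifier."""
    votes = np.asarray(predictions)
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Three features; the middle one is nearly constant and gets dropped:
X = np.array([[0.0, 5.0, 1.0],
              [1.0, 5.1, 3.0],
              [2.0, 5.0, 5.0]])
kept = select_by_variance(X, k=2)

# Three base classifiers voting on two samples:
fused = majority_vote([[1, 0], [1, 1], [0, 0]])
```

The real method would combine several selection filters and trained classifiers; the voting step shown here is the generic ensemble mechanism.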
Findings
The output of these models is classified into non-drowsiness or drowsiness categories.
Research limitations/implications
In this research work, higher-end cameras are required to collect videos. Therefore, researchers are encouraged to use the existing datasets, as this is more cost-effective.
Practical implications
This paper improves on the earlier approach. The developed model applies complex deep learning models to a small dataset while also extracting additional features, thereby providing a more satisfactory result.
Originality/value
Drowsiness can be detected at the earliest stage using the ensemble model, which helps reduce the number of accidents.
Abstract
A robotics team at NASA’s Johnson Space Center in Houston, Texas, under the direction of Dr Robert Ambrose, is developing a new breed of space robots called Robonaut. Robonaut, designed to be as human‐like as possible, will be controlled by telepresence and will work in extravehicular activity (EVA) environments, allowing astronauts to remain safely inside the spacecraft.
Wenhao Zhang, Melvyn Lionel Smith, Lyndon Neal Smith and Abdul Rehman Farooq
Abstract
Purpose
This paper aims to introduce an unsupervised modular approach for eye centre localisation in images and videos following a coarse-to-fine, global-to-regional scheme. The design of the algorithm aims at excellent accuracy, robustness and real-time performance for use in real-world applications.
Design/methodology/approach
A modular approach has been designed that makes use of isophote and gradient features to estimate eye centre locations. This approach embraces two main modalities that progressively reduce global facial features to local levels for more precise inspections. A novel selective oriented gradient (SOG) filter has been specifically designed to remove strong gradients from eyebrows, eye corners and self-shadows, which sabotage most eye centre localisation methods. The proposed algorithm, tested on the BioID database, has shown superior accuracy.
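The SOG filter itself is not specified in the abstract, but the underlying isophote/gradient intuition resembles well-known gradient-alignment schemes for eye-centre localisation: score each candidate centre by how well the image gradients point along the displacement from it. A brute-force numpy sketch of that general idea (not the authors' algorithm):

```python
import numpy as np

def eye_centre(gray):
    """Score every candidate pixel c by how well image gradients align
    with the displacement from c: at a dark iris the circular edge makes
    the outward gradients agree with the outward displacements."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > 1e-9
    ys, xs = np.nonzero(mask)
    ux, uy = gx[mask] / mag[mask], gy[mask] / mag[mask]
    h, w = gray.shape
    best, centre = -1.0, (0, 0)
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            n = np.hypot(dx, dy)
            n[n == 0] = 1.0           # candidate sits on an edge pixel
            dots = (dx / n) * ux + (dy / n) * uy
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best:
                best, centre = score, (cx, cy)
    return centre

# Synthetic eye: a dark disc (radius 4) on a bright background.
yy, xx = np.mgrid[0:21, 0:21]
img = np.where((xx - 10) ** 2 + (yy - 9) ** 2 <= 16, 0.0, 1.0)
cx, cy = eye_centre(img)
```

A filter in the spirit of SOG would suppress edge pixels whose gradient orientations match eyebrows, eye corners or shadows before this scoring step.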
Findings
The eye centre localisation algorithm has been compared with 11 other methods on the BioID database and six other methods on the GI4E database. The proposed algorithm has outperformed all the other algorithms in comparison in terms of localisation accuracy while exhibiting excellent real-time performance. This method is also inherently robust against head poses, partial eye occlusions and shadows.
Originality/value
The eye centre localisation method uses two mutually complementary modalities as a novel, fast, accurate and robust approach. In addition, beyond assisting eye centre localisation, the SOG filter is able to resolve general tasks regarding the detection of curved shapes. From an applied point of view, the proposed method has great potential to benefit a wide range of real-world human-computer interaction (HCI) applications.
Abstract
Purpose
This research aims to examine whether the facial appearances and expressions of Airbnb host photos influence guest star ratings.
Design/methodology/approach
This research analyzed the profile photos of over 20,000 Airbnb hosts and the guest star ratings of over 30,000 Airbnb listings in New York City, using machine learning techniques.
Findings
First, hosts who provided profile photos received higher guest ratings than those who did not provide photos. When facial features of profile photos were recognizable, guest ratings were higher than when they were not recognizable (e.g. faces too small, faces looking backward or faces blocked by some objects). Second, a happy facial expression, blond hair and brown hair positively affected guest ratings, whereas heads tilted back negatively affected guest ratings.
Originality/value
This research is the first, to the best of the authors’ knowledge, to analyze the facial appearances and expressions of profile photos using machine learning techniques and examine the influence of Airbnb host photos on guest star ratings.
Kwok Tai Chui, Wadee Alhalabi and Ryan Wen Liu
Abstract
Purpose
Concentration is the key to safer driving. Ideally, drivers should focus mainly on the front view and side mirrors. Typical distractions are eating, drinking, cell phone use, using and searching for things in the car, and looking at something outside the car. In this paper, the distracted-driving detection algorithm targets nine scenarios as fundamental elements of distraction events: nodding; head shaking; moving the head 45° to the upper left and back; 45° to the lower left and back; 45° to the upper right and back; 45° to the lower right and back; moving the head upward and back; head dropping down; and blinking. The purpose of this paper is a preliminary study of these scenarios toward ideal distraction detection, that is, identifying the exact type of distraction.
Design/methodology/approach
The system consists of a distraction detection module that processes the video stream and computes a motion coefficient to reinforce identification of drivers' distraction conditions. The motion coefficient of the video frames is computed, followed by spike detection via statistical filtering.
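The exact motion coefficient is not defined in the abstract; one simple possibility is the mean absolute inter-frame difference, with spikes flagged by a mean-plus-k-sigma rule. Both choices are assumptions for illustration:

```python
import numpy as np

def motion_coefficients(frames):
    """One coefficient per consecutive frame pair: the mean absolute
    pixel difference, a cheap proxy for the amount of motion."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def detect_spikes(coeffs, k=2.0):
    """Statistical filtering: flag coefficients more than k standard
    deviations above the mean as candidate head-motion events."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.nonzero(coeffs > coeffs.mean() + k * coeffs.std())[0]

# 20 static 4x4 frames with one sudden change at frame 10:
frames = np.zeros((20, 4, 4))
frames[10] = 1.0
spikes = detect_spikes(motion_coefficients(frames))
```

This kind of per-frame scalar plus threshold is what makes such a detector cheap enough for online, in-vehicle use.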
Findings
The accuracy of the head motion analyzer is 98.6 per cent. With such a satisfactory result, it is concluded that distraction detection using a computationally light algorithm is an appropriate direction, and further work could address more scenarios as well as background light intensity and video frame resolution.
Originality/value
The system is aimed at detecting distraction in public transport drivers. By providing an instant response and timely warnings, it can reduce road traffic accidents and casualties due to poor physical condition. A low-latency, lightweight head motion detector has been developed for online driver awareness monitoring.
George Stockman, Jayson Payne, Jermil Sadler and Dirk Colbry
Abstract
Purpose
To report on the evaluation of error of a face matching system consisting of a 3D sensor for obtaining the surface of the face, and a two‐stage matching algorithm that matches the sensed surface to a model surface.
Design/methodology/approach
A rigid but otherwise fairly realistic mannikin face was obtained, and several sensing and matching experiments were performed. Pose position, lighting and face color were controlled.
Findings
The combined sensor-matching system typically reported correct face surface matches with a trimmed RMS error of 0.5 mm or less for a generous volume of parameters, including roll, pitch, yaw, position, lighting and face color. Error grew rapidly beyond this "approximately frontal" set of parameters. Mannikin results are compared to results with thousands of cases of real faces. The sensor accuracy is not a limiting component of the system, but supports the application well.
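As a reading aid for the "trimmed RMS error" figure: the metric drops the worst-matching fraction of point correspondences before taking the RMS, so a few outlier points cannot dominate the error. A sketch follows; the 10 per cent trim fraction is an assumed default, not a value from the paper:

```python
import numpy as np

def trimmed_rms(sensed, model, trim=0.1):
    """RMS of point-to-point distances after discarding the worst
    `trim` fraction, robust to a few outlier correspondences."""
    d = np.linalg.norm(np.asarray(sensed) - np.asarray(model), axis=1)
    keep = np.sort(d)[: max(1, int(round(len(d) * (1.0 - trim))))]
    return float(np.sqrt(np.mean(keep ** 2)))

# Ten perfectly matched points, one knocked 10 mm off in x:
model = np.zeros((10, 3))
sensed = model.copy()
sensed[0, 0] = 10.0
```

With the outlier trimmed away the error is zero; without trimming the single bad point inflates the RMS to sqrt(10) mm.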
Practical implications
The sensor supports the application well (except for the current cost). Equal error rates achieved appear to be practical for face verification.
Originality/value
No similar report is known for sensing faces.
Ludger Schmidt, Jens Hegenberg and Liubov Cramar
Abstract
Purpose
To avoid harm to humans, environment, and capital goods, hazardous or explosive gases that are possibly escaping from industrial and infrastructure facilities of the gas and oil processing industry have to be detected and located quickly and reliably. Project RoboGasInspector aims at the development and evaluation of a human-robot system that applies autonomous robots equipped with remote gas detection devices to detect and locate gas leaks. This article aims to focus on the usability of telemanipulation in this context.
Design/methodology/approach
This paper presents four user studies concerning human-robot interfaces for teleoperation in industrial inspection tasks. Their purpose is to resolve contradictory scientific findings regarding aspects of teleoperation and to verify functionality, usability, and technology acceptance of the designed solution in the actual context of use. Therefore, aspects concerning teleoperation that were separately examined before are evaluated in an integrated way. Considered aspects are influence of media technology on telepresence, simulator sickness and head slaved camera control, usability of different input devices for telemanipulation, and identification of intuitive gestures for teleoperation of mobile robots.
Findings
In general, the implemented interaction concepts perform better than the conventional ones used in contemporary, actually deployed robot systems. Where they do not, the reasons are analyzed and approaches for further improvement are discussed. Exemplary results are given for each study.
Originality/value
The solution combines several technical approaches that are so far separately examined. Each approach is transferred to the innovative domain of industrial inspections and its applicability in this context is verified. New findings give design recommendations for remote workplaces of robot operators.
Cezary Zieliński, Włodzimierz Kasprzak, Tomasz Kornuta, Wojciech Szynkiewicz, Piotr Trojanek, Michał Walęcki, Tomasz Winiarski and Teresa Zielińska
Abstract
Purpose
Machining fixtures must fit exactly the work piece to support it appropriately. Even slight change in the design of the work piece renders the costly fixture useless. Substitution of traditional fixtures by a programmable multi‐robot system supporting the work pieces requires a specific control system and a specific programming method enabling its quick reconfiguration. The purpose of this paper is to develop a novel approach to task planning (programming) of the reconfigurable fixture system.
Design/methodology/approach
The multi‐robot control system has been designed following a formal approach based on the definition of the system structure in terms of agents and on transition-function definitions of their behaviour. The result is a modular system enabling software parameterisation, which facilitated the introduction of changes brought about by testing different variants of the system's mechanical structure. A novel approach to task planning (programming) of the reconfigurable fixture system has been developed, based on a constraint satisfaction problem approach. The planner takes into account physical, geometrical and time‐related constraints.
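A toy illustration of the constraint-satisfaction idea behind such a planner: assign supporting robots to support points so that every assignment satisfies a reachability constraint. The robots-on-a-rail geometry and the one-robot-per-point rule are assumptions for the sketch, not the paper's actual constraint model:

```python
from itertools import permutations

def plan_supports(robots, points, reachable):
    """Exhaustive constraint-satisfaction search: try every assignment
    of distinct robots to support points and return the first one in
    which each robot can reach its point."""
    for perm in permutations(robots, len(points)):
        if all(reachable(r, p) for r, p in zip(perm, points)):
            return dict(zip(points, perm))
    return None  # no feasible assignment

# Robots parked at x = 0, 2, 4 on a rail; each can reach 1.0 unit away.
reach = lambda r, p: abs(r - p) <= 1.0
plan = plan_supports([0, 2, 4], [1, 3], reach)
```

A real planner would prune with constraint propagation rather than enumerate, and would also encode the geometric and time-related constraints the abstract mentions.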
Findings
Reconfigurable fixture programming is performed by supplying a CAD definition of the work piece. From these data, the positions of the robots and the locations of the supporting heads are automatically generated. This proved to be an effective programming method. On the basis of the plan thus obtained, the control system effectively controls the behaviours of the supporting robots in both drilling and milling operations.
Originality/value
The shop‐floor experiments with the system showed that the work piece is held stiffly enough for both milling and drilling operations performed by the CNC machine. If the number of diverse work piece shapes is large, the reconfigurable fixture is a cost‐effective alternative to the necessary multitude of traditional fixtures. Moreover, the proposed design approach enables the control system to handle a variable number of controlled robots and accommodates possible changes to the hardware of the work piece supporting robots.