Search results

1–10 of over 15,000
Article
Publication date: 28 June 2022

Akhil Kumar

Abstract

Purpose

This work aims to present a deep learning model for face mask detection in surveillance environments such as automatic teller machines (ATMs), banks, etc. to identify persons wearing face masks. In surveillance environments, complete visibility of the face area is a guideline, yet criminals and law offenders commit crimes while hiding their faces behind face masks. The face mask detector model proposed in this work can be used as a tool, integrated with surveillance cameras in autonomous surveillance environments, to identify and catch law offenders and criminals.

Design/methodology/approach

The proposed face mask detector is developed by integrating the residual network (ResNet)34 feature extractor on top of three You Only Look Once (YOLO) detection layers along with the usage of the spatial pyramid pooling (SPP) layer to extract a rich and dense feature map. Furthermore, at the training time, data augmentation operations such as Mosaic and MixUp have been applied to the feature extraction network so that it can get trained with images of varying complexities. The proposed detector is trained and tested over a custom face mask detection dataset consisting of 52,635 images. For validation, comparisons have been provided with the performance of YOLO v1, v2, tiny YOLO v1, v2, v3 and v4 and other benchmark work present in the literature by evaluating performance metrics such as precision, recall, F1 score, mean average precision (mAP) for the overall dataset and average precision (AP) for each class of the dataset.
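The SPP layer described above pools a feature map at several grid resolutions and concatenates the results, so inputs of different spatial sizes yield a fixed-length descriptor. Below is a minimal NumPy sketch of that pooling step; the pyramid levels (1, 2, 4) and the max-pooling choice are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map into a fixed-length vector.

    For each pyramid level n, the spatial grid is split into n x n cells
    and each cell is max-pooled, so the output length is
    C * sum(n*n for n in levels) regardless of H and W.
    """
    c, h, w = fmap.shape
    pooled = []
    for n in levels:
        # cell boundaries cover the map even when H or W is not divisible by n
        ys = np.linspace(0, h, n + 1).astype(int)
        xs = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# A 256-channel feature map of any spatial size maps to the same length.
v1 = spatial_pyramid_pool(np.random.rand(256, 13, 13))
v2 = spatial_pyramid_pool(np.random.rand(256, 19, 27))
print(v1.shape, v2.shape)  # both (256 * (1 + 4 + 16),) = (5376,)
```

This size-invariance is what lets the detector feed differently scaled inputs into fixed-size detection layers.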

Findings

The proposed face mask detector achieved 4.75–9.75 per cent higher detection accuracy in terms of mAP, 5–31 per cent higher AP for detection of faces with masks and, specifically, 2–30 per cent higher AP for detection of face masks on the face region compared to the tested baseline variants of YOLO. Furthermore, the use of the ResNet34 feature extractor and the SPP layer in the proposed detection model reduced both the training time and the detection time. The proposed model performs detection over an image in 0.45 s, which is 0.15–0.2 s less than the other tested YOLO variants, allowing it to run at a higher speed.

Research limitations/implications

The proposed face mask detector model can be utilized as a tool to detect persons with face masks who are a potential threat to the automatic surveillance environments such as ATMs, banks, airport security checks, etc. The other research implication of the proposed work is that it can be trained and tested for other object detection problems such as cancer detection in images, fish species detection, vehicle detection, etc.

Practical implications

The proposed face mask detector can be integrated with automatic surveillance systems and used as a tool to detect persons with face masks who pose potential threats to ATMs, banks, etc., and, in the present times of COVID-19, to check whether people in public areas are following the COVID-appropriate behavior of wearing a face mask.

Originality/value

The novelty of this work lies in the usage of the ResNet34 feature extractor with YOLO detection layers, which makes the proposed model a compact and powerful convolutional neural-network-based face mask detector model. Furthermore, the SPP layer has been applied to the ResNet34 feature extractor to make it able to extract a rich and dense feature map. The other novelty of the present work is the implementation of Mosaic and MixUp data augmentation in the training network that provided the feature extractor with 3× images of varying complexities and orientations and further aided in achieving higher detection accuracy. The proposed model is novel in terms of extracting rich features, performing augmentation at the training time and achieving high detection accuracy while maintaining the detection speed.

Details

Data Technologies and Applications, vol. 57 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 19 June 2017

Yang Xin, Yi Liu, Zhi Liu, Xuemei Zhu, Lingshuang Kong, Dongmei Wei, Wei Jiang and Jun Chang

Abstract

Purpose

Biometric systems are widely used for face recognition. They have rapidly developed in recent years. Compared with other approaches, such as fingerprint recognition, handwriting verification and retinal and iris scanning, face recognition is more straightforward, user friendly and extensively used. The aforementioned approaches, including face recognition, are vulnerable to malicious attacks by impostors; in such cases, face liveness detection comes in handy to ensure both accuracy and robustness. Liveness is an important feature that reflects physiological signs and differentiates artificial from real biometric traits. This paper aims to provide a simple path for the future development of more robust and accurate liveness detection approaches.

Design/methodology/approach

This paper introduces the face biometric system, reviews liveness detection in face recognition systems and compares the existing measures discussed in the literature.

Originality/value

This paper presents an overview, comparison and discussion of proposed face liveness detection methods to provide a reference for the future development of more robust and accurate liveness detection approaches.

Details

Sensor Review, vol. 37 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 1 August 2019

Ziaul Haque Choudhury and M. Munir Ahamed Rabbani

Abstract

Purpose

Nowadays, the use of forged e-passports is increasing, which threatens national security. It is important to improve national security against international crime and terrorism. The verification process is weakened by the lack of identification steps such as physical, biometric and electronic checks. The e-passport can prevent the passport cloning or forging that results from illegal immigration. The paper aims to discuss these issues.

Design/methodology/approach

This paper focuses on face recognition to improve the biometric authentication for an e-passport, and it also introduces facial permanent mark detection from the makeup or cosmetic-applied faces, twins and similar faces. An algorithm is proposed to detect the cosmetic-applied facial permanent marks such as mole, freckle, birthmark and pockmark. Active Shape Model into Active Appearance Model using Principal Component Analysis is applied to detect the facial landmarks. Facial permanent marks are detected by applying the Canny edge detector and Gradient Field Histogram of Oriented Gradient.
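The mark-detection pipeline above rests on gradient information: the Canny detector uses gradient magnitude and orientation, and GF-HOG histograms the orientations. As a hedged stand-in (not the authors' implementation), the sketch below computes Sobel gradients and flags high-magnitude pixels as candidate mark locations; the threshold value is an assumption for illustration.

```python
import numpy as np

def sobel_gradients(img):
    """Return gradient magnitude and orientation of a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)
    return mag, ori

def candidate_marks(img, thresh=200.0):
    """Flag pixels whose gradient magnitude exceeds a threshold -- a
    simplified stand-in for the Canny edge map used to localize marks."""
    mag, _ = sobel_gradients(img)
    return mag > thresh

flat = np.full((10, 10), 200.0)      # uniform skin patch: no marks
spot = flat.copy()
spot[5, 5] = 0.0                     # a dark mole-like spot
print(candidate_marks(flat).sum(), candidate_marks(spot).sum() > 0)
```

A full Canny pass would add Gaussian smoothing, non-maximum suppression and hysteresis on top of these gradients.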

Findings

This paper demonstrated an algorithm for facial mark detection from cosmetic- or makeup-applied faces for a secure biometric passport in the field of personal identification for national security, and showed how identical twins and similar faces can be detected and identified. The proposed facial mark detection can be combined with traditional methods. However, the technique faced some challenges due to the use of cosmetics. Combining the facial mark recognition algorithm with classical matching methods attained lower errors in the reported experiments.

Originality/value

The proposed method will enhance national security and improve biometric authentication for the e-passport. The proposed algorithm identifies facial marks from cosmetic-applied faces accurately, with fewer false positives, and shows the best results among the compared techniques.

Details

International Journal of Intelligent Unmanned Systems, vol. 8 no. 1
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 16 August 2021

V. Vinolin and M. Sucharitha

Abstract

Purpose

With the advancements in photo editing software, it is possible to generate fake images, degrading the trust in digital images. Forged images, which appear like authentic images, can be created without leaving any visual clues about the alteration. The image forensics field has introduced several forgery detection techniques, which effectively distinguish fake images from original ones, to restore trust in digital images. Among the several types of forged images, spliced images involving human faces are the most harmful. Hence, there is a need for a forgery detection approach that detects spliced images.

Design/methodology/approach

This paper proposes a Taylor-rider optimization algorithm-based deep convolutional neural network (Taylor-ROA-based DeepCNN) for detecting spliced images. Initially, the human faces in the spliced images are detected using the Viola–Jones algorithm, from which the 3-dimensional (3D) shape of the face is established using landmark-based 3D morphable model (L3DMM), which estimates the light coefficients. Then, the distance measures, such as Bhattacharya, Seuclidean, Euclidean, Hamming, Chebyshev and correlation coefficients are determined from the light coefficients of the faces. These form the feature vector to the proposed Taylor-ROA-based DeepCNN, which determines the spliced images.
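To illustrate the feature construction described above, the sketch below assembles several of the named distance measures between two illumination-coefficient vectors. The normalization used for the Bhattacharyya term and the sign-based binarization for the Hamming term are assumptions for illustration, and the standardized Euclidean distance (which needs per-dimension variances from a population) is omitted.

```python
import numpy as np

def light_coeff_distances(a, b):
    """Build a distance-based feature vector between two illumination
    coefficient vectors, e.g. from two faces in a suspected splice."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    euclidean = np.linalg.norm(a - b)
    chebyshev = np.max(np.abs(a - b))
    # Bhattacharyya distance on the vectors normalized to distributions
    # (assumes nonnegative coefficients; a real pipeline would shift/scale)
    p, q = a / a.sum(), b / b.sum()
    bhattacharyya = -np.log(np.sum(np.sqrt(p * q)))
    # Hamming distance on sign patterns -- one simple binarization choice
    hamming = np.mean(np.sign(a) != np.sign(b))
    correlation = np.corrcoef(a, b)[0, 1]
    return np.array([euclidean, chebyshev, bhattacharyya, hamming, correlation])

# Identical coefficient vectors give zero distances and correlation 1.
d = light_coeff_distances([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```

Feature vectors like this one would then be stacked per face pair and fed to the classifier.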

Findings

Experimental analysis using the DSO-1, DSI-1, real and hybrid datasets reveals that the proposed approach acquired maximal accuracy, true positive rate (TPR) and true negative rate (TNR) of 99%, 98.88% and 96.03%, respectively, on the DSO-1 dataset. In terms of accuracy, the proposed method achieved improvements of 24.49%, 8.92%, 6.72%, 4.17%, 0.25%, 0.13%, 0.06% and 0.06% over the existing methods Kee and Farid's, shape from shading (SFS), random guess, Bo Peng et al., neural network, FOA-SVNN, CNN-based MBK and Manoj Kumar et al., respectively.

Originality/value

The Taylor-ROA is developed by integrating the Taylor series in rider optimization algorithm (ROA) for optimally tuning the DeepCNN.

Details

Data Technologies and Applications, vol. 56 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 1 January 2006

Qin Li, King Hong Cheung, Jane You, Raymond Tong and Arthur Mak

Abstract

Purpose

Aims to develop an efficient and robust system for real‐time personal identification by automatic face recognition.

Design/methodology/approach

A wavelet‐based image hierarchy and a guided coarse‐to‐fine search scheme are introduced to improve the computation efficiency in the face detection task. In addition, a Gabor‐based low feature dimensional pattern is proposed to deal with the face recognition problem.
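A wavelet-based image hierarchy of the kind described can be approximated by repeatedly keeping the Haar approximation (the 2×2 block average) of the image, then searching for faces coarse-to-fine starting at the smallest level. The following is a minimal sketch under that assumption, not the paper's exact decomposition.

```python
import numpy as np

def haar_pyramid(img, levels=3):
    """Build a coarse-to-fine hierarchy: each level is the Haar wavelet
    approximation (2x2 block average) of the previous one."""
    img = np.asarray(img, float)
    pyramid = [img]
    for _ in range(levels):
        h, w = pyramid[-1].shape
        # drop an odd trailing row/column so 2x2 blocks tile exactly
        cur = pyramid[-1][: h - h % 2, : w - w % 2]
        coarse = 0.25 * (cur[0::2, 0::2] + cur[0::2, 1::2]
                         + cur[1::2, 0::2] + cur[1::2, 1::2])
        pyramid.append(coarse)
    return pyramid  # pyramid[-1] is coarsest; the guided search starts there

p = haar_pyramid(np.ones((64, 48)))
print([lvl.shape for lvl in p])  # [(64, 48), (32, 24), (16, 12), (8, 6)]
```

The efficiency gain comes from rejecting most candidate windows at the coarse levels, so the full-resolution image is scanned only around promising regions.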

Findings

The proposed wavelet-based image hierarchy and guided coarse-to-fine search scheme are effective in improving the computational efficiency of the face detection task. The introduced low-dimensional feature pattern copes well with the transformed appearance-based face recognition problem. In addition, representing face images by aggregated Gabor filter responses provides a better solution for face feature extraction.

Research limitations/implications

Provides guidance in the design of automatic face recognition system for real‐time personal identification.

Practical implications

Biometric recognition has been emerging as a new and effective identification technology that has attained a certain level of maturity. Among the many body characteristics that have been used, the face is one of the most common and has drawn considerable attention. An automated system that confirms an individual's identity from facial features is very attractive in many specialized fields.

Originality/value

Introduces a wavelet‐based image hierarchy and a guided coarse‐to‐fine search scheme to improve the computation efficiency in the face detection task. Introduces a Gabor‐based low feature dimensional pattern to deal with the face recognition problem.

Details

Sensor Review, vol. 26 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 30 August 2013

Rainhard Dieter Findling and Rene Mayrhofer

Abstract

Purpose

Personal mobile devices currently have access to a significant portion of their user's private sensitive data and are increasingly used for processing mobile payments. Consequently, securing access to these mobile devices is a requirement for securing access to the sensitive data and potentially costly services. The authors propose and evaluate a first version of a pan shot face unlock method: a mobile device unlock mechanism using all information available from a 180° pan shot of the device around the user's head – utilizing biometric face information as well as sensor data of built‐in sensors of the device. The paper aims to discuss these issues.

Design/methodology/approach

This approach uses grayscale 2D images, on which the authors perform frontal and profile face detection. For face recognition, the authors evaluate different support vector machines and neural networks. To reproducibly evaluate this pan shot face unlock toolchain, the authors assembled the 2013 Hagenberg stereo vision pan shot face database, which the authors describe in detail in this article.

Findings

Current results indicate that the approach to face recognition is sufficient for further usage in this research. However, face detection is still error prone for the mobile use case, which consequently decreases the face recognition performance as well.

Originality/value

The contributions of this paper include: introducing pan shot face unlock as an approach to increase security and usability during mobile device authentication; introducing the 2013 Hagenberg stereo vision pan shot face database; evaluating this current pan shot face unlock toolchain using the newly created face database.

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 6 May 2014

Yong-Hwan Lee, Hyochang Ahn, Han-Jin Cho and June-Hwan Lee

Abstract

Purpose

The proposed approach has the major advantage of enabling face recognition regardless of time and place. It also performs independently of the smartphone, because processing is carried out by a third-party computer rather than by the mobile device itself. In addition, it is desirable to minimize expensive operations on a mobile device with constrained computational power (i.e. battery consumption); thus, the authors exclude processing of transmissions that fail from the input device. The paper aims to discuss these issues.

Design/methodology/approach

In this paper, the authors propose a new face detection and verification algorithm based on skin color detection, which extracts the face region from color images captured by the mobile phone. Facial features are then extracted as eigenfaces, and a support vector machine applied to the detected face region verifies whether the user's identity is correct.
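As a rough illustration of the skin color detection step (not the authors' exact model), a common approach thresholds the chrominance channels after converting RGB to YCbCr; the Cb/Cr bounds below are frequently cited skin ranges and are an assumption here.

```python
import numpy as np

def skin_mask(rgb):
    """Threshold an H x W x 3 RGB image in YCbCr space.

    The bounds used (Cb in [77, 127], Cr in [133, 173]) are commonly
    cited skin ranges; the paper's exact model may differ.
    """
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> Cb/Cr conversion (luma Y is not needed here)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# One skin-toned pixel and one saturated blue pixel.
img = np.array([[[200, 150, 120], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
print(mask)  # [[ True False]]
```

The connected regions of such a mask would then be the candidate face regions passed to eigenface extraction.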

Findings

The experimental results on two datasets show that the proposed method achieves slightly higher efficiency in the detection and verification of user identity than other methods under varying lighting conditions with complex backgrounds, while being faster and more accurate than previous methods.

Originality/value

The proposed algorithm enables fast and accurate search using a triangle-square transformation to detect human faces in digital still color images obtained by the mobile device camera under unconstrained environments, using an advanced skin color model and characteristic points in the detected face.

Details

Journal of Systems and Information Technology, vol. 16 no. 2
Type: Research Article
ISSN: 1328-7265

Keywords

Article
Publication date: 29 July 2020

Asha Sukumaran and Thomas Brindha

Abstract

Purpose

Humans are gifted with the ability to recognize others by their uniqueness, along with other demographic characteristics such as ethnicity (or race), gender and age. Over the decades, a vast number of researchers in the psychological, biological and cognitive sciences have explored how the human brain characterizes, perceives and memorizes faces, and certain computational advances have yielded several insights into this issue.

Design/methodology/approach

This paper proposes a new race detection model using face shape features. The proposed model includes two key phases, namely (a) feature extraction and (b) detection. Feature extraction is the initial stage, where face color- and shape-based features are mined. Specifically, maximally stable extremal regions (MSER) and speeded-up robust features (SURF) are extracted as shape features, and dense color features are extracted as the color feature. Since the extracted features are of high dimension, they are reduced using principal component analysis (PCA), the standard approach to the "curse of dimensionality". The dimension-reduced features are then fed to a deep belief network (DBN), where the race is detected. Further, to make the framework more effective in prediction, the weights of the DBN are fine-tuned with a new hybrid algorithm referred to as the lion mutated and updated dragon algorithm (LMUDA), a conceptual hybridization of the lion algorithm (LA) and the dragonfly algorithm (DA).
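The PCA stage described above can be sketched in a few lines: center the features and project onto the top-k right singular vectors of the data matrix. This is a generic PCA sketch, not the paper's specific configuration; the matrix sizes are made up for illustration.

```python
import numpy as np

def pca_reduce(X, k):
    """Project an N x D feature matrix onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centered data are the principal axes,
    # ordered by decreasing singular value (hence decreasing variance)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # e.g. 100 faces, 50 raw MSER/SURF/color dims
Z = pca_reduce(X, 10)
print(Z.shape)  # (100, 10)
```

The reduced matrix `Z` is what would be handed to the DBN classifier in place of the raw high-dimensional features.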

Findings

The performance of the proposed work is compared with other state-of-the-art models in terms of accuracy and error. LMUDA attains high accuracy at the 100th iteration with 90% training data, which is 11.1, 8.8, 5.5 and 3.3% better than its performance at learning percentages (LP) of 50%, 60%, 70% and 80%, respectively. More particularly, the proposed DBN + LMUDA performs 22.2, 12.5 and 33.3% better than the traditional classifiers DCNN, DBN and LDA, respectively.

Originality/value

This paper achieves the objective of detecting human races from faces. In particular, MSER and SURF features are extracted as shape features, and dense color features as the color feature. As a novelty, to make race detection more accurate, the DBN weights are fine-tuned with the new hybrid LMUDA algorithm, a conceptual hybridization of LA and DA.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 16 January 2007

Pei Jia, Huosheng H. Hu, Tao Lu and Kui Yuan

Abstract

Purpose

This paper presents a novel hands‐free control system for intelligent wheelchairs (IWs) based on visual recognition of head gestures.

Design/methodology/approach

A robust head gesture‐based interface (HGI) is designed for head gesture recognition of the RoboChair user. The recognised gestures are used to generate motion control commands to the low‐level DSP motion controller so that it can control the motion of the RoboChair according to the user's intention. The Adaboost face detection algorithm and the Camshift object tracking algorithm are combined in the system to achieve accurate face detection, tracking and gesture recognition in real time. The system is intended as a human‐friendly interface that lets elderly and disabled people operate the intelligent wheelchair using head gestures rather than their hands.
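The control loop described above ultimately maps a recognised head pose to a small set of motion commands. The sketch below shows one plausible mapping from the tracked face-centre offset to commands, with a dead zone to suppress involuntary movements; the command names and threshold are illustrative assumptions, not the RoboChair's actual interface.

```python
def gesture_to_command(dx, dy, dead_zone=0.15):
    """Map the tracked face-centre offset (normalized to [-1, 1] per axis,
    relative to the frame centre) to a wheelchair motion command.

    A dead zone around the centre keeps small involuntary head movements
    from moving the chair. Command names here are hypothetical.
    """
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "STOP"
    if abs(dy) >= abs(dx):
        # vertical offset dominates: nod forward/back
        return "FORWARD" if dy < 0 else "BACKWARD"
    # horizontal offset dominates: turn
    return "RIGHT" if dx > 0 else "LEFT"

print(gesture_to_command(0.05, -0.02))  # STOP (inside dead zone)
print(gesture_to_command(0.0, -0.6))    # FORWARD (head tilted up)
print(gesture_to_command(0.7, 0.1))     # RIGHT
```

In the real system the offsets would come from the Camshift track of the Adaboost-detected face, updated every frame before being sent to the DSP motion controller.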

Findings

This is an extremely useful system for users whose limb movements are restricted by conditions such as Parkinson's disease and quadriplegia.

Practical implications

In this paper, a novel integrated approach to real‐time face detection, tracking and gesture recognition is proposed, namely HGI.

Originality/value

It is a useful human‐robot interface for IWs.

Details

Industrial Robot: An International Journal, vol. 34 no. 1
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 5 August 2014

Hairong Jiang, Juan P. Wachs and Bradley S. Duerstock

Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system was developed specially for individuals with upper-level spinal cord injuries including object tracking and face recognition to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera interpreted hand gestures and located the operator's face for object positioning, then sent these as commands to control the WMRM. The other sensor automatically recognized the daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features algorithm was implemented, and recognition results were sent as commands for "coarse positioning" of the robotic arm near the selected object. Automatic face detection provided a shortcut for positioning objects close to the subject's face.
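The two-camera flow above amounts to a short command pipeline: coarse positioning from object recognition, a grasp, then delivery near the detected face. The sketch below encodes that sequence; the command names and pose tuples are hypothetical, not the system's real API.

```python
def plan_retrieval(object_xyz, face_xyz):
    """Sketch of the two-stage control flow: object recognition supplies a
    coarse arm target, and the detected face position is the delivery
    point. Command names are illustrative, not the paper's interface."""
    return [
        ("COARSE_MOVE", object_xyz),   # from SURF object recognition
        ("GRASP", object_xyz),         # fine positioning and grasp
        ("DELIVER", face_xyz),         # shortcut from automatic face detection
    ]

# Hypothetical poses in metres: a cup on a table, the user's face.
plan = plan_retrieval((0.6, 0.1, 0.2), (0.3, 0.4, 1.1))
print([step[0] for step in plan])  # ['COARSE_MOVE', 'GRASP', 'DELIVER']
```

Replacing the manual gesture for each stage with these automatic waypoints is what shortened the task completion times reported below.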

Findings

The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords
