Search results
Hima Bindu and Manjunathachari K.
Abstract
Purpose
This paper aims to develop a hybrid feature descriptor and a probabilistic neuro-fuzzy system for attaining high accuracy in face recognition. Facial recognition (FR) systems play a vital part in several applications such as surveillance, access control and image understanding. Accordingly, various face recognition methods have been developed in the literature, but their applicability is restricted by unsatisfactory accuracy, so improving face recognition remains significantly important.
Design/methodology/approach
This paper proposes a face recognition system based on feature extraction and classification. The proposed model extracts both local and global features of the image. The local features are extracted using the kernel-based scale invariant feature transform (K-SIFT) model, and the global features are extracted using the proposed m-Co-HOG model, a modification of co-occurrence histograms of oriented gradients (Co-HOG) that retains the properties of the Co-HOG algorithm. The feature vector database combines the local and global feature vectors derived using the K-SIFT model and the proposed m-Co-HOG algorithm. A probabilistic neuro-fuzzy classifier system is then proposed for finding the identity of the person from the extracted feature vector database.
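As a sketch of the feature-combination step, the fragment below concatenates a placeholder local descriptor with a simple global gradient-orientation histogram. It is illustrative only: `global_hog_like` is a plain orientation histogram standing in for the proposed m-Co-HOG, and the 128-dimensional vector stands in for a K-SIFT descriptor.

```python
import numpy as np

def global_hog_like(image, bins=9):
    """Crude global gradient-orientation histogram (a stand-in for m-Co-HOG)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientations folded into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)            # normalised global descriptor

def combine(local_desc, global_desc):
    """Concatenate local and global features into one vector, as the paper does."""
    return np.concatenate([np.ravel(local_desc), np.ravel(global_desc)])

face = np.random.default_rng(0).random((64, 64))
local = np.random.default_rng(1).random(128)     # placeholder for a K-SIFT descriptor
vec = combine(local, global_hog_like(face))
print(vec.shape)                                 # combined 128 + 9 = 137 dimensions
```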
Findings
The face images required for the simulation of the proposed work are taken from the CVL database; the simulation considers a total of 114 persons from that database. The results show that the proposed model outperforms the existing models, with an improved accuracy of 0.98 and low false acceptance rate (FAR) and false rejection rate (FRR) values of 0.01.
Originality/value
This paper proposes a face recognition system with proposed m-Co-HOG vector and the hybrid neuro-fuzzy classifier. Feature extraction was based on the proposed m-Co-HOG vector for extracting the global features and the existing K-SIFT model for extracting the local features from the face images. The proposed m-Co-HOG vector utilizes the existing Co-HOG model for feature extraction, along with a new color gradient decomposition method. The major advantage of the proposed m-Co-HOG vector is that it utilizes the color features of the image along with other features during the histogram operation.
Dinesh Kumar D.S. and P.V. Rao
Abstract
Purpose
The purpose of this paper is to incorporate a multimodal biometric system, which plays a major role in improving the accuracy and reducing FAR and FRR performance metrics. Biometrics plays a major role in several areas including military applications because of robustness of the system. Speech and face data are considered as key elements that are commonly used for multimodal biometric applications, as they are simultaneously acquired from camera and microphone.
Design/methodology/approach
In this proposed work, the Viola–Jones algorithm is used for face detection, and Local Binary Pattern texture operators perform a thresholding operation to extract the features of the face. Mel-frequency cepstral coefficients capture the characteristics of the voice data, and a median filter is used to remove noise. A KNN classifier is used for the fusion of face and voice. In this method, 120 face and voice samples from the database are trained and tested with simulation in MATLAB; the proposed method produces better recognition and accuracy, with better results in noisy environments.
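The thresholding operation behind the Local Binary Pattern can be illustrated on a single 3×3 patch. This minimal sketch uses a fixed clockwise bit order starting at the top-left neighbour, which is one common convention.

```python
def lbp_code(patch):
    """8-neighbour Local Binary Pattern code for the centre pixel of a 3x3 patch.
    Each neighbour is thresholded against the centre value; the resulting bits
    are packed clockwise from the top-left into one byte."""
    c = patch[1][1]
    # clockwise neighbour order starting at the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:                 # thresholding against the centre pixel
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))             # 241: bits set where neighbour >= centre (6)
```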
Findings
Both the face and the voice recognition algorithms perform well. The approach achieves accuracy of up to 98 per cent, with a reduced FAR of 0.5 per cent and FRR of 0.75 per cent.
Originality/value
The algorithms perform better for both face and voice recognition. The outcome of this work provides better accuracy up to 98 per cent with reduced FAR of 0.5 per cent and FRR of 0.75 per cent.
Rainhard Dieter Findling and Rene Mayrhofer
Abstract
Purpose
Personal mobile devices currently have access to a significant portion of their user's private sensitive data and are increasingly used for processing mobile payments. Consequently, securing access to these mobile devices is a requirement for securing access to the sensitive data and potentially costly services. The authors propose and evaluate a first version of a pan shot face unlock method: a mobile device unlock mechanism using all information available from a 180° pan shot of the device around the user's head, utilizing biometric face information as well as data from the device's built-in sensors. The paper aims to discuss these issues.
Design/methodology/approach
This approach uses grayscale 2D images, on which the authors perform frontal and profile face detection. For face recognition, the authors evaluate different support vector machines and neural networks. To reproducibly evaluate this pan shot face unlock toolchain, the authors assembled the 2013 Hagenberg stereo vision pan shot face database, which the authors describe in detail in this article.
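A minimal sketch of how sensor data could steer such a toolchain: the pan angle reported by the device selects which detector to run on the current frame. The function name, labels and thresholds below are illustrative assumptions, not the paper's actual implementation.

```python
def detector_for_angle(pan_deg):
    """Pick which face detector to run from the device's pan angle around the
    user's head. Angle 0 = frontal view, +/-90 = full profile.
    The 30/60 degree thresholds are illustrative, not from the paper."""
    a = abs(pan_deg)
    if a <= 30:
        return "frontal"
    elif a <= 60:
        return "half-profile"
    return "profile"

# one detector choice per sampled orientation along the 180 degree pan
print([detector_for_angle(a) for a in (0, -45, 85)])
```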
Findings
Current results indicate that the approach to face recognition is sufficient for further usage in this research. However, face detection is still error prone for the mobile use case, which consequently decreases the face recognition performance as well.
Originality/value
The contributions of this paper include: introducing pan shot face unlock as an approach to increase security and usability during mobile device authentication; introducing the 2013 Hagenberg stereo vision pan shot face database; evaluating this current pan shot face unlock toolchain using the newly created face database.
Hasna El Alaoui El Abdallaoui, Abdelaziz El Fazziki, Fatima Zohra Ennaji and Mohamed Sadgal
Abstract
Purpose
The pervasiveness of mobile devices has led the crowd to use their new features constantly to perform different tasks. The purpose of this paper is to exploit this massive consumption of new information technologies, supported by the concept of crowdsourcing, in a governmental context, treating citizens as a source of ideas and support. The aim is to find out how crowdsourcing combined with new technologies can become a great force for enhancing the performance of the suspect investigation process.
Design/methodology/approach
This paper provides a structured view of a suspect investigation framework based on image processing techniques, including automatic face analysis. The crowdsourcing framework relies mainly on personal description as an identification technique to facilitate the suspect investigation, and uses MongoDB as a document-oriented database to store the information.
Findings
The case study demonstrates that the proposed framework provides satisfying results in each step of the identification process. The experimental results show how combining the crowdsourcing concept with the pervasiveness of mobile devices strengthens the identification process when automatic face analysis techniques are used.
Originality/value
A review of the literature shows that previous work has focused mainly on presenting forensic techniques that can be used in the steps of the investigation process. This paper, however, implements a complete framework whose strength lies in the crowdsourcing concept as a new paradigm used by institutions to solve many organizational problems.
Abstract
Purpose
This research project focuses on developing techniques and technologies for automatically identifying human faces from images in situations where the face sample collections in the database, as well as the input query images, are taken "as is", i.e. no standard data collection environment is available. The developed method can also be used in other biometric applications.
Design/methodology/approach
The specific method presented in this paper is called scale independent identification (SII). SII allows direct "comparison" between two images in terms of whether the two objects (e.g. faces) in the two images are the same object (i.e. the same individual). SII is developed using matrix computation theory and, in particular, singular value decomposition.
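The underlying idea, that SVD-derived signatures can survive a change of scale, can be sketched as follows. This is not the paper's SII algorithm, only an illustration: the leading singular values of an image, normalised to unit length, are compared before and after a naive 2× upscale by pixel replication.

```python
import numpy as np

def sv_signature(img, k=5):
    """Leading-k singular values, normalised to unit length, as a crude
    scale-insensitive signature (illustrative; not the paper's exact SII)."""
    s = np.linalg.svd(np.asarray(img, dtype=float), compute_uv=False)
    s = s[:k]
    return s / np.linalg.norm(s)

def similarity(img_a, img_b, k=5):
    """Cosine similarity between the two SVD signatures."""
    return float(np.dot(sv_signature(img_a, k), sv_signature(img_b, k)))

rng = np.random.default_rng(0)
face = rng.random((32, 32))
# naive 2x upscale by pixel replication: each entry becomes a 2x2 block,
# which multiplies every nonzero singular value by the same factor
scaled = np.repeat(np.repeat(face, 2, axis=0), 2, axis=1)
print(round(similarity(face, scaled), 3))
```

Because replication scales all singular values uniformly, the normalised signature is unchanged and the similarity stays at 1.0, which is the scale-independence property being illustrated.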
Findings
It is found that almost all the existing methods in the literature or technologies in the market require that a normalization in scale be done before any identification processing. However, it is also found that normalization in scale not only adds additional processing complexity, but also may reduce the identification accuracy. In addition, it is difficult to anticipate an “optimal” scale in advance. The developed SII complements the existing methods in all these aspects.
Research limitations/implications
The only limitation, which is shared by many other biometric identification methods, is that each object (e.g. each individual in human face identification) must have a sufficient number of training samples collected before the method works well.
Practical implications
SII is particularly suitable in law enforcement and/or intelligence applications in which it is difficult or impossible to collect data in a standard, “clean” environment.
Originality/value
The SII method is new, and the paper should be interesting to researchers or engineers in this area, and should also be interesting to companies developing any biometrics‐based identification technologies as well as government agencies.
Abstract
Purpose
Content‐based image retrieval (CBIR) is an important research area for automatically retrieving images of user interest from a large database. Due to many potential applications, facial image retrieval has received much attention in recent years. Similar to face recognition, finding appropriate image representation is a vital step for a successful facial image retrieval system. Recently, many efficient image feature descriptors have been proposed and some of them have been applied to face recognition. It is valuable to have comparative studies of different feature descriptors in facial image retrieval. And more importantly, how to fuse multiple features is a significant task which can have a substantial impact on the overall performance of the CBIR system. The purpose of this paper is to propose an efficient face image retrieval strategy.
Design/methodology/approach
In this paper, three feature description methods are investigated for facial image retrieval: local binary pattern, the curvelet transform and the pyramid histogram of oriented gradients. The large dimensionality of the extracted features is addressed by a manifold learning method called spectral regression. A decision-level fusion scheme, fuzzy aggregation, is then applied by combining the distance metrics from the respective dimension-reduced feature spaces.
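One way to realise such a decision-level fusion is an ordered weighted averaging (OWA) operator, a standard fuzzy aggregation connective. The sketch below is an illustrative choice, not necessarily the operator used in the paper; the distances and weights are made up.

```python
def owa(scores, weights):
    """Ordered weighted averaging: sort the scores in descending order, then
    weight them by rank rather than by source. A common fuzzy aggregation
    connective (illustrative; the paper's exact operator may differ)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

# hypothetical distances of one gallery face to the query in the three
# dimension-reduced feature spaces (LBP, curvelet, PHOG)
d_lbp, d_curvelet, d_phog = 0.20, 0.35, 0.10
fused = owa([d_lbp, d_curvelet, d_phog], weights=[0.5, 0.3, 0.2])
print(round(fused, 3))
```

Ranking before weighting is what makes the operator "fuzzy": the largest distance always receives the first weight, regardless of which feature space produced it.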
Findings
Empirical evaluations on several face databases illustrate that dimension reduced features are more efficient for facial retrieval and the fuzzy aggregation fusion scheme can offer much enhanced performance. A 98 per cent rank 1 retrieval accuracy was obtained for the AR faces and 91 per cent for the FERET faces, showing that the method is robust against different variations like pose and occlusion.
Originality/value
The proposed method for facial image retrieval has a promising potential of designing a real‐world system for many applications, particularly in forensics and biometrics.
Ihab Zaqout and Mones Al-Hanjori
Abstract
Purpose
The face recognition problem has a long history and significant practical importance; it is one of the practical applications of pattern recognition theory, automatically localizing the face in an image and, if necessary, identifying the person. Interest in the procedures underlying localization and individual recognition is considerable, given the variety of practical applications in areas such as security systems, verification, forensic expertise, teleconferencing and computer games. This paper aims to recognize facial images efficiently. An averaged-feature-based technique is proposed to reduce the dimensions of the multi-expression facial features. The classifier model is generated using a supervised learning algorithm, a back-propagation neural network (BPNN), implemented in MATLAB R2017. The recognition rate and accuracy of the proposed methodology are comparable with other methods, such as principal component analysis and linear discriminant analysis, on the same data sets. In total, 150 face subjects selected from the Olivetti Research Laboratory (ORL) data set yield a 95.6 per cent recognition rate and 85 per cent accuracy, and 165 face subjects from the Yale data set yield a 95.5 per cent recognition rate and 84.4 per cent accuracy.
Design/methodology/approach
An averaged-feature-based approach is used for dimension reduction, with a BPNN to generate the supervised classifier.
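The averaged-feature reduction amounts to collapsing each subject's multi-expression feature vectors into one mean vector before training. A minimal sketch with made-up three-dimensional features:

```python
def averaged_feature(expression_vectors):
    """Average a subject's feature vectors across expressions into one
    representative vector, reducing the per-subject data before training."""
    n = len(expression_vectors)
    dim = len(expression_vectors[0])
    return [sum(v[i] for v in expression_vectors) / n for i in range(dim)]

# hypothetical per-expression feature vectors for one subject
smile   = [0.2, 0.8, 0.4]
neutral = [0.4, 0.6, 0.2]
frown   = [0.6, 0.4, 0.6]
print(averaged_feature([smile, neutral, frown]))
```

The classifier then sees one vector per subject instead of one per expression, which is the dimension reduction the paper describes.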
Findings
The recognition rate is 95.6 per cent and recognition accuracy is 85 per cent for the ORL data set, whereas the recognition rate is 95.5 per cent and recognition accuracy is 84.4 per cent for the Yale data set.
Originality/value
Averaged-feature based method.
Yang Xin, Yi Liu, Zhi Liu, Xuemei Zhu, Lingshuang Kong, Dongmei Wei, Wei Jiang and Jun Chang
Abstract
Purpose
Biometric systems are widely used for face recognition. They have rapidly developed in recent years. Compared with other approaches, such as fingerprint recognition, handwriting verification and retinal and iris scanning, face recognition is more straightforward, user friendly and extensively used. The aforementioned approaches, including face recognition, are vulnerable to malicious attacks by impostors; in such cases, face liveness detection comes in handy to ensure both accuracy and robustness. Liveness is an important feature that reflects physiological signs and differentiates artificial from real biometric traits. This paper aims to provide a simple path for the future development of more robust and accurate liveness detection approaches.
Design/methodology/approach
This paper introduces the face biometric system, surveys liveness detection in face recognition systems and compares the different existing measures discussed in the literature.
Originality/value
This paper presents an overview, comparison and discussion of proposed face liveness detection methods to provide a reference for the future development of more robust and accurate liveness detection approaches.
Abstract
Purpose
To study the mathematical image coding approaches used in two types of biometric systems, and the physical nature of those biometrics.
Design/methodology/approach
Gives details of algorithms used to encode data from images in established and new automatic iris recognition systems. Then examines face recognition techniques based on geometry, texture and three‐dimensional data.
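Daugman-style iris systems compare two binary iris codes by their fractional Hamming distance. A minimal sketch (the eight-bit codes are made up; real iris codes run to thousands of bits):

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two binary iris codes, the match
    score used in Daugman-style systems: differing bits / total bits."""
    assert len(code_a) == len(code_b)
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 1]
print(hamming_distance(enrolled, probe))  # 0.25: two of eight bits differ
```

Two codes from the same iris typically differ in well under a third of their bits, while codes from different irises cluster near 0.5, which is what makes the distance so discriminative.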
Findings
Most commercial iris recognition systems are based on the algorithms developed by one man, John Daugman. Whilst iris systems can be used to check a person's identity against a large database of enrolled people, face recognition systems are currently only capable of use in one‐to‐one recognition mode, or in identification mode against a very small database. The iris is very distinctive and stable over time, but the face is much more variable and therefore difficult to identify with accuracy.
Originality/value
Provides the general scientific reader with some insight into the specialised field of biometric recognition.
Warot Moungsouy, Thanawat Tawanbunjerd, Nutcha Liamsomboon and Worapan Kusakunniran
Abstract
Purpose
This paper proposes a solution for recognizing human faces under mask-wearing. The lower part of the face is occluded and cannot be used in the learning process of face recognition, so the proposed solution is developed to recognize human faces from whichever facial components are available, which vary depending on whether or not a mask is worn.
Design/methodology/approach
The proposed solution is developed based on the FaceNet framework, aiming to modify the existing facial recognition model to improve performance in both mask-wearing and no-mask scenarios. Simulated masked-face images are computed on top of the original face images and used in the learning process of face recognition. In addition, feature heatmaps are drawn to visualize which parts of the facial images are most significant in recognizing faces under mask-wearing.
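The simulated-mask augmentation can be sketched crudely as overlaying a flat patch on the lower half of each face image. This toy version operates on a nested-list image with a constant fill value; the paper composites realistic mask images rather than a flat colour.

```python
def add_simulated_mask(face, mask_value=0.5):
    """Overlay a flat 'mask' on the lower half of a face image (list of rows),
    a simplified stand-in for the paper's simulated masked-face augmentation.
    Returns a new image; the original face is left untouched."""
    h = len(face)
    return [row[:] if r < h // 2 else [mask_value] * len(row)
            for r, row in enumerate(face)]

face = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
masked = add_simulated_mask(face)
print(masked)  # lower two rows replaced by the mask value
```

Training on both `face` and `masked` variants is what lets the model fall back on the upper facial components when the lower half is occluded.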
Findings
The proposed method is validated using several scenarios of experiments. The result shows an outstanding accuracy of 99.2% on a scenario of mask-wearing faces. The feature heatmaps also show that non-occluded components including eyes and nose become more significant for recognizing human faces, when compared with the lower part of human faces which could be occluded under masks.
Originality/value
The convolutional neural network-based solution is tuned for recognizing human faces under a mask-wearing scenario. Simulated masks on the original face images are augmented for training the face recognition model. Heatmaps are then computed to confirm that features generated from the top half of the face images are correctly emphasized in face recognition.