Search results

1 – 10 of over 1000
Article

Wang Zhao and Long Lu

Abstract

Purpose

Facial expression provides abundant information for social interaction, and the analysis and use of facial expression data are driving progress across many areas of society. Facial expression data can reflect people's mental state, and in health care their analysis and processing can help improve people's health. This paper introduces several important public facial expression databases and describes the process of facial expression recognition. The standard facial expression databases FER2013 and CK+ were used as the main training samples, supplemented by facial expression images collected from 16 Chinese children. With the help of the VGG19 and ResNet18 deep convolutional neural network models, this paper studies and develops an information system for diagnosing autism from facial expression data.

Design/methodology/approach

The training samples are drawn from the standard expression databases FER2013 and CK+, two widely used facial expression data sets suited to facial expression recognition research. On this basis, the paper applies the machine learning model support vector machine (SVM) and the deep convolutional neural network models CNN, VGG19 and ResNet18 to perform facial expression recognition.
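
As an illustration only (not the authors' code), the following sketch shows the kind of transfer-learning setup described above: an ImageNet-pretrained VGG19 or ResNet18 from torchvision with its classifier head replaced for the seven FER2013 expression classes. The hyperparameters and the dummy batch are placeholders.

```python
# Hedged sketch: fine-tuning VGG19 / ResNet18 for 7-class facial expression
# recognition. Not the paper's implementation; dataset loading is omitted.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # FER2013: angry, disgust, fear, happy, sad, surprise, neutral

def build_model(name: str = "resnet18") -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    if name == "vgg19":
        net = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, NUM_CLASSES)
    else:
        net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)
    return net

model = build_model("vgg19")
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (FER-style 48x48 grayscale
# images would be converted to 3 channels and resized to 224x224 beforehand).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```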

Findings

In this study, ten typically developing children and ten children with autism were recruited to test the accuracy of the information system and its diagnostic effect for autism. In testing, the facial expression recognition accuracy was 81.4 percent, and the system readily identified the autistic children, verifying the feasibility of recognizing autism through facial expression.

Research limitations/implications

The CK+ facial expression database contains some adult facial expression images. To improve the accuracy of facial expression recognition for children, more children's facial expression data will be collected as training samples, which should further improve the system's recognition rate.

Originality/value

This research combines facial expression data with up-to-date artificial intelligence technology, and its diagnostic accuracy for autism is higher than that of traditional systems, so the study is innovative. The research topic comes from the actual needs of doctors, and the content and methods were discussed with doctors many times. The system can diagnose autism as early as possible, promoting early treatment and rehabilitation and reducing the economic and mental burden on patients. The information system therefore has good social benefit and application value.

Details

Library Hi Tech, vol. 38 no. 4
Type: Research Article
ISSN: 0737-8831

Keywords

Article

Wenjuan Shen and Xiaoling Li

Abstract

Purpose

In recent years, facial expression recognition has been widely used in human-machine interaction, clinical medicine and safe driving. However, conventional recurrent neural networks can only learn the time-series characteristics of expressions from one-way propagation of information.

Design/methodology/approach

To address this limitation, this paper proposes a novel model based on bidirectional gated recurrent unit networks (Bi-GRUs) with two-way propagation, and adopts identity-mapping residuals to prevent the vanishing-gradient problem caused by the depth of the network. Because the Inception-V3 model used for spatial feature extraction has too many parameters and is prone to overfitting during training, two reduction modules are added to reduce the parameter count, yielding an Inception-W network with better generalization.
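
A minimal sketch, not the paper's implementation, of the two ideas named above: a bidirectional GRU over per-frame spatial features and an identity-mapping residual connection around it. The feature dimension, hidden size and classifier head are illustrative placeholders, not the Inception-W configuration.

```python
# Hedged sketch: residual Bi-GRU over a sequence of per-frame CNN features.
import torch
import torch.nn as nn

class ResidualBiGRU(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 256, num_classes: int = 6):
        super().__init__()
        # Bidirectional GRU produces hidden*2 features per step; project back to
        # feat_dim so the identity shortcut can be added without reshaping.
        self.bigru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden * 2, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) spatial features for each video frame.
        out, _ = self.bigru(x)
        out = self.proj(out) + x                  # identity-mapping residual
        return self.classifier(out.mean(dim=1))   # temporal average pooling

# Usage on a dummy clip of 16 frames:
clip_features = torch.randn(4, 16, 512)
logits = ResidualBiGRU()(clip_features)   # (4, 6) expression scores
```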

Findings

The proposed model is first pretrained to determine the best settings and selections. The pretrained model is then evaluated on two facial expression data sets, CK+ and Oulu-CASIA, and its recognition performance and efficiency are compared with existing methods. The highest recognition rate is 99.6%, showing that the method achieves good recognition accuracy within a certain range.

Originality/value

Applying the proposed model to facial expression tasks, its high recognition accuracy, robust results and lower time consumption will help to build more sophisticated real-world applications.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article

Kuan Cheng Lin, Tien‐Chi Huang, Jason C. Hung, Neil Y. Yen and Szu Ju Chen

Abstract

Purpose

This study aims to introduce an affective computing‐based method of identifying student understanding throughout a distance learning course.

Design/methodology/approach

The study proposes a learning emotion recognition model with three phases: feature extraction and generation, feature subset selection, and emotion recognition. Features are extracted from facial images by transforming a given measurement of facial expressions into a new set of features defined and computed from eigenvectors. Feature subset selection uses an immune memory clone algorithm to optimize the selection, and emotion recognition uses a classifier to build the connection between facial expression and learning emotion.
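
A minimal sketch, not the authors' implementation, of the eigenvector-based feature transform (here PCA) followed by a generic classifier. The immune-memory-clone feature selection stage is only indicated by a comment, and the data are random placeholders rather than JAFFE images.

```python
# Hedged sketch: eigenvector (PCA) features plus a classifier for emotion labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Dummy data: 200 flattened 64x64 face crops with 7 emotion labels.
X = np.random.rand(200, 64 * 64)
y = np.random.randint(0, 7, size=200)

# PCA plays the role of the eigenvector-based transform; a feature subset
# selection stage (the paper's immune memory clone algorithm) would sit
# between the PCA output and the classifier.
model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:5]))
```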

Findings

Experimental results on JAFFE, a basic-expression database widely used in facial expression recognition research, show that the proposed method achieves high classification performance. The experiments also show that recognition of spontaneous facial expressions is effective in synchronous distance learning courses.

Originality/value

The study shows that identifying student comprehension through facial expression recognition in synchronous distance learning courses is feasible. This can help instructors gauge student comprehension in real time, so they can adapt their teaching materials and strategies to students' learning status.

Article

Fowei Wang, Bo Shen, Shaoyuan Sun and Zidong Wang

Abstract

Purpose

The purpose of this paper is to improve the accuracy of facial expression recognition by using a genetic algorithm (GA) with an appropriate fitness evaluation function and a Pareto optimization model with two new objective functions.

Design/methodology/approach

To achieve facial expression recognition with high accuracy, the Haar-like feature representation approach and a bilateral filter are first used to preprocess the facial image. Second, uniform local Gabor binary patterns are used to extract facial features and reduce the feature dimension. Third, an improved GA and a Pareto optimization approach are used to select the optimal significant features. Fourth, a random forest classifier is chosen to perform the classification. Comparative experiments are then carried out. Finally, conclusions are drawn and some future research topics are pointed out.
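
A minimal sketch, not the paper's algorithm, of GA-style feature selection with a random forest as the fitness evaluator. The feature vectors stand in for uniform local Gabor binary pattern descriptors, and the Pareto optimization stage with its two objectives is omitted.

```python
# Hedged sketch: a tiny genetic algorithm that evolves binary feature masks,
# scored by cross-validated random forest accuracy. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 120))           # 300 samples, 120 candidate features
y = rng.integers(0, 7, size=300)     # 7 expression classes

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy of a random forest on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Evolve a small population: truncation selection, uniform crossover, bit flips.
pop = rng.integers(0, 2, size=(10, X.shape[1]))
for _ in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-4:]]                     # keep the best 4 masks
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 4, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # uniform crossover
        flip = rng.random(X.shape[1]) < 0.02                   # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()))
```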

Findings

The experimental results show that the proposed facial expression recognition algorithm outperforms those in the existing literature in terms of both accuracy and computational time.

Originality/value

The GA and the Pareto optimization algorithm are combined to select the optimal significant features. To improve the accuracy of facial expression recognition, the GA is improved with an appropriate fitness evaluation function, and a new Pareto optimization model is proposed with two objective functions that minimize within-class variation and maximize between-class variation.

Details

Assembly Automation, vol. 36 no. 2
Type: Research Article
ISSN: 0144-5154

Keywords

Article

Rosa Angela Fabio, Sonia Esposito, Cristina Carrozza, Gaetana Pino and Tindara Caprì

Abstract

Purpose

Various studies have examined the role of executive functions in autism, but there is a lack of research in the current literature on cognitive flexibility in autism spectrum disorders (ASD). The purpose of this study is to investigate whether cognitive flexibility deficits could be related to facial emotion recognition deficits in ASD.

Design/methodology/approach

In total, 20 children with ASD and 20 typically developing children, matched for intelligence quotient and gender, were examined both in facial emotion recognition tasks and in cognitive flexibility tasks through the dimensional change card sorting task.

Findings

Although cognitive flexibility is not considered a core deficit in ASD, impaired cognitive flexibility was evident in the present research. The results show that cognitive flexibility is related to facial emotion recognition and support the hypothesis of a specific executive deficit in children with autism.

Research limitations/implications

One limitation is the use of a single cognitive test to measure cognitive flexibility and facial recognition, which should be taken into account in future research. Increasing the number of common variables assessing cognitive flexibility will allow better comparison between studies and help characterize the impairment of cognitive flexibility in ASD.

Practical implications

Investigating impairment in cognitive flexibility may help in planning training interventions based on the induction of flexibility.

Social implications

Training cognitive flexibility in people with ASD may also affect their social behavior and help them overcome the typical repetitive behaviors that are the hallmark of ASD.

Originality/value

The originality of this study lies in relating cognitive flexibility deficits to facial emotion recognition.

Details

Advances in Autism, vol. 6 no. 3
Type: Research Article
ISSN: 2056-3868

Keywords

Article

Ziaul Haque Choudhury and M. Munir Ahamed Rabbani

Abstract

Purpose

Nowadays, the use of forged e-passports is increasing, threatening national security, so it is important to strengthen protection against international crime and terrorism. The verification process is weak where identification steps such as physical, biometric and electronic checks are lacking. The e-passport can prevent passport cloning or forging associated with illegal immigration. The paper aims to discuss these issues.

Design/methodology/approach

This paper focuses on face recognition to improve biometric authentication for the e-passport, and introduces detection of permanent facial marks on faces with makeup or cosmetics applied, as well as on twins and similar faces. An algorithm is proposed to detect permanent facial marks, such as moles, freckles, birthmarks and pockmarks, on cosmetic-applied faces. An Active Shape Model combined with an Active Appearance Model using Principal Component Analysis is applied to detect the facial landmarks, and the permanent marks are detected by applying the Canny edge detector and the Gradient Field Histogram of Oriented Gradients.
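
A minimal sketch, not the paper's detector, of the two low-level operators named above, a Canny edge map and HOG descriptors, computed with scikit-image on a placeholder image. The ASM/AAM landmark stage is assumed to have already produced a face crop.

```python
# Hedged sketch: Canny edges and HOG features on a (placeholder) face crop.
from skimage import color, data, feature, transform

face = color.rgb2gray(data.astronaut())      # placeholder image, not a real crop
face = transform.resize(face, (128, 128))

edges = feature.canny(face, sigma=2.0)       # Canny edge map
hog_vec = feature.hog(face, orientations=9,  # HOG descriptor over the crop
                      pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2))

# A candidate permanent-mark region could then be described by the HOG cells
# surrounding a cluster of edge responses.
print(edges.shape, hog_vec.shape)
```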

Findings

This paper demonstrates an algorithm for detecting facial marks on faces with cosmetics or makeup applied, for a secure biometric passport in the field of personal identification for national security, and it also shows how to detect and identify identical twins and similar faces. The proposed facial mark detection can be combined with traditional methods, although the use of cosmetics posed some challenges. Combining the facial mark recognition algorithm with classical matching methods attained lower errors in the reported experiments.

Originality/value

The proposed method will enhance national security and improve biometric authentication for the e-passport. The proposed algorithm identifies facial marks on cosmetic-applied faces accurately, with fewer false positives, and the technique shows the best results.

Details

International Journal of Intelligent Unmanned Systems, vol. 8 no. 1
Type: Research Article
ISSN: 2049-6427

Keywords

Book part

Kerstin Limbrecht-Ecklundt, Holger Hoffmann, Steffen Walter, Sascha Gruss, David Hrabal and Harald C. Traue

Abstract

Emotion recognition and emotion expression/regulation are important aspects of emotional intelligence (EI). Although the construct of EI is widely used and its components are part of many investigations, there is still no sufficient picture set that can be used for systematic research on facial emotion recognition and for practical applications in individual assessment. In this research we present a new Facial Action Coding System-validated picture set consisting of six emotions (anger, disgust, fear, happiness, sadness, and surprise). Basic principles of stimulus development and the evaluation process are described. The PFA-U can be used in future studies in organizations for the assessment of emotion recognition, emotion stimulation, and emotion management.

Details

Individual Sources, Dynamics, and Expressions of Emotion
Type: Book
ISBN: 978-1-78190-889-1

Keywords

Article

Min Hao, Guangyuan Liu, Desheng Xie, Ming Ye and Jing Cai

Abstract

Purpose

Happiness is an important mental emotion, yet it is becoming a major health concern. For this reason, a better objective understanding of how humans respond to event-related observations in their daily lives is especially important.

Design/methodology/approach

This paper uses a non-intrusive technology, hyperspectral imaging (HSI), for happiness recognition. An experimental setup is used for data collection in real-life environments, where observers show spontaneous expressions of emotion (calm, happy, unhappy: angry) during the experiment. From the facial images captured by HSI, this work builds an emotional database, the SWU Happiness DB, and studies whether the physiological signal, tissue oxygen saturation (StO2) obtained from an optical absorption model, can be used to recognize observer happiness automatically. A novel method is proposed to capture local dynamic patterns (LDP) in facial regions, using local variations in facial StO2 to fully exploit the physiological characteristics of the hyperspectral patterns. A linear discriminant analysis-based support vector machine is then applied to recognize happiness patterns.
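
A minimal sketch, assuming precomputed LDP feature vectors derived from the facial StO2 maps, of a linear discriminant analysis-based SVM stage. The data shapes and labels are placeholders, not the SWU Happiness DB.

```python
# Hedged sketch: LDA projection followed by a linear SVM on LDP-style features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.rand(150, 64)              # placeholder LDP feature vectors
y = np.random.randint(0, 3, size=150)    # calm / happy / unhappy labels

# LDA projects onto at most (n_classes - 1) discriminant directions before the SVM.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.score(X, y))
```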

Findings

The results show that the best classification accuracy is 97.89 per cent, objectively demonstrating the feasibility of applying LDP features to happiness recognition.

Originality/value

This paper proposes a novel feature, LDP, to represent local variations in facial StO2 for modeling active happiness, providing a possible extension toward promising practical applications.

Details

Engineering Computations, vol. 37 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article

Eunhwa Jung and Kyungho Hong

Abstract

Purpose

This study aims at biometric verification based on facial profile images for mobile security. Modern mobile Internet devices and smartphones, such as the iPhone and Galaxy series, reflect the development of input and output devices with high-definition multimedia interfaces. This development calls for novel biometric verification for personal identification and authentication in mobile security, especially in Internet banking and mobile Internet access.

Design/methodology/approach

Cellphones with built-in cameras give us the opportunity to perform biometric verification, recognizing faces, fingerprints and other biological features without any special devices. Our study focuses on recognizing left and right facial profile images as well as front facial images, captured by smartphones such as the iPhone 4 and Galaxy S2, as a biometric verification for personal identification and authentication in mobile security.

Findings

As the recognition technique of facial profile images for biometric verification in mobile security is very simple, relatively easy to use and inexpensive, it can easily be applied to personal mobile phone identification and authentication instead of passwords, keys or other methods. The biometric system can also be used as one of multiple verification techniques for personal recognition in a multimodal biometric system. Our experimental data are taken from persons of all ages, ranging from children to senior citizens.

Originality/value

As the recognition technique of the facial profile images for a biometric verification in mobile security is very simple, relatively easy to use and inexpensive, it can be easily applied to personal mobile phone identification and authentication instead of passwords, keys or other methods. The biometric system can also be used as one of multiple verification techniques for personal recognition in a multimodal biometric system. Our experimental data are taken from persons of all ages, ranging from children to senior citizens.

Details

Journal of Systems and Information Technology, vol. 17 no. 1
Type: Research Article
ISSN: 1328-7265

Keywords

Book part

Matteo Sorci, Thomas Robin, Javier Cruz, Michel Bierlaire, J.-P. Thiran and Gianluca Antonini

Abstract

Facial expression recognition by human observers is affected by subjective components; indeed, there is no ground truth. We have developed Discrete Choice Models (DCM) to capture human perception of facial expressions. In a first step, the static case is treated, that is, modelling the perception of facial images. Image information is extracted using a computer vision tool called the Active Appearance Model (AAM). DCM attributes are based on the Facial Action Coding System (FACS), Expression Descriptive Units (EDUs) and the outputs of the AAM. Behavioural data have been collected using an Internet survey in which respondents are asked to label facial images from the Cohn–Kanade database with expressions, and different models were estimated by likelihood maximization on the obtained data. In a second step, the proposed static discrete choice framework is extended to the dynamic case, which considers facial videos instead of images. The model theory is described, and another Internet survey is currently being conducted to obtain expression labels for videos; in this second survey, the videos come from the Cohn–Kanade database and the Facial Expressions and Emotions Database (FEED).
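
A minimal sketch, not the authors' specification, of the simplest discrete choice model for this task, a multinomial logit: each image is described by an attribute vector (FACS/EDU/AAM-based in the chapter; random placeholders here) and the chosen expression label is modelled by maximum-likelihood estimation, here via scikit-learn's multinomial logistic regression rather than a dedicated DCM package.

```python
# Hedged sketch: multinomial-logit-style choice model over image attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(500, 12)              # placeholder image attribute vectors
y = np.random.randint(0, 7, size=500)    # expression label chosen by respondents

# lbfgs fits a multinomial logistic model by (regularized) likelihood maximization.
dcm = LogisticRegression(solver="lbfgs", max_iter=1000)
dcm.fit(X, y)
print(dcm.predict_proba(X[:3]))          # choice probabilities per expression
```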

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

1 – 10 of over 1000