Search results

1 – 10 of over 12000
Article
Publication date: 4 April 2016

Fowei Wang, Bo Shen, Shaoyuan Sun and Zidong Wang

The purpose of this paper is to improve the accuracy of facial expression recognition by using a genetic algorithm (GA) with an appropriate fitness evaluation function and…

Abstract

Purpose

The purpose of this paper is to improve the accuracy of facial expression recognition by using a genetic algorithm (GA) with an appropriate fitness evaluation function and a Pareto optimization model with two new objective functions.

Design/methodology/approach

To achieve facial expression recognition with high accuracy, the Haar-like feature representation approach and the bilateral filter are first used to preprocess the facial image. Second, uniform local Gabor binary patterns are used to extract the facial features so as to reduce the feature dimension. Third, an improved GA and a Pareto optimization approach are used to select the optimal significant features. Fourth, the random forest classifier is chosen to perform the feature classification. Subsequently, some comparative experiments are implemented. Finally, the conclusion is drawn and some future research topics are pointed out.
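
The abstract does not spell out the GA operators or the exact Pareto handling; purely as an illustrative sketch (the synthetic data, parameters and single-ratio fitness below are assumptions), a simple GA can select a feature subset by scoring the two stated objectives, within-class variation to minimize and between-class variation to maximize:

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(mask, X, y):
    """Within-class and between-class variation of the selected features."""
    Xs = X[:, mask.astype(bool)]
    mu = Xs.mean(axis=0)
    within = between = 0.0
    for c in np.unique(y):
        Xc = Xs[y == c]
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum()
        between += len(Xc) * ((Xc.mean(axis=0) - mu) ** 2).sum()
    return within, between

def fitness(mask, X, y):
    """Single-ratio stand-in for the two Pareto objectives."""
    w, b = objectives(mask, X, y)
    return b / (w + 1e-9)

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    """Genetic algorithm over binary feature masks with elitist selection."""
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) if m.any() else -np.inf
                           for m in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < p_mut] ^= 1          # bit-flip mutation
            children.append(child)
        population = np.vstack([parents] + children)
    best = max(population, key=lambda m: fitness(m, X, y) if m.any() else -np.inf)
    return best.astype(bool)

X = rng.normal(size=(60, 8))
y = np.repeat([0, 1, 2], 20)
X[:, 0] += 3 * y                                       # feature 0 carries the classes
mask = ga_select(X, y)
```

The sketch collapses the two objectives into one ratio for brevity; a faithful Pareto treatment would instead maintain a non-dominated front over the (within, between) pairs.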

Findings

The experiment results show that the proposed facial expression recognition algorithm outperforms those in the existing literature in terms of both accuracy and computational time.

Originality/value

The GA and the Pareto optimization algorithm are combined to select the optimal significant features. To improve the accuracy of facial expression recognition, the GA is improved by adjusting an appropriate fitness evaluation function, and a new Pareto optimization model is proposed that contains two objective functions reflecting the achievements in minimizing within-class variations and maximizing between-class variations.

Details

Assembly Automation, vol. 36 no. 2
Type: Research Article
ISSN: 0144-5154

Keywords

Book part
Publication date: 13 June 2013

Li Xiao, Hye-jin Kim and Min Ding

Purpose – The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as marketing…

Abstract

Purpose – The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as marketing scholars become more aware of the value of such data and of the technologies required to reveal insights into marketing problems. This chapter aims to introduce marketing scholars to this field of research.

Design/methodology/approach – This chapter reviews the current technology in audio and visual data analysis and discusses rewarding research opportunities in marketing using these data.

Findings – Compared with traditional data such as survey and scanner data, audio and visual data provide richer information and are easier to collect. Given this superiority, together with data availability, feasibility of storage and increasing computational power, we believe that these data will contribute to better marketing practices with the help of marketing scholars in the near future.

Practical implications – The adoption of audio and visual data in marketing practice will help practitioners gain better insights into marketing problems and thus make better decisions.

Value/originality – This chapter makes a first attempt in the marketing literature to review the current technology in audio and visual data analysis and proposes promising applications of such technology. We hope it will inspire scholars to utilize audio and visual data in marketing research.

Details

Review of Marketing Research
Type: Book
ISBN: 978-1-78190-761-0

Keywords

Article
Publication date: 27 August 2019

Min Hao, Guangyuan Liu, Desheng Xie, Ming Ye and Jing Cai

Happiness is an important emotion, and emotional well-being is becoming a major health concern nowadays. For this reason, a better objective understanding of how humans respond to…

Abstract

Purpose

Happiness is an important emotion, and emotional well-being is becoming a major health concern nowadays. For this reason, a better objective understanding of how humans respond to event-related observations in their daily lives is especially important.

Design/methodology/approach

This paper uses a non-intrusive technology, hyperspectral imaging (HSI), for happiness recognition. An experimental setup is designed for data collection in real-life environments, where observers show spontaneous expressions of emotions (calm, happy, unhappy: angry) during the experimental process. Based on facial imaging captured with HSI, this work builds an emotional database, SWU Happiness DB, and studies whether the physiological signal (i.e. tissue oxygen saturation [StO2], obtained from an optical absorption model) can be used to recognize observer happiness automatically. It proposes a novel method that captures local dynamic patterns (LDP) in facial regions, encoding local variations in facial StO2 to make full use of the physiological characteristics of the hyperspectral patterns. Further, it applies a linear discriminant analysis-based support vector machine to recognize happiness patterns.
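
The LDP descriptor is the paper's contribution and is not specified in this abstract; as a loose, hypothetical analogy only, an LBP-style 8-neighbour coding over a simulated StO2 map can be sketched as follows (the map size and value range are made up):

```python
import numpy as np

def local_pattern_codes(sto2):
    """8-neighbour binary codes over a 2-D StO2 map (LBP-style; the paper's
    LDP descriptor additionally encodes temporal dynamics, not shown here)."""
    h, w = sto2.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = sto2[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = sto2[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def pattern_histogram(codes, bins=256):
    """Normalized histogram of pattern codes, usable as a region feature."""
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
sto2 = rng.uniform(0.5, 0.9, size=(16, 16))   # simulated tissue-oxygen map
codes = local_pattern_codes(sto2)
hist = pattern_histogram(codes)
```

Histograms of such codes, pooled over facial regions, are the kind of feature a linear discriminant analysis-based SVM could then classify.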

Findings

The results show that the best classification accuracy is 97.89 per cent, objectively demonstrating a feasible application of LDP features on happiness recognition.

Originality/value

This paper proposes a novel feature (i.e. LDP) to represent the local variations in facial StO2 for modeling the active happiness. It provides a possible extension to the promising practical application.

Details

Engineering Computations, vol. 37 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 March 1982

L. Vanderheydt, P. Vuylsteke, P. Jansen, A. Oosterlinck and H. Van den Berghe

This paper is part II of an overview of the work of the Pattern and Image Processing group at Leuven University, presenting some of the industrial applications.

Abstract

This paper is part II of an overview of the work of the Pattern and Image Processing group at Leuven University, presenting some of the industrial applications.

Details

Sensor Review, vol. 2 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 19 April 2023

Tarek Sallam

The purpose of this paper is to present a deep-learning-based beamforming method for phased array weather radars, especially those whose antenna arrays are equipped with a large number of…

Abstract

Purpose

The purpose of this paper is to present a deep-learning-based beamforming method for phased array weather radars, especially those whose antenna arrays are equipped with a large number of elements, for fast and accurate detection of weather observations.

Design/methodology/approach

The beamforming weights are computed by a convolutional neural network (CNN), which is trained with input–output pairs obtained from the Wiener solution.
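
Neither the network architecture nor the training set is reproduced here; as a sketch of where such input–output pairs could come from, the Wiener (MMSE) weights for a simulated uniform linear array can be computed directly (the array geometry, angles and signal powers below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def steering_vector(n, theta, spacing=0.5):
    """Uniform linear array steering vector; spacing in wavelengths."""
    k = np.arange(n)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

# Simulated snapshots: desired signal at broadside plus one interferer and noise
n, snapshots = 16, 500
a0 = steering_vector(n, 0.0)                 # desired direction
ai = steering_vector(n, np.deg2rad(40))      # interference direction
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
i = 3 * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
noise = 0.1 * (rng.normal(size=(n, snapshots))
               + 1j * rng.normal(size=(n, snapshots)))
X = np.outer(a0, s) + np.outer(ai, i) + noise

R = X @ X.conj().T / snapshots               # sample covariance matrix
p = X @ np.conj(s) / snapshots               # cross-correlation with desired signal
w_wiener = np.linalg.solve(R, p)             # Wiener (MMSE) solution
w_fourier = a0 / n                           # conventional (Fourier) baseline

gain_desired = abs(w_wiener.conj() @ a0)
gain_interference = abs(w_wiener.conj() @ ai)
```

Each (snapshot statistics, Wiener weights) pair of this kind is the sort of input–output example such a CNN could be trained on; the Wiener weights pass the desired direction while suppressing the interferer.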

Findings

To validate the robustness of the CNN-based beamformer, it is compared with the traditional beamforming methods, namely, Fourier (FR) beamforming and Capon beamforming. Moreover, the CNN is compared with a radial basis function neural network (RBFNN), a shallow type of neural network. It is shown that the CNN method has excellent performance in radar signal simulations compared to the other methods. In addition to simulations, the robustness of the CNN beamformer is further validated using real weather data collected by the phased array radar at Osaka University (PAR@OU) and compared with, besides the FR and RBFNN methods, the minimum mean square error beamforming method. It is shown that the CNN can rapidly and accurately detect the reflectivity of the PAR@OU with an even lower clutter level than the other methods.

Originality/value

Motivated by the inherent advantages of the CNN, this paper proposes the development of a CNN-based approach to the beamforming of PAR using both simulated and real data. In this paper, the CNN is trained on the optimum weights of the Wiener solution. In simulations, it is applied to a large 32 × 32 planar phased array antenna. Moreover, it is operated on real data collected by the PAR@OU.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 42 no. 6
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 7 June 2013

Kuan Cheng Lin, Tien‐Chi Huang, Jason C. Hung, Neil Y. Yen and Szu Ju Chen

This study aims to introduce an affective computing‐based method of identifying student understanding throughout a distance learning course.

Abstract

Purpose

This study aims to introduce an affective computing‐based method of identifying student understanding throughout a distance learning course.

Design/methodology/approach

The study proposed a learning emotion recognition model comprising three phases: feature extraction and generation, feature subset selection and emotion recognition. Features are extracted from facial images, transforming a given measurement of facial expressions into a new set of features defined and computed by eigenvectors. Feature subset selection uses the immune memory clone algorithm to optimize the feature selection. Emotion recognition uses a classifier to build the connection between facial expression and learning emotion.
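
The eigenvector-based feature generation resembles a PCA/eigenface projection; the following is a minimal sketch of that generic idea, not the authors' exact procedure (the synthetic data are an assumption):

```python
import numpy as np

def pca_features(X, k):
    """Project samples onto the top-k eigenvectors of the covariance matrix,
    yielding a new, lower-dimensional feature set."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]        # k largest-variance directions
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))   # stand-in for flattened facial-image measurements
Z = pca_features(X, 5)
```

A feature-subset selector (here, the paper's immune memory clone algorithm) would then search over these projected features before classification.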

Findings

Experimental results on JAFFE, a basic-expression database widely used in facial expression recognition research, show that the proposed facial expression recognition method has high classification performance. The experimental results also show that the recognition of spontaneous facial expressions is effective in synchronous distance learning courses.

Originality/value

The study shows that identifying student comprehension based on facial expression recognition in synchronous distance learning courses is feasible. This can help instructors understand student comprehension in real time, so that they can adapt their teaching materials and strategies to fit the learning status of students.

Article
Publication date: 6 November 2020

Wenjuan Shen and Xiaoling Li

In recent years, facial expression recognition has been widely used in human–machine interaction, clinical medicine and safe driving. However, there is a limitation that conventional…

Abstract

Purpose

In recent years, facial expression recognition has been widely used in human–machine interaction, clinical medicine and safe driving. However, there is a limitation: conventional recurrent neural networks can only learn the time-series characteristics of expressions based on one-way propagation of information.

Design/methodology/approach

To overcome this limitation, this paper proposes a novel model based on bidirectional gated recurrent unit networks (Bi-GRUs) with two-way propagation, and the theory of identity-mapping residuals is adopted to effectively prevent the vanishing-gradient problem caused by the depth of the introduced network. Since the Inception-V3 network model used for spatial feature extraction has too many parameters, it is prone to overfitting during training. This paper therefore adds two reduction modules to reduce the parameter count, obtaining an Inception-W network with better generalization.
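
As a rough illustration of the two ingredients named here, a forward-plus-backward GRU pass and an identity-mapping residual block can be sketched with randomly initialized weights (the dimensions and initialization are assumptions; this is not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1 / (1 + np.exp(-v))

def gru_cell(x, h, W, U, b):
    """One GRU step; W, U, b stack the update (z), reset (r) and candidate gates."""
    z = sigmoid(x @ W[0] + h @ U[0] + b[0])
    r = sigmoid(x @ W[1] + h @ U[1] + b[1])
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])
    return (1 - z) * n + z * h

def bi_gru(seq, params_fwd, params_bwd, hidden):
    """Run one GRU forward and one backward over the sequence (two-way
    propagation) and concatenate the final hidden states."""
    hf = np.zeros(hidden)
    hb = np.zeros(hidden)
    for x in seq:
        hf = gru_cell(x, hf, *params_fwd)
    for x in seq[::-1]:
        hb = gru_cell(x, hb, *params_bwd)
    return np.concatenate([hf, hb])

def residual_block(x, W):
    """Identity-mapping residual: the input is added back to the transform,
    keeping a direct gradient path through deep stacks."""
    return x + np.tanh(x @ W)

def make_params(d_in, hidden):
    return (rng.normal(scale=0.1, size=(3, d_in, hidden)),
            rng.normal(scale=0.1, size=(3, hidden, hidden)),
            np.zeros((3, hidden)))

d_in, hidden, steps = 8, 16, 10
seq = rng.normal(size=(steps, d_in))      # stand-in for per-frame spatial features
feat = bi_gru(seq, make_params(d_in, hidden), make_params(d_in, hidden), hidden)
out = residual_block(feat, rng.normal(scale=0.1, size=(2 * hidden, 2 * hidden)))
```

In the paper's pipeline, the per-frame inputs would be spatial features from the Inception-W network rather than random vectors.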

Findings

Finally, the proposed model is pretrained to determine the best settings and selections. Then, the pretrained model is evaluated on two facial expression data sets, CK+ and Oulu-CASIA, and the recognition performance and efficiency are compared with existing methods. The highest recognition rate is 99.6%, which shows that the method achieves good recognition accuracy within a certain range.

Originality/value

By using the proposed model in facial expression applications, the high recognition accuracy and robust recognition results, obtained with lower time consumption, will help build more sophisticated applications in the real world.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 25 March 2020

Wang Zhao and Long Lu

Facial expression provides abundant information for social interaction, and the analysis and utilization of facial expression data are playing a huge driving role in all areas of…

Abstract

Purpose

Facial expression provides abundant information for social interaction, and the analysis and utilization of facial expression data are playing a huge driving role in all areas of society. Facial expression data can reflect people's mental state. In health care, the analysis and processing of facial expression data can promote the improvement of people's health. This paper introduces several important public facial expression databases and describes the process of facial expression recognition. The standard facial expression databases FER2013 and CK+ were used as the main training samples. At the same time, facial expression image data of 16 Chinese children were collected as supplementary samples. With the help of the VGG19 and Resnet18 deep convolutional neural network models, this paper studies and develops an information system for the diagnosis of autism from facial expression data.

Design/methodology/approach

The facial expression data of the training samples are based on the standard expression databases FER2013 and CK+. FER2013 and CK+ are common facial expression data sets suitable for facial expression recognition research. On the basis of these databases, this paper uses the machine learning model support vector machine (SVM) and the deep convolutional neural network models CNN, VGG19 and Resnet18 to perform the facial expression recognition.
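
Of the models listed, only the linear SVM is small enough to sketch here; the following Pegasos-style subgradient training on synthetic two-class data is a generic illustration, not the paper's configuration:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Pegasos-style stochastic subgradient descent for a linear SVM (labels ±1)."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for idx in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                 # standard Pegasos step size
            if y[idx] * (X[idx] @ w) < 1:         # margin violated: hinge gradient
                w = (1 - eta * lam) * w + eta * y[idx] * X[idx]
            else:                                  # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, size=(50, 2)),
               rng.normal(+1, 1, size=(50, 2))])   # two synthetic classes
y = np.array([-1] * 50 + [+1] * 50)
w = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

For the multi-class expression labels of FER2013/CK+, one-vs-rest copies of such a binary SVM (or the deep models named above) would be required.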

Findings

In this study, ten normal children and ten autistic patients were recruited to test the accuracy of the information system and the diagnostic effect of autism. After testing, the accuracy rate of facial expression recognition is 81.4 percent. This information system can easily identify autistic children. The feasibility of recognizing autism through facial expression is verified.

Research limitations/implications

The CK+ facial expression database contains some adult facial expression images. In order to improve the accuracy of facial expression recognition for children, more facial expression data of children will be collected as training samples. Therefore, the recognition rate of the information system will be further improved.

Originality/value

This research applies facial expression data and state-of-the-art artificial intelligence technology. The diagnostic accuracy for autism is higher than that of traditional systems, so this study is innovative. The research topics come from the actual needs of doctors, and the contents and methods of the research have been discussed with doctors many times. The system can diagnose autism as early as possible, promoting early treatment and rehabilitation of patients and thereby reducing their economic and mental burden. Therefore, this information system has good social benefits and application value.

Details

Library Hi Tech, vol. 38 no. 4
Type: Research Article
ISSN: 0737-8831

Keywords

Book part
Publication date: 15 January 2010

Matteo Sorci, Thomas Robin, Javier Cruz, Michel Bierlaire, J.-P. Thiran and Gianluca Antonini

Facial expression recognition by human observers is affected by subjective components. Indeed there is no ground truth. We have developed Discrete Choice Models (DCM) to capture…

Abstract

Facial expression recognition by human observers is affected by subjective components. Indeed, there is no ground truth. We have developed Discrete Choice Models (DCM) to capture the human perception of facial expressions. In a first step, the static case is treated, that is, modelling the perception of facial images. Image information is extracted using a computer vision tool called the Active Appearance Model (AAM). The DCM attributes are based on the Facial Action Coding System (FACS), Expression Descriptive Units (EDUs) and outputs of the AAM. Behavioural data have been collected using an Internet survey, in which respondents are asked to label facial images from the Cohn–Kanade database with expressions. Different models were estimated by likelihood maximization using the obtained data. In a second step, the proposed static discrete choice framework is extended to the dynamic case, which considers facial videos instead of images. The model theory is described, and another Internet survey is currently being conducted in order to obtain expression labels on videos. In this second Internet survey, videos come from the Cohn–Kanade database and the Facial Expressions and Emotions Database (FEED).
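
The core of any such discrete choice model is the multinomial logit mapping from systematic utilities to label-choice probabilities; a minimal sketch (the utility values below are hypothetical):

```python
import numpy as np

def logit_probabilities(V):
    """Multinomial logit choice probabilities: P(i) = exp(V_i) / sum_j exp(V_j)."""
    e = np.exp(V - V.max())           # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical systematic utilities for three candidate expression labels,
# e.g. linear functions of AAM/FACS-based attributes
V = np.array([1.2, 0.3, -0.5])
P = logit_probabilities(V)
log_likelihood = np.log(P[0])         # one respondent's contribution if label 0 chosen
```

Maximum-likelihood estimation, as described in the abstract, sums such log-probability terms over all labelled observations and optimizes the utility coefficients.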

Details

Choice Modelling: The State-of-the-art and The State-of-practice
Type: Book
ISBN: 978-1-84950-773-8

Article
Publication date: 6 September 2018

Ihab Zaqout and Mones Al-Hanjori

The face recognition problem has a long history and a significant practical perspective and one of the practical applications of the theory of pattern recognition, to…

Abstract

Purpose

The face recognition problem has a long history and significant practical importance as one of the practical applications of pattern recognition theory: automatically localizing the face in an image and, if necessary, identifying the person. Interest in the procedures underlying localization and individual recognition is considerable, given the variety of their practical applications in areas such as security systems, verification, forensic expertise, teleconferencing and computer games. This paper aims to recognize facial images efficiently. An averaged-feature-based technique is proposed to reduce the dimensions of the multi-expression facial features. The classifier model is generated using a supervised learning algorithm, a back-propagation neural network (BPNN), implemented in MatLab R2017. The recognition rate and accuracy of the proposed methodology are comparable with those of other methods, such as principal component analysis and linear discriminant analysis, on the same data sets. In total, 150 face subjects are selected from the Olivetti Research Laboratory (ORL) data set, yielding a 95.6 per cent recognition rate and 85 per cent accuracy, and 165 face subjects from the Yale data set, yielding a 95.5 per cent recognition rate and 84.4 per cent accuracy.

Design/methodology/approach

An averaged-feature-based approach (dimension reduction) and a BPNN (supervised classifier generation).
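
As a generic illustration of the BPNN component (not the authors' MatLab implementation), a minimal one-hidden-layer back-propagation network can be trained on XOR:

```python
import numpy as np

rng = np.random.default_rng(0)

class BPNN:
    """Minimal one-hidden-layer back-propagation network with sigmoid units."""
    def __init__(self, d_in, d_hid, d_out, lr=0.5):
        self.W1 = rng.normal(scale=0.5, size=(d_in, d_hid))
        self.W2 = rng.normal(scale=0.5, size=(d_hid, d_out))
        self.lr = lr

    @staticmethod
    def _sig(v):
        return 1 / (1 + np.exp(-v))

    def forward(self, X):
        self.h = self._sig(X @ self.W1)
        return self._sig(self.h @ self.W2)

    def train_step(self, X, T):
        Y = self.forward(X)
        d2 = (Y - T) * Y * (1 - Y)                      # output-layer delta
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)   # hidden-layer delta
        self.W2 -= self.lr * self.h.T @ d2              # gradient descent updates
        self.W1 -= self.lr * X.T @ d1
        return float(((Y - T) ** 2).mean())             # mean squared error

# XOR demo: four patterns, two classes
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
net = BPNN(2, 8, 1)
first_loss = net.train_step(X, T)
for _ in range(5000):
    last_loss = net.train_step(X, T)
```

In the paper's setting, the inputs would be averaged, dimension-reduced facial features and the outputs the subject labels, rather than this toy target.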

Findings

The recognition rate is 95.6 per cent and recognition accuracy is 85 per cent for the ORL data set, whereas the recognition rate is 95.5 per cent and recognition accuracy is 84.4 per cent for the Yale data set.

Originality/value

An averaged-feature-based method.

Details

Information and Learning Science, vol. 119 no. 9/10
Type: Research Article
ISSN: 2398-5348

Keywords
