Search results

11 – 20 of 571
Article
Publication date: 27 August 2019

Min Hao, Guangyuan Liu, Desheng Xie, Ming Ye and Jing Cai

Happiness is an important mental state, and emotional well-being is becoming a major health concern. For this reason, gaining an objective understanding of how humans respond to…

Abstract

Purpose

Happiness is an important mental state, and emotional well-being is becoming a major health concern. For this reason, gaining an objective understanding of how humans respond to event-related observations in their daily lives is especially important.

Design/methodology/approach

This paper uses a non-intrusive technology, hyperspectral imaging (HSI), for happiness recognition. An experiment is conducted to collect data in real-life environments in which observers show spontaneous expressions of emotion (calm, happy, unhappy/angry). Based on the facial images captured with HSI, this work builds an emotional database, the SWU Happiness DB, and studies whether a physiological signal (tissue oxygen saturation [StO2], obtained from an optical absorption model) can be used to recognize observer happiness automatically. It proposes a novel method to capture local dynamic patterns (LDP) in facial regions, introducing local variations in facial StO2 to make full use of the physiological characteristics carried by the hyperspectral data. Further, it applies a linear discriminant analysis-based support vector machine to recognize happiness patterns.
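
To make the classification stage concrete, the following is a minimal sketch of a linear discriminant analysis-based support vector machine of the kind named above, assuming LDP feature vectors extracted from facial StO2 sequences are already available; the data, the array names (`ldp_features`, `labels`) and the dimensions are placeholders, not taken from the paper.

```python
# Sketch of the classification stage only: LDA for dimensionality reduction
# followed by an SVM. The LDP feature extraction is not reproduced here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder data: one LDP feature vector per facial StO2 sequence,
# labelled calm (0), happy (1) or unhappy/angry (2).
rng = np.random.default_rng(0)
ldp_features = rng.random((300, 128))   # hypothetical feature matrix
labels = rng.integers(0, 3, size=300)   # hypothetical emotion labels

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, ldp_features, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```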

Findings

The results show that the best classification accuracy is 97.89 per cent, demonstrating the feasibility of applying LDP features to happiness recognition.

Originality/value

This paper proposes a novel feature (LDP) that represents local variations in facial StO2 for modeling happiness. It offers a possible extension toward promising practical applications.

Details

Engineering Computations, vol. 37 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 6 September 2018

Ihab Zaqout and Mones Al-Hanjori

The face recognition problem has a long history and significant practical importance; it is one of the practical applications of pattern recognition theory, aiming to…

Abstract

Purpose

The face recognition problem has a long history and significant practical importance; it is one of the practical applications of pattern recognition theory, aiming to automatically localize the face in an image and, if necessary, identify the person. Interest in the procedures underlying localization and recognition of individuals is considerable given the variety of practical applications in areas such as security systems, verification, forensic expertise, teleconferencing and computer games. This paper aims to recognize facial images efficiently. An averaged-feature-based technique is proposed to reduce the dimensionality of the multi-expression facial features. The classifier model is generated using a supervised learning algorithm, a back-propagation neural network (BPNN), implemented in MATLAB R2017. The recognition rate and accuracy of the proposed methodology are comparable with those of other methods, such as principal component analysis and linear discriminant analysis, on the same data sets. In total, 150 faces are selected from the Olivetti Research Laboratory (ORL) data set, giving a 95.6 per cent recognition rate and 85 per cent accuracy, and 165 faces from the Yale data set, giving a 95.5 per cent recognition rate and 84.4 per cent accuracy.

Design/methodology/approach

An averaged-feature-based approach (dimension reduction) and a BPNN (supervised classifier).
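
As a rough illustration of this pipeline, and not the authors' code, the sketch below averages each subject's multi-expression feature vectors into one representative vector and then trains a back-propagation network (here scikit-learn's MLPClassifier as a stand-in for the BPNN); all data shapes and names are hypothetical.

```python
# Sketch: averaged-feature dimension reduction plus a back-propagation
# classifier. Data, sizes and identities are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def averaged_features(face_vectors_per_subject):
    """face_vectors_per_subject: list of (n_expressions, n_features) arrays."""
    return np.vstack([v.mean(axis=0) for v in face_vectors_per_subject])

# Hypothetical data: 40 subjects, 10 expressions each, 1024-dim raw features.
rng = np.random.default_rng(1)
subjects = [rng.random((10, 1024)) for _ in range(40)]
X = averaged_features(subjects)   # one averaged vector per subject
y = np.arange(40)                 # one identity label per subject

bpnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=1)
bpnn.fit(X, y)
print("training accuracy:", bpnn.score(X, y))
```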

Findings

The recognition rate is 95.6 per cent and recognition accuracy is 85 per cent for the ORL data set, whereas the recognition rate is 95.5 per cent and recognition accuracy is 84.4 per cent for the Yale data set.

Originality/value

Averaged-feature based method.

Details

Information and Learning Science, vol. 119 no. 9/10
Type: Research Article
ISSN: 2398-5348

Keywords

Article
Publication date: 19 April 2024

Serhat Yuksel, Hasan Dincer and Alexey Mikhaylov

This paper aims to conduct a market analysis based on many factors. Market analysis must be done correctly to increase the efficiency of smart grid technologies. On the other hand, it is…

Abstract

Purpose

This paper aims to conduct a market analysis based on many factors. Market analysis must be done correctly to increase the efficiency of smart grid technologies. On the other hand, it is not feasible for a company to make improvements across too many factors, mainly because businesses face both financial and manpower constraints. Therefore, a priority analysis is needed to determine the most important factors affecting the effectiveness of the market analysis.

Design/methodology/approach

In this context, a new fuzzy decision-making model is developed. This hybrid model has two main parts. First, the indicators are weighted with a quantum spherical fuzzy multi-SWARA (M-SWARA) methodology. Second, smart grid technology investment projects are evaluated with quantum spherical fuzzy ELECTRE. Additionally, the facial expressions of the experts are considered in this process.
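
For orientation, the sketch below shows only the classical SWARA weighting step that M-SWARA extends; the quantum spherical fuzzy extensions and the ELECTRE ranking are not reproduced, and the criteria names and comparative importance values are hypothetical.

```python
# Classical SWARA weighting: criteria are pre-ranked by importance,
# k_j = s_j + 1, q_j = q_{j-1} / k_j, w_j = q_j / sum(q).
def swara_weights(comparative_importance):
    """comparative_importance: s_j values for criteria ranked by importance,
    with s_1 = 0 for the most important criterion."""
    k = [1.0 + s for s in comparative_importance]
    q = []
    for j, k_j in enumerate(k):
        q.append(1.0 if j == 0 else q[j - 1] / k_j)
    total = sum(q)
    return [q_j / total for q_j in q]

criteria = ["data-driven decisions", "technical capacity", "cost", "regulation"]
s = [0.0, 0.20, 0.15, 0.30]   # hypothetical expert ratios
for name, w in zip(criteria, swara_weights(s)):
    print(f"{name}: {w:.3f}")
```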

Findings

The main contribution of the study is a new methodology, M-SWARA, generated by improving the classical SWARA. The findings indicate that data-driven decisions play the most critical role in the effectiveness of market environment analysis for smart technology investments. To achieve success in this process, large-scale data sets need to be collected and analyzed; if the underlying technology is strong, this process can be carried out quickly and effectively.

Originality/value

It is also identified that a personalized energy schedule with smart meters is the most essential smart grid technology investment alternative. Smart meters provide real-time data on energy consumption.

Details

International Journal of Innovation Science, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-2223

Keywords

Details

The Future of Recruitment
Type: Book
ISBN: 978-1-83867-562-2

Article
Publication date: 30 July 2018

Marzieh Yari Zanganeh and Nadjla Hariri

The purpose of this paper is to identify the role of emotional aspects in information retrieval of PhD students from the web.


Abstract

Purpose

The purpose of this paper is to identify the role of emotional aspects in information retrieval of PhD students from the web.

Design/methodology/approach

From a methodological perspective, the present study is experimental and applied in nature. The study population is PhD students from various fields of science. The study sample consists of 50 students selected by stratified purposive sampling. Data are gathered by recording users' facial expressions and log files with Morae software, as well as through pre-search and post-search questionnaires. The data are analyzed by canonical correlation analysis.
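
The canonical correlation step could look roughly like the sketch below, assuming two matrices have already been coded from the study data: emotion-expression scores and searchers' individual characteristics. The variable names and dimensions are illustrative, not taken from the study.

```python
# Sketch of canonical correlation analysis between emotion expressions
# and searcher characteristics, using placeholder data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
emotion_scores = rng.random((50, 6))    # e.g. happiness, anger, disgust, ...
characteristics = rng.random((50, 5))   # e.g. search experience, interest, ...

cca = CCA(n_components=2)
U, V = cca.fit_transform(emotion_scores, characteristics)

# Correlation of each pair of canonical variates.
for i in range(U.shape[1]):
    r = np.corrcoef(U[:, i], V[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.2f}")
```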

Findings

The findings showed a significant relationship between emotional expressions and searchers' individual characteristics. Searchers' satisfaction with the results, frequency of internet searching, search experience, interest in the search task and familiarity with similar searches were correlated with increased happiness. Examination of users' emotions during the search showed that users experiencing happiness dedicated more time to searching and viewing results. Happy participants visited more internet addresses and issued more queries; by contrast, users experiencing anger and disgust made the least effort to complete the search process.

Practical implications

The results imply that web information retrieval systems should identify emotional expressions, in particular facial emotional states, among the perceivable signs of human–computer interaction during searching and information retrieval on the web.

Originality/value

The results show that automatic identification of users' emotional expressions can add new dimensions to moderation and information retrieval systems on the web and can pave the way for the design of emotional information retrieval systems that support users' successful retrieval.

Details

Online Information Review, vol. 42 no. 4
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 1 January 2006

Qin Li, King Hong Cheung, Jane You, Raymond Tong and Arthur Mak

Aims to develop an efficient and robust system for real‐time personal identification by automatic face recognition.

Abstract

Purpose

Aims to develop an efficient and robust system for real‐time personal identification by automatic face recognition.

Design/methodology/approach

A wavelet-based image hierarchy and a guided coarse-to-fine search scheme are introduced to improve computational efficiency in the face detection task. In addition, a low-dimensional Gabor-based feature pattern is proposed to deal with the face recognition problem.
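
The sketch below illustrates the two ideas named above in a simplified form, under stated assumptions: a wavelet image hierarchy (via PyWavelets) to support coarse-to-fine search, and aggregated Gabor filter responses (via scikit-image) as a compact face descriptor. It is not the authors' implementation, and all parameters are illustrative.

```python
# Sketch: wavelet pyramid for coarse-to-fine search and aggregated
# Gabor responses as a low-dimensional face descriptor.
import numpy as np
import pywt
from skimage.filters import gabor

def wavelet_hierarchy(image, levels=3, wavelet="haar"):
    """Return approximation images from coarse to fine for guided search."""
    pyramid = [image]
    for _ in range(levels):
        approx, _ = pywt.dwt2(pyramid[-1], wavelet)   # keep the LL band only
        pyramid.append(approx)
    return pyramid[::-1]                              # coarsest level first

def gabor_descriptor(face, frequencies=(0.1, 0.2), n_orientations=4):
    """Aggregate Gabor magnitude responses into a small feature vector."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(face, frequency=f,
                               theta=k * np.pi / n_orientations)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])     # aggregated statistics
    return np.array(feats)

face = np.random.default_rng(3).random((64, 64))      # hypothetical face patch
print(len(wavelet_hierarchy(face)), "pyramid levels,",
      gabor_descriptor(face).shape[0], "Gabor features")
```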

Findings

The proposed wavelet-based image hierarchy and guided coarse-to-fine search scheme effectively improve computational efficiency in the face detection task. The low-dimensional feature pattern copes well with the transformed appearance-based face recognition problem. In addition, representing face images by aggregated Gabor filter responses provides a better solution for face feature extraction.

Research limitations/implications

Provides guidance for the design of automatic face recognition systems for real-time personal identification.

Practical implications

Biometric recognition has been emerging as a new and effective identification technology that has attained a certain level of maturity. Among the many body characteristics that have been used, the face is one of the most common and has drawn considerable attention. An automated system that confirms an individual's identity from facial features is very attractive in many specialized fields.

Originality/value

Introduces a wavelet-based image hierarchy and a guided coarse-to-fine search scheme to improve computational efficiency in the face detection task, and a low-dimensional Gabor-based feature pattern to deal with the face recognition problem.

Details

Sensor Review, vol. 26 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 9 September 2014

Benjamin Wulff, Alexander Fecke, Lisa Rupp and Kai-Christoph Hamborg

The purpose of this work is to present a prototype of the system and the results from a technical evaluation and a study on possible effects of recordings with active camera…

Abstract

Purpose

The purpose of this work is to present a prototype of the LectureSight system and the results of a technical evaluation and a study of the possible effects of recordings with active camera control on the learner. An increasing number of higher education institutions have adopted lecture recording technology in the past decade. Even though some solutions already show a very high degree of automation, active camera control can still only be realized with human labor. Aiming to fill this gap, the LectureSight project is developing a free solution for active autonomous camera control for presentation recordings. The system uses a monocular overview camera to analyze the scene. Adopters can formulate camera control strategies in a simple scripting language to adjust the system's behavior to the specific characteristics of a presentation site.
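
To give a feel for what such a camera control strategy does, here is a toy sketch in Python rather than LectureSight's own scripting language: it estimates the presenter's horizontal position from an overview camera by frame differencing and issues pan commands. The function names, thresholds and command strings are invented for illustration only.

```python
# Toy camera-control strategy: follow the moving presenter horizontally.
import numpy as np

def presenter_x(prev_frame, frame, threshold=25):
    """Estimate presenter position (0..1) as the centroid of moving columns."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
    cols = np.where(motion.any(axis=0))[0]
    return None if cols.size == 0 else cols.mean() / frame.shape[1]

def pan_command(target_x, camera_x, dead_zone=0.05):
    """Return a pan command keeping the presenter centred, with a dead zone."""
    if target_x is None or abs(target_x - camera_x) < dead_zone:
        return "hold"
    return "pan_left" if target_x < camera_x else "pan_right"

# Hypothetical greyscale overview frames.
rng = np.random.default_rng(4)
prev, cur = rng.integers(0, 255, (2, 120, 160))
print(pan_command(presenter_x(prev, cur), camera_x=0.5))
```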

Design/methodology/approach

The system is based on a highly modularized architecture to make it easily extendible. The prototype has been tested in a seminar room and a large lecture hall. Furthermore, a study was conducted in which students from two universities prepared for a simulated exam with an ordinary lecture recording and a recording produced with the LectureSight technology.

Findings

The technical evaluation showed a good performance of the prototype but also revealed some technical constraints. The results of the psychological study give evidence that the learner might benefit from lecture videos in which the camera follows the presenter so that gestures and facial expressions are easily perceptible.

Originality/value

The LectureSight project is the first open-source initiative to address camera control for presentation recordings. This opens the way for other projects building upon the LectureSight architecture. The simulated exam study gave evidence of a beneficial effect on students' learning success and needs to be reproduced. If the effect proves consistent, the mechanism behind it is worth investigating further.

Details

Interactive Technology and Smart Education, vol. 11 no. 3
Type: Research Article
ISSN: 1741-5659

Keywords

Book part
Publication date: 30 September 2020

Gulpreet Kaur Chadha, Seema Rawat and Praveen Kumar

In this chapter, the problem of facial palsy has been addressed. Facial palsy is a term used for disruption of facial muscles and could result in temporary or permanent damage of…

Abstract

In this chapter, the problem of facial palsy is addressed. Facial palsy is a term used for disruption of the facial muscles and could result in temporary or permanent damage to the facial nerve. Patients suffering from facial palsy have difficulty with normal day-to-day activities such as eating, drinking and talking, and face psychosocial distress because of their physical appearance. To diagnose and treat facial palsy, the first step is to determine the level of facial paralysis affecting the patient; this is the most important and challenging step. The research presented here proposes how quantitative technology can be used to automate the process of diagnosing the degree of facial paralysis quickly and efficiently.
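
As a generic illustration of such quantitative grading, and not the chapter's specific method, one common approach is a left/right asymmetry index computed from facial landmarks; the landmark coordinates and grade thresholds below are hypothetical.

```python
# Sketch: landmark-based facial asymmetry index and a coarse grade mapping.
import numpy as np

def asymmetry_index(left_pts, right_pts, midline_x):
    """Mean mismatch between left landmarks and mirrored right landmarks."""
    mirrored = right_pts.copy()
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]   # reflect across midline
    return float(np.linalg.norm(left_pts - mirrored, axis=1).mean())

def palsy_grade(index, thresholds=(2.0, 5.0, 10.0)):
    """Map the index to a coarse grade (thresholds are illustrative only)."""
    for grade, t in enumerate(thresholds, start=1):
        if index < t:
            return grade
    return len(thresholds) + 1

left = np.array([[40.0, 50.0], [42.0, 70.0], [45.0, 90.0]])
right = np.array([[60.0, 51.0], [58.0, 72.0], [55.0, 95.0]])
idx = asymmetry_index(left, right, midline_x=50.0)
print(f"asymmetry index: {idx:.2f}, grade: {palsy_grade(idx)}")
```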

Details

Big Data Analytics and Intelligence: A Perspective for Health Care
Type: Book
ISBN: 978-1-83909-099-8

Keywords

Book part
Publication date: 20 September 2018

Arthur C. Graesser, Nia Dowell, Andrew J. Hampton, Anne M. Lippert, Haiying Li and David Williamson Shaffer

This chapter describes how conversational computer agents have been used in collaborative problem-solving environments. These agent-based systems are designed to (a) assess the…

Abstract

This chapter describes how conversational computer agents have been used in collaborative problem-solving environments. These agent-based systems are designed to (a) assess the students’ knowledge, skills, actions, and various other psychological states on the basis of the students’ actions and the conversational interactions, (b) generate discourse moves that are sensitive to the psychological states and the problem states, and (c) advance a solution to the problem. We describe how this was accomplished in the Programme for International Student Assessment (PISA) for Collaborative Problem Solving (CPS) in 2015. In the PISA CPS 2015 assessment, a single human test taker (15-year-old student) interacts with one, two, or three agents that stage a series of assessment episodes. This chapter proposes that this PISA framework could be extended to accommodate more open-ended natural language interaction for those languages that have developed technologies for automated computational linguistics and discourse. Two examples support this suggestion, with associated relevant empirical support. First, there is AutoTutor, an agent that collaboratively helps the student answer difficult questions and solve problems. Second, there is CPS in the context of a multi-party simulation called Land Science in which the system tracks progress and knowledge states of small groups of 3–4 students. Human mentors or computer agents prompt them to perform actions and exchange open-ended chat in a collaborative learning and problem-solving environment.

Details

Building Intelligent Tutoring Systems for Teams
Type: Book
ISBN: 978-1-78754-474-1

Keywords

Article
Publication date: 31 May 2004

Philip Brey

This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance…


Abstract

This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix Corporation will be analyzed, as well as its use in “Smart” video surveillance (CCTV) systems in city centers and airports. The ethical analysis will be based on a careful analysis of current facial recognition technology, of its use in Smart CCTV systems, and of the arguments used by proponents and opponents of such systems. It will be argued that Smart CCTV, which integrates video surveillance technology and biometric technology, faces ethical problems of error, function creep and privacy. In a concluding section on policy, it will be discussed whether such problems outweigh the security value of Smart CCTV in public places.

Details

Journal of Information, Communication and Ethics in Society, vol. 2 no. 2
Type: Research Article
ISSN: 1477-996X

Keywords
