Search results

1 – 10 of 686
Open Access
Article
Publication date: 29 September 2022

Manju Priya Arthanarisamy Ramaswamy and Suja Palaniswamy

Abstract

Purpose

The aim of this study is to investigate subject-independent emotion recognition capabilities of EEG and peripheral physiological signals, namely electrooculogram (EOG), electromyography (EMG), electrodermal activity (EDA), temperature, plethysmograph and respiration. The experiments are conducted on both modalities independently and in combination. This study ranks the physiological signals by the prediction accuracy obtained on test data using time and frequency domain features.

Design/methodology/approach

The DEAP dataset is used in this experiment. Time and frequency domain features of EEG and physiological signals are extracted, followed by correlation-based feature selection. Classifiers, namely Naïve Bayes, logistic regression, linear discriminant analysis, quadratic discriminant analysis, LogitBoost and stacking, are trained on the selected features. Based on the performance of the classifiers on the test set, the best modality for each dimension of emotion is identified.
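
The pipeline described above (feature selection followed by the trained classifiers, including a stacked ensemble) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the authors' implementation: the correlation filter below is a simplified stand-in for correlation-based feature selection, and LogitBoost is omitted because scikit-learn has no built-in implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))  # placeholder time/frequency-domain features
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # low/high label

# Simplified stand-in for correlation-based feature selection:
# keep the 10 features most correlated with the label.
corr = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
Xs = X[:, np.argsort(corr)[-10:]]

X_tr, X_te, y_tr, y_te = train_test_split(Xs, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("lda", LinearDiscriminantAnalysis()),
                ("qda", QuadraticDiscriminantAnalysis())],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)  # accuracy on the held-out test split
```

A fixed train/test split, as here, mirrors the paper's choice of explicitly specified training and test subjects rather than LOSO averaging.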

Findings

The experimental results with EEG as one modality and all physiological signals as another indicate that EEG signals are better at arousal prediction than physiological signals by 7.18%, while physiological signals are better at valence prediction than EEG signals by 3.51%. The valence prediction accuracy of EOG is superior to zygomaticus electromyography (zEMG) and EDA by 1.75%, at the cost of a higher number of electrodes. This paper concludes that valence can be measured from the eyes (EOG) while arousal can be measured from changes in blood volume (plethysmograph). Sorted by arousal prediction accuracy, the physiological signals rank: plethysmograph, EOG (hEOG + vEOG), vEOG, hEOG, zEMG, tEMG, temperature, EMG (tEMG + zEMG), respiration, EDA; sorted by valence prediction accuracy, they rank: EOG (hEOG + vEOG), EDA, zEMG, hEOG, respiration, tEMG, vEOG, EMG (tEMG + zEMG), temperature and plethysmograph.

Originality/value

Many of the emotion recognition studies in the literature are subject dependent, and the limited subject-independent studies report only an average leave-one-subject-out (LOSO) validation accuracy. The work reported in this paper sets the baseline for subject-independent emotion recognition using the DEAP dataset by clearly specifying the subjects used in the training and test sets. In addition, this work specifies the cut-off score used to classify the scale as low or high in the arousal and valence dimensions. Generally, statistical features are used for emotion recognition with physiological signals as a modality, whereas this work uses time and frequency domain features of physiological signals and EEG. This paper concludes that valence can be identified from EOG while arousal can be predicted from the plethysmograph.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 12 June 2017

Lichao Zhu, Hangzhou Yang and Zhijun Yan

Abstract

Purpose

The purpose of this paper is to develop a new method to extract medical temporal information from online health communities.

Design/methodology/approach

The authors trained a conditional random field (CRF) model for the extraction of temporal expressions. Temporal relation identification is treated as a classification task, and several support vector machine (SVM) classifiers are built in the proposed method. For model training, the authors extracted high-level semantic features, including co-reference relationships of medical concepts and the semantic similarity among words.
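
The SVM-based relation classification step can be sketched as follows. This is a hedged toy illustration, not the authors' system: the sentence pairs, labels and features (a cosine-similarity score plus length counts, standing in for the semantic features above) are invented for the example, and the CRF extraction stage is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy (concept context, temporal expression) pairs with invented labels
pairs = [("fever started", "three days after onset"),
         ("took aspirin", "on 2015-03-02"),
         ("headache resolved", "two weeks after discharge"),
         ("began treatment", "on March 5")]
labels = ["RELATIVE", "ABSOLUTE", "RELATIVE", "ABSOLUTE"]

vec = TfidfVectorizer().fit([a + " " + b for a, b in pairs])

def pair_features(a, b):
    # Cosine similarity between the two texts plus simple length features,
    # a simplified stand-in for the paper's semantic-similarity features.
    va = vec.transform([a]).toarray()[0]
    vb = vec.transform([b]).toarray()[0]
    cos = float(va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)
    return [cos, len(a.split()), len(b.split())]

X = np.array([pair_features(a, b) for a, b in pairs])
clf = SVC(kernel="linear").fit(X, labels)
preds = clf.predict(X)
```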

Findings

For the extraction of TIMEX, the authors find that well-formatted expressions are easy to recognize, and the main challenge is relative TIMEX such as “three days after onset”. Normalization shows the same pattern for absolute dates and well-formatted durations, whereas frequencies are easier to normalize. For the identification of DocTimeRel, the results are fairly good, and the relation is difficult to identify when it involves a relative TIMEX or a hypothetical concept.

Originality/value

The authors proposed a new method to extract temporal information from online clinical data and evaluated the usefulness of different levels of syntactic features in this task.

Details

International Journal of Crowd Science, vol. 1 no. 2
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 4 October 2022

Dhong Fhel K. Gom-os and Kelvin Y. Yong

Abstract

Purpose

The goal of this study is to test the real-world use of an emotion recognition system.

Design/methodology/approach

The researchers chose an existing algorithm that displayed high accuracy and speed. Four of the six universal emotions (happiness, sadness, anger and surprise) are used, each associated with its own mood markers. The mood-matrix interface is then coded as a web application. Four guidance counselors and 10 students participated in the testing of the mood-matrix. Guidance counselors answered the technology acceptance model (TAM) questionnaire to assess its usefulness, and the students answered the general comfort questionnaire (GCQ) to assess their comfort levels.

Findings

Results from the TAM indicate that the mood-matrix is of significant use to the guidance counselors, and the GCQ results indicate that the students were comfortable during testing.

Originality/value

To date, no study has tested an emotion recognition system applied to counseling or other mental health and psychological services.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality combines many modes, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of sources from government, academic and commercial entities that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for one person to comprehend all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot in military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will enhance planning and mission rehearsals tremendously.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities of extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments. At this time, there is no integrated approach to multimodal human robot interaction that provides flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

Content available
Article
Publication date: 12 May 2021

Jiyou Jia and Yunfan He

Abstract

Purpose

The purpose of this study is to design and implement an intelligent online proctoring system (IOPS) that leverages artificial intelligence technology to monitor online exams, a capability urgently needed in online learning settings worldwide. As a pilot application, the authors used this system in an authentic university online exam and checked the proctoring result.

Design/methodology/approach

The IOPS adopts the B/S (Browser/Server) architecture. The server side is implemented in the programming languages C and Python and stores the identification data of all examinees along with their important behavior status changes, including facial expression, eye and mouth movement and speech. The browser side collects and analyzes the multimodal data of the examinee taking the online test locally and transfers the examinee’s most important behavior status change data to the server. Real-time face recognition and voice detection are implemented with the support of open-source software.
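
The server-side bookkeeping this architecture implies can be sketched in a few lines: the browser sends only behavior status changes, and the server stores and flags them. The class and event names below are hypothetical illustrations, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ExamSession:
    """Server-side record of one examinee's behavior status changes."""
    examinee_id: str
    events: list = field(default_factory=list)  # (timestamp_s, event_type)

    def record(self, t, event):
        # The browser side sends only *changes* in status (face lost,
        # speech detected, ...); the server simply appends them.
        self.events.append((t, event))

    def suspicious(self):
        # Flag the session if the face left the camera or speech was heard.
        return any(e in ("face_lost", "speech_detected")
                   for _, e in self.events)

session = ExamSession("student-42")
session.record(10.0, "face_present")
session.record(305.5, "face_lost")
flagged = session.suspicious()
```

Storing only status changes, rather than raw video or audio, keeps the server lightweight, consistent with the system's stated design.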

Findings

The system was integrated into a Web-based intelligent tutoring system for school mathematics education. As a pilot application, the system was also used for an online proctored exam in an undergraduate seminar at Peking University during the epidemic period in 2020. The recorded log data show that all students concentrated on the exam, did not leave the camera's view and did not speak.

Originality/value

During the epidemic period of the novel coronavirus outbreak, almost all educational institutions in the world used online learning as the best way to maintain the teaching and learning schedule for all students. However, current online instruction platforms lack functions to prevent learners from cheating in online exams and cannot guarantee the integrity and equality of all examinees as in traditional classroom exams. The literature review shows that online proctoring systems should become an important component of online exams to tackle the growing online cheating problem. Although such proctoring systems have been developed and put on the market, their practical usage in authentic exams and its effect have not been reported. Those systems are heavyweight commercial products that cannot be freely used in education. The lightweight IOPS developed by the authors could meet the need for online exams as a stable and practical approach and could contribute to growing online and distance learning.

Details

Interactive Technology and Smart Education, vol. 19 no. 1
Type: Research Article
ISSN: 1741-5659

Open Access
Article
Publication date: 17 July 2020

Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier

Abstract

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest and an acute pain stimulus. From these videos, 20-s segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the support vector machine (SVM), and decisions are combined by sum rule. The results are compared with standard methods, some deep learning approaches and 185 human assessments. The best machine learning methods are shown to outperform the human judges.
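
The sum-rule fusion of SVM ensemble members can be sketched with scikit-learn. This is a toy version on synthetic data: two feature slices stand in for the paper's twelve descriptors, and the specific descriptors and dataset are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data; each feature slice stands in for one descriptor's features
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

views = [slice(0, 10), slice(10, 20)]  # one "descriptor" per slice
svms = [SVC(probability=True, random_state=0).fit(X_tr[:, v], y_tr)
        for v in views]

# Sum rule: add the posterior probabilities of all members, take the argmax
probs = sum(s.predict_proba(X_te[:, v]) for s, v in zip(svms, views))
y_pred = probs.argmax(axis=1)
acc = (y_pred == y_te).mean()
```

The sum rule rewards classes on which the members agree with high confidence, which is why it is a common, robust default for combining heterogeneous descriptors.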

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Content available
Book part
Publication date: 30 July 2018

Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0

Open Access
Article
Publication date: 17 May 2022

M'hamed Bilal Abidine, Mourad Oussalah, Belkacem Fergani and Hakim Lounis

Abstract

Purpose

Mobile phone-based human activity recognition (HAR) consists of inferring a user's activity type from the analysis of inertial mobile sensor data. This paper mainly aims to introduce a new classification approach called adaptive k-nearest neighbors (AKNN) for intelligent HAR using smartphone inertial sensors, with a potential real-time implementation on a smartphone platform.

Design/methodology/approach

The proposed method puts forward several modifications to the AKNN baseline, using kernel discriminant analysis for feature reduction and hybridizing weighted support vector machines and KNN to tackle imbalanced class data sets.
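
A rough sketch of this hybridization idea with scikit-learn, on synthetic imbalanced data. This is an interpretation, not the authors' code: LDA stands in for kernel discriminant analysis (which scikit-learn does not provide), a class-weighted SVM addresses the imbalance, and KNN then classifies using only the SVM's support vectors as its reference set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Imbalanced synthetic data standing in for smartphone inertial features
X, y = make_classification(n_samples=400, n_features=30, n_classes=3,
                           n_informative=6, weights=[0.6, 0.3, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# LDA as a simple stand-in for kernel discriminant analysis reduction
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)
Z_tr, Z_te = lda.transform(X_tr), lda.transform(X_te)

# A class-weighted SVM compensates for the imbalance; KNN then classifies
# with the SVM's support vectors as a compact reference set.
svm = SVC(class_weight="balanced").fit(Z_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=3).fit(
    Z_tr[svm.support_], y_tr[svm.support_])
acc = knn.score(Z_te, y_te)
```

Keeping only the support vectors shrinks the KNN reference set, which is in the spirit of the paper's goal of reduced memory on a smartphone.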

Findings

Extensive experiments on five large-scale daily activity recognition data sets have been performed to demonstrate the effectiveness of the method in terms of error rate, recall, precision, F1-score and computational/memory resources, with several comparisons against state-of-the-art methods and other hybridization modes. The results showed that the proposed method can achieve more than 50% improvement in the error rate metric and up to 5.6% in F1-score. The training phase is also shown to be reduced by a factor of six compared to the baseline, which provides solid assets for smartphone implementation.

Practical implications

This work builds a bridge to already growing work in machine learning on learning with small data sets. Besides, the availability of systems that are able to perform on-the-fly activity recognition on smartphones will have a significant impact in the field of pervasive health care, supporting a variety of practical applications such as elderly care, ambient assisted living and remote monitoring.

Originality/value

The purpose of this study is to build and test an accurate offline model using only compact training data, which can reduce the computational and memory complexity of the system. This provides grounds for developing new innovative hybridization modes in the context of daily activity recognition and smartphone-based implementation. This study demonstrates that the new AKNN is able to classify the data without a separate training step, because it does not fit a model and only uses memory resources to store the corresponding support vectors.

Details

Sensor Review, vol. 42 no. 4
Type: Research Article
ISSN: 0260-2288

Open Access
Article
Publication date: 26 August 2021

Shruti Garg, Rahul Kumar Patro, Soumyajit Behera, Neha Prerna Tigga and Ranjita Pandey

Abstract

Purpose

The purpose of this study is to propose an alternative efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.

Design/methodology/approach

The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNNs) for both single and combined features.
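
The overlapping sliding window segmentation and the FFT-based normalised band power feature can be sketched with NumPy (the wavelet-energy feature is omitted here). The window length, step, band edges and 128 Hz sampling rate are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def overlapping_windows(x, win, step):
    """Split a variable-length signal into fixed-size overlapping windows."""
    return np.array([x[i:i + win] for i in range(0, len(x) - win + 1, step)])

def normalised_band_power(window, fs, bands=((4, 8), (8, 13), (13, 30))):
    """FFT power in theta/alpha/beta bands, normalised to sum to one."""
    freqs = np.fft.rfftfreq(len(window), 1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    bp = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                   for lo, hi in bands])
    return bp / bp.sum()

fs = 128                                      # assumed EEG sampling rate
t = np.arange(0, 20, 1.0 / fs)                # 20 s of signal
eeg = np.sin(2 * np.pi * 10 * t)              # pure 10 Hz (alpha-band) tone
wins = overlapping_windows(eeg, win=4 * fs, step=2 * fs)  # 4 s, 50% overlap
nbp = np.array([normalised_band_power(w, fs) for w in wins])
```

Overlapping windows turn one variable-length recording into many equal-length samples, which is how the OSW framework copes with the imbalanced, variable-length data described below.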

Findings

The two-dimensional convolutional neural network (2D CNN) yields the highest accuracy on the EEG signals of the AMIGOS data set, namely 96.63%, 95.87% and 96.30% for arousal, valence and dominance, respectively, which is at least 6% higher than other available competitive approaches.

Originality/value

The present work focusses on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length; EEG emotion recognition work is widely available on simpler data sets. The following challenges of the AMIGOS data set are addressed in the present work: handling data in tensor form; proposing an efficient method for generating sufficient equal-length samples from imbalanced, variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Book part
Publication date: 1 October 2018

Jenny Lindholm, Klas Backholm and Joachim Högväg

Abstract

Technical solutions can be important when key communicators take on the task of making sense of social media flows during crises. However, to provide situation awareness during high-stress assignments, usability problems must be identified and corrected. In usability studies, where researchers investigate the user-friendliness of a product, several types of data gathering methods can be combined. Methods may include subjective (surveys and observations) and psychophysiological (e.g. skin conductance and eye tracking) data collection. This chapter mainly focuses on how the latter type can provide detailed clues about user-friendliness. Results from two studies are summarised. The tool tested is intended to help communicators and journalists with monitoring and handling social media content during times of crises.

Details

Social Media Use in Crisis and Risk Communication
Type: Book
ISBN: 978-1-78756-269-1
