The object of this empirical study is emotion, as depicted in and aroused by videos. The paper seeks to answer two questions: are users able to index such emotions consistently, and are the users' votes usable for emotional video retrieval?
The authors worked with a controlled vocabulary of nine basic emotions (love, happiness, fun, surprise, desire, sadness, anger, disgust and fear), a slide control for adjusting each emotion's intensity, and the approach of broad folksonomies, in which different users tag the same videos. Participants were asked to index the emotions of 20 videos (reprocessed clips from YouTube). The authors distinguished between emotions depicted in a video and emotions evoked in the viewer. Data were collected from 776 participants, and a total of 279,360 slide control values were analyzed.
The consistency of the users' votes is very high; the tag distributions for each video's emotions are stable. The final shape of the distributions is reached through the tagging activity of only a few users (fewer than 100). By applying the approach of power tags, it is possible to isolate the pivotal emotions of every document, provided the video arouses any feeling at all.
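The power-tag idea can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes votes are aggregated into per-emotion counts, and the `threshold` parameter (the fraction of the most frequent emotion's count that a tag must reach) is a hypothetical choice.

```python
def power_tags(votes, threshold=0.5):
    """Select 'power tags' from an emotion-vote distribution.

    votes: dict mapping emotion name -> number of user votes.
    Returns the emotions whose vote count is at least `threshold`
    times the count of the most frequent emotion, ordered by count.
    """
    if not votes:
        return []
    top = max(votes.values())
    if top == 0:
        return []  # no feeling was indexed at all
    return sorted(
        (e for e, v in votes.items() if v >= threshold * top),
        key=lambda e: -votes[e],
    )

# Hypothetical distribution for one video
votes = {"fun": 120, "happiness": 90, "surprise": 30, "sadness": 5}
print(power_tags(votes))  # ['fun', 'happiness']
```

With this cut-off, long-tail emotions such as "surprise" and "sadness" in the example are dropped, leaving only the pivotal emotions as retrieval keys.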
This paper is one of the first steps in the new research area of emotional information retrieval (EmIR). To the authors' knowledge, it is the first research project into the collective indexing of emotions in videos.
Copyright © 2011, Emerald Group Publishing Limited