Previous studies have demonstrated that non-professional users prefer event-based conceptual descriptions, such as "a woman wearing a hat", when describing and searching for images. In many art image archives, these conceptual descriptions are manually annotated in free-text fields. The purpose of this study is to explore technologies for automating event-based knowledge extraction from such free-text image descriptions.
This study presents an approach based on semantic role labeling (SRL) technologies for automatically extracting event-based knowledge, including subject, verb, object, location and temporal information, from free-text image descriptions. A query expansion module is applied to further improve retrieval recall. The effectiveness of the proposed approach is evaluated by measuring retrieval precision and recall in experiments on real-life art image collections held by museums.
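To illustrate the kind of event frame the approach targets, the following sketch extracts a subject/verb/object/location structure from a caption. This is a hypothetical, rule-based stand-in for exposition only: the paper itself uses SRL models, and the function name, verb list and sample caption below are all assumptions, not the authors' implementation.

```python
import re

def extract_event(caption):
    """Extract a crude subject/verb/object/location frame from a caption.

    Assumes a simple 'SUBJECT VERB OBJECT [in LOCATION]' sentence shape
    and a tiny closed verb list; a real system would use a semantic role
    labeler instead of this regex heuristic.
    """
    pattern = re.compile(
        r"^(?P<subject>[\w\s]+?)\s+"          # lazy match up to the verb
        r"(?P<verb>holds|wears|rides|paints)\s+"  # toy verb inventory
        r"(?P<object>[\w\s]+?)"               # lazy match up to 'in' or end
        r"(?:\s+in\s+(?P<location>[\w\s]+))?$",   # optional location phrase
        re.IGNORECASE,
    )
    m = pattern.match(caption.strip().rstrip("."))
    return m.groupdict() if m else None

event = extract_event("A woman wears a red hat in the garden.")
print(event)
# {'subject': 'A woman', 'verb': 'wears', 'object': 'a red hat',
#  'location': 'the garden'}
```

Indexing such frames instead of bag-of-words keywords is what allows the retrieval step to match a query like "woman wearing a hat" at the event level rather than on isolated terms.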
Evaluation results indicate that the proposed method achieves substantially higher retrieval precision than conventional keyword-based approaches. The proposed methodology is particularly applicable to large-scale collections, where retrieval precision is more critical than recall.
This study represents the first attempt in the literature to automate the extraction of event-based knowledge from free-text image descriptions. The effectiveness and ease of implementation of the proposed approach make it feasible for practical applications.
Lin, C., Yen, C., Hong, J. and Cruz-Lara, S. (2008), "Event-based knowledge extraction from free-text descriptions for art images by using semantic role labeling approaches", The Electronic Library, Vol. 26 No. 2, pp. 215-225. https://doi.org/10.1108/02640470810864109
Copyright © 2008, Emerald Group Publishing Limited