Automated sentence‐level relevance and novelty detection would directly benefit many information retrieval systems. However, the low level of agreement between human judges performing the task is a persistent concern. In previous approaches, annotators were asked to identify sentences in a document set that are relevant to a given topic, and then to eliminate sentences that do not provide novel information. This paper explores a new approach in which relevance and novelty judgments are made within the context of specific, factual information needs, rather than with respect to a broad topic.
An experiment is conducted in which annotators perform the novelty detection task in both the topic‐focused and fact‐focused settings.
Higher levels of agreement between judges are found on the task of identifying relevant sentences in the fact‐focused approach. However, the new approach does not improve agreement on novelty judgments.
The analysis confirms the intuition that making sentence‐level relevance judgments is likely to be the more difficult of the two tasks in the novelty detection framework.
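Annotator agreement in studies like this is commonly quantified with a chance-corrected statistic such as Cohen's kappa. The abstract does not state which measure was used, so the following is only an illustrative sketch, with hypothetical judgment data, of how sentence-level agreement between two judges might be computed:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of sentences labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each judge's marginal label frequencies.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[label] * count_b[label] for label in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentence-level relevance judgments (1 = relevant, 0 = not).
judge1 = [1, 1, 0, 1, 0, 0, 1, 0]
judge2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(judge1, judge2))  # moderate agreement on this toy data
```

A kappa near 1 indicates near-perfect agreement after correcting for chance, while values near 0 indicate agreement no better than chance; fact-focused judging would be expected to raise this statistic for the relevance task.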
Copyright © 2008, Emerald Group Publishing Limited