Search results

1–10 of over 40,000
Article
Publication date: 1 April 1995

TERRENCE A. BROOKS

Abstract

This paper reports two experiments that investigated the semantic distance model (SDM) of relevance assessment. In the first experiment graduate students of mathematics and economics assessed the relevance relationships between bibliographic records and hierarchies of terms composed of classification headings or help‐menu terms. The relevance assessments of the classification headings, but not the help‐menu terms, exhibited both a semantic distance effect and a semantic direction effect as predicted by the SDM. Topical subject expertise enhanced both these effects. The second experiment investigated whether the poor performance of the help‐menu terms was an experimental design artifact reflecting the comparison of terse help terms with verbose classification headings. In the second experiment the help‐menu terms were compared to a hierarchy of single‐word terms, where they exhibited both a semantic distance and a semantic direction effect.

Details

Journal of Documentation, vol. 51 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 4 October 2017

Kaitlin Light Costello

Abstract

Purpose

The purpose of this paper is to introduce the concept of social relevance assessments, which are judgments made by individuals when they seek out information within virtual social worlds such as online support groups (OSGs).

Design/methodology/approach

Constructivist grounded theory was employed to examine the phenomenon of information exchange in OSGs for chronic kidney disease. In-depth interviews were conducted with 12 participants, and their posts in three OSGs were also harvested. Data were analyzed using inductive content analysis and the constant comparative method. Theoretical sampling was conducted until saturation was reached. Member checking, peer debriefing, and triangulation were used to verify results.

Findings

There are two levels of relevance assessment that occur when people seek out information in OSGs. First, participants evaluate the OSG to determine whether or not the group is an appropriate place for information exchange about kidney disease. Second, participants evaluate individual users on the OSG to see if they are appropriate people with whom to exchange information. This often takes the form of similarity assessment, whereby people try to determine whether or not they are similar to specific individuals on the forums. They use a variety of heuristics to assess similarity as part of this process.

Originality/value

This paper extends the author’s understanding of relevance in information science in two fundamental ways. Within the context of social information exchange, relevance is socially constructed and is based on social characteristics, such as age, shared beliefs, and experience. Moreover, relevance is assessed both when participants seek out information and when they disclose information, suggesting that the conception of relevance as a process that occurs primarily during information seeking is limited.

Details

Journal of Documentation, vol. 73 no. 6
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 August 1997

Pia Borlund and Peter Ingwersen

Abstract

The paper describes the ideas and assumptions underlying the development of a new method for the evaluation and testing of interactive information retrieval (IR) systems, and reports on the initial tests of the proposed method. The method is designed to collect different types of empirical data, i.e. cognitive data as well as traditional systems performance data. The method is based on the novel concept of a ‘simulated work task situation’ or scenario and the involvement of real end users. The method is also based on a mixture of simulated and real information needs, and involves a group of test persons as well as assessments made by individual panel members. The relevance assessments are made with reference to the concepts of topical as well as situational relevance. The method takes into account the dynamic nature of information needs which are assumed to develop over time for the same user, a variability which is presumed to be strongly connected to the processes of relevance assessment.

Details

Journal of Documentation, vol. 53 no. 3
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 7 October 2014

Ian Ruthven

Abstract

Purpose

The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as test-bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but to learning more about human judgements of information and relevance and that these studies can provide useful research questions for other types of investigation.

Design/methodology/approach

Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions and answer new research questions for exploring relevance.

Findings

The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance.

Research limitations/implications

The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but it motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections.

Originality/value

The paper presents an original viewpoint on relevance investigations and TREC itself by motivating TREC as a source of inspiration on understanding relevance rather than purely as a source of evaluation material.

Details

Journal of Documentation, vol. 70 no. 6
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 October 2005

Preben Hansen and Jussi Karlgren

Abstract

Purpose

This paper aims to investigate how readers assess relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work‐task scenario descriptions have effect on the assessment process.

Design/methodology/approach

Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first‐language speakers, fluent in English, were given simulated information‐seeking scenarios and presented with retrieval results in both languages. Twenty‐eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two‐level work‐task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process.

Findings

Relevance assessment takes longer in a foreign language than in the users' first language. The quality of assessments, by comparison with pre‐assessed results, is inferior to those made in the users' first language. Work‐task scenario descriptions had an effect on the assessment process, both by measured assessment time and by self‐report by subjects. However, no effects on results by traditional relevance ranking were detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects.

Originality/value

An extended two‐level work‐task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.

Details

Journal of Documentation, vol. 61 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 31 July 2007

Ian Ruthven, Mark Baillie and David Elsweiler

Abstract

Purpose

The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant.

Design/methodology/approach

The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the questionnaire responses were correlated with the final relevance decisions.

Findings

This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and on the balance between how many documents were marked as marginally or highly relevant. These factors are also shown to affect an assessor's ability to predict what information they will finally mark as being relevant.

Research limitations/implications

The major limitation is that the research was conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but mainly concern the effect of personal context on the outcomes of a user study.

Practical implications

One major consequence is that we should take more account of how we construct search tasks for IIR evaluation to create tasks that are interesting and relevant to experimental subjects.

Originality/value

The paper examines different search variables within one study to compare the relative effects of these variables on search outcomes.

Details

Journal of Documentation, vol. 63 no. 4
Type: Research Article
ISSN: 0022-0418

Book part
Publication date: 30 November 2006

Tefko Saracevic

Abstract

In vol. 6, 1976, of Advances in Librarianship, I published a review about relevance under the same title, without, of course, “Part I” in the title (Saracevic, 1976). [A substantively similar article was published in the Journal of the American Society for Information Science (Saracevic, 1975)]. I did not plan then to have another related review 30 years later—but things happen. The 1976 work “attempted to trace the evolution of thinking on relevance, a key notion in information science, [and] to provide a framework within which the widely dissonant ideas on relevance might be interpreted and related to one another” (ibid.: 338).

Details

Advances in Librarianship
Type: Book
ISBN: 978-1-84950-007-4

Article
Publication date: 1 October 2000

Pertti Vakkari and Nanna Hakala

Abstract

The objective of this study is to analyse how changes in relevance criteria are related to changes in problem stages during the task performance process. Relevance is understood as a task‐ and process‐oriented user construct. The assessment of relevance is based on both retrieved bibliographical information and the documents acquired and read on the basis of this information. The participants of the study were eleven students who attended a course for one term for preparing a research proposal for the master’s thesis. The students were asked to make an IR search at the beginning, middle and end of the course. Data for describing their understanding of the work task, search goals and tactics as well as relevance assessments were collected during the search sessions. Pre‐ and post‐search interviews were conducted during each session. The students were asked to think aloud during the search session. The transaction logs were captured and the thinking aloud was recorded. Research and search diaries were also collected. The findings support to a certain extent the overall hypotheses that a person’s problem stage during task performance is related to his or her use of relevance criteria in assessing retrieved references and documents. There is a connection between an individual’s changing understanding of his or her task and how the relevance of references and full texts is judged. The more structured the task in the process, the more able the person is to distinguish between relevant and other sources. The relevance criteria of documents changed more than the criteria of references during the process. Moreover, it seems that understanding of topicality varies depending on the phase of the process.

Details

Journal of Documentation, vol. 56 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 January 1996

PETER INGWERSEN

Abstract

The objective of the paper is to amalgamate theories of text retrieval from various research traditions into a cognitive theory for information retrieval interaction. Set in a cognitive framework, the paper outlines the concept of polyrepresentation applied to both the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it implies that we should apply different methods of representation and a variety of IR techniques of different cognitive and functional origin simultaneously to each semantic full‐text entity in the information space. The cognitive differences imply that by applying cognitive overlaps of information objects, originating from different interpretations of such objects through time and by type, the degree of uncertainty inherent in IR is decreased. Polyrepresentation and the use of cognitive overlaps are associated with, but not identical to, data fusion in IR. By explicitly incorporating all the cognitive structures participating in the interactive communication processes during IR, the cognitive theory provides a comprehensive view of these processes. It encompasses the ad hoc theories of text retrieval and IR techniques hitherto developed in mainstream retrieval research. It has elements in common with van Rijsbergen and Lalmas' logical uncertainty theory and may be regarded as compatible with that conception of IR. Epistemologically speaking, the theory views IR interaction as processes of cognition, potentially occurring in all the information processing components of IR, that may be applied, in particular, to the user in a situational context. The theory draws upon basic empirical results from information seeking investigations in the operational online environment, and from mainstream IR research on partial matching techniques and relevance feedback. 
By viewing users, source systems, intermediary mechanisms and information in a global context, the cognitive perspective attempts a comprehensive understanding of essential IR phenomena and concepts, such as the nature of information needs, cognitive inconsistency and retrieval overlaps, logical uncertainty, the concept of ‘document’, relevance measures and experimental settings. An inescapable consequence of this approach is to rely more on sociological and psychological investigative methods when evaluating systems and to view relevance in IR as situational, relative, partial, differentiated and non‐linear. The lack of consistency among authors, indexers, evaluators or users is of an identical cognitive nature. It is unavoidable, and indeed favourable to IR. In particular, for full‐text retrieval, alternative semantic entities, including Salton et al.'s ‘passage retrieval’, are proposed to replace the traditional document record as the basic retrieval entity. These empirically observed phenomena of inconsistency and of semantic entities and values associated with data interpretation support strongly a cognitive approach to IR and the logical use of polyrepresentation, cognitive overlaps, and both data fusion and data diffusion.

Details

Journal of Documentation, vol. 52 no. 1
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 8 May 2017

Christiane Behnert and Dirk Lewandowski

Abstract

Purpose

The purpose of this paper is to demonstrate how to apply traditional information retrieval (IR) evaluation methods based on standards from the Text REtrieval Conference and web search evaluation to all types of modern library information systems (LISs) including online public access catalogues, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources.

Design/methodology/approach

The authors apply conventional procedures from IR evaluation to the LIS context considering the specific characteristics of modern library materials.

Findings

The authors introduce a framework consisting of five parts: search queries, search results, assessors, testing, and data analysis. The authors show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs printed monographs and what issues need to be considered for retrieval tests in the library context.

Practical implications

The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context.

Originality/value

Although a considerable amount of research has been done on IR evaluation, and standards for conducting retrieval effectiveness studies do exist, to the authors’ knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century LISs. The authors demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
