Search results

1 – 10 of over 45,000
Article
Publication date: 11 January 2013

Iris Xie and Edward Benoit

Abstract

Purpose

The purpose of this study is to compare the evaluation of search result lists and documents, in particular evaluation criteria, elements, association between criteria and elements, pre/post and evaluation activities, and the time spent on evaluation.

Design/methodology/approach

The study analyzed data collected from 31 general users through pre-questionnaires, think-aloud protocols and logs, and post-questionnaires. Types of evaluation criteria, elements, associations between criteria and elements, evaluation activities and their associated pre/post activities, and time were analyzed based on open coding.

Findings

The study identifies the similarities and differences between list and document evaluation by analyzing 21 evaluation criteria applied, 13 evaluation elements examined, pre/post and evaluation activities performed, and time spent. In addition, the authors explored the time spent evaluating lists and documents for different types of tasks.

Research limitations/implications

This study helps researchers understand the nature of list and document evaluation. Additionally, it connects the elements participants examined to the criteria they applied, and further reveals problems associated with the lack of integration between list and document evaluation. The findings suggest that more elements, especially at the list level, be made available to support users in applying their evaluation criteria. Integrating list and document evaluation, and integrating pre-evaluation, evaluation, and post-evaluation activities in interface design, is essential for effective evaluation.

Originality/value

This study fills a gap in current research in relation to the comparison of list and document evaluation.

Book part
Publication date: 10 February 2012

Kin Fun Li, Yali Wang and Wei Yu

Abstract

Purpose — To develop methodologies to evaluate search engines according to an individual's preference in an easy and reliable manner, and to formulate user-oriented metrics to compare freshness and duplication in search results.

Design/methodology/approach — A personalised evaluation model for comparing search engines is designed as a hierarchy of weighted parameters: commonly found search engine features and performance measures to which an individual user assigns quantitative and qualitative ratings. Furthermore, three performance measurement metrics are formulated and presented as histograms for visual inspection. A methodology is introduced to quantitatively compare and recognise the different histogram patterns within the context of search engine performance.
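
The chapter's model is described here only at a high level; the sketch below is a minimal, hypothetical rendering of the core idea, in which a hierarchy of weighted parameters reduces to a weighted average of the ratings an individual user assigns. All feature names, weights, and ratings are invented for illustration.

```python
# Hypothetical sketch of a personalised search engine evaluation model:
# the user weights the features they care about and rates each engine on
# them. Feature names, weights, and ratings are invented, not the chapter's.

def engine_score(weights: dict[str, float], ratings: dict[str, float]) -> float:
    """Weighted average of one user's ratings over the weighted features."""
    total_weight = sum(weights.values())
    return sum(weights[f] * ratings[f] for f in weights) / total_weight

# One user's preferences and their ratings of two engines (0-10 scale).
weights = {"freshness": 0.5, "duplication": 0.2, "interface": 0.3}
engine_a = {"freshness": 8, "duplication": 6, "interface": 9}
engine_b = {"freshness": 6, "duplication": 9, "interface": 7}

for name, ratings in (("A", engine_a), ("B", engine_b)):
    print(f"Engine {name}: {engine_score(weights, ratings):.2f}")
```

Changing the weights or the feature set tailors the comparison to a different user, which is the personalisation the chapter emphasises.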

Findings — Precision and recall are the fundamental measures used in many search engine evaluations due to their simplicity, fairness and reliability. Most recent evaluation models are user oriented and focus on relevance issues. Identifiable statistical patterns are found in performance measures of search engines.

Research limitations/implications — The specific parameters used in the evaluation model could be further refined. A larger scale user study would confirm the validity and usefulness of the model. The three performance measures presented give a reasonably informative overview of the characteristics of a search engine. However, additional performance parameters and their resulting statistical patterns would make the methodology more valuable to the users.

Practical implications — The easy-to-use personalised search engine evaluation model can be tailored to an individual's preference and needs simply by changing the weights and modifying the features considered. A user is able to get an idea of the characteristics of a search engine quickly using the quantitative measure of histogram patterns that represent the search performance metrics introduced.

Originality/value — The presented work is considered original as one of the first search engine evaluation models that can be personalised. This enables a Web searcher to choose an appropriate search engine for his/her needs and hence find the right information in the shortest time with the least effort.

Book part
Publication date: 10 February 2012

Yvonne Kammerer and Peter Gerjets

Abstract

Purpose — To provide an overview of recent research that examined how search engine users evaluate and select Web search results and how alternative search engine interfaces can support Web users' credibility assessment of Web search results.

Design/methodology/approach — As theoretical background, Information Foraging Theory (Pirolli, 2007; Pirolli & Card, 1999) from cognitive science and Prominence-Interpretation Theory (Fogg, 2003) from communication and persuasion research are presented. Furthermore, a range of recent empirical research investigating the effects of alternative search engine results page (SERP) layouts on searchers' information quality or credibility assessments of search results is reviewed, and approaches that aim to automatically classify search results into specific genre categories are reported.

Findings — The chapter reports on findings that Web users often rely heavily on the ranking provided by the search engines without paying much attention to the reliability or trustworthiness of the Web pages. Furthermore, the chapter outlines how alternative search engine interfaces that display search results in a format different from a list and/or provide prominent quality-related cues in the SERPs can foster searchers' credibility evaluations.

Research limitations/implications — The reported empirical studies, search engine interfaces, and Web page classification systems are not an exhaustive list.

Originality/value — The chapter provides insights for researchers, search engine developers, educators, and students on how the development and use of alternative search engine interfaces might affect Web users' search and evaluation strategies during Web search as well as their search outcomes in terms of retrieving high-quality, credible information.

Article
Publication date: 29 November 2011

Hamid Sadeghi

Abstract

Purpose

The purpose of this paper is to introduce two new automatic methods for evaluating the performance of search engines. The reported study uses the methods to experimentally investigate which search engine among three popular search engines (Ask.com, Bing and Google) gives the best performance.

Design/methodology/approach

The study assesses the performance of three search engines. For each one, the weighted average of similarity degrees between its ranked result list and those of its metasearch engines is measured. Next, these measures are compared to establish which search engine gives the best performance. To compute the similarity degree between the lists, two measures, the "tendency degree" and the "coverage degree", are introduced; the former assesses a search engine in terms of results presentation and the latter evaluates it in terms of retrieval effectiveness. The performance of the search engines is experimentally assessed based on the 50 topics of the 2002 TREC web track. The effectiveness of the methods is also compared with human-based ones.
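
The abstract does not give the formulas for the two measures; the stand-ins below (overlap for the coverage degree, rank agreement on shared results for the tendency degree) are assumptions used only to illustrate how two ranked lists might be compared.

```python
# Illustrative stand-ins for comparing an engine's ranked list with a
# metasearch engine's list; the paper's actual formulas may differ.

def coverage_degree(engine: list[str], meta: list[str]) -> float:
    """Fraction of the metasearch results that the engine also returns."""
    return len(set(engine) & set(meta)) / len(meta)

def tendency_degree(engine: list[str], meta: list[str]) -> float:
    """Rank agreement on shared results: 1 minus normalised rank displacement."""
    shared = [url for url in meta if url in engine]
    if not shared:
        return 0.0
    displacement = sum(abs(engine.index(u) - meta.index(u)) for u in shared)
    return 1 - displacement / (len(engine) * len(shared))

engine_list = ["a", "b", "c", "d", "e"]  # engine's ranked results
meta_list = ["b", "a", "c", "f", "g"]    # metasearch engine's ranked results
print(f"coverage: {coverage_degree(engine_list, meta_list):.2f}")  # 0.60
print(f"tendency: {tendency_degree(engine_list, meta_list):.2f}")  # 0.87
```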

Findings

Google outperformed the others, followed by Bing and Ask.com. Moreover, significant degrees of consistency – 92.87 percent and 91.93 percent – were found between the automatic and human-based approaches.

Practical implications

The findings of this work could help users to select a truly effective search engine. The results also provide motivation for the vendors of web search engines to improve their technology.

Originality/value

The paper focuses on two novel automatic methods to evaluate the performance of search engines and provides valuable experimental results on three popular ones.

Details

Online Information Review, vol. 35 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 30 November 2010

Hsiao‐Tieh Pu

Abstract

Purpose

Clustering web search results into dynamic clusters and hierarchies provides a promising way to alleviate the overabundance of information typically found in ranked list search engines. This study seeks to investigate the usefulness of clustering textual results in web search by analysing the search performance and users' satisfaction levels with and without the aid of clusters and hierarchies.

Design/methodology/approach

This study utilises two evaluation metrics. One is a usability test of clustering interfaces measured by users' search performances; the other is a comprehension test measured by users' satisfaction levels. Various methods were used to support the two tests, including experiments, observations, questionnaires, interviews, and search log analysis.

Findings

The results showed that there was no significant difference between the ranked list and clustering interfaces, although participants searched slightly faster, retrieved a larger number of relevant pages, and were more satisfied when using the ranked list interface without clustering. Even so, the clustering interface offers opportunities for diversified searching. Moreover, the repetitive ratio of relevant results found by each participant was low. Other advantages of the clustering interface are that it highlights important concepts and offers richer contexts for exploring, learning and discovering related concepts; however, it may induce a certain amount of anxiety about missing or losing important information.

Originality/value

The evaluation of a clustering interface is rather difficult, particularly in the context of the web search environment, which is used by a large heterogeneous user population for a wide variety of tasks. The study employed multiple data collection methods and in particular designed a combination of usability and comprehension tests to offer preliminary results on users' evaluation of real‐world clustering search interfaces. The results may extend the understanding of search characteristics with a cluster‐based web search engine, and could be used as a vehicle for further discussion of user evaluation research into this area.

Details

Online Information Review, vol. 34 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Book part
Publication date: 30 July 2018

Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0

Article
Publication date: 8 May 2017

Christiane Behnert and Dirk Lewandowski

Abstract

Purpose

The purpose of this paper is to demonstrate how to apply traditional information retrieval (IR) evaluation methods, based on standards from the Text REtrieval Conference (TREC) and web search evaluation, to all types of modern library information systems (LISs), including online public access catalogues, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources.

Design/methodology/approach

The authors apply conventional procedures from IR evaluation to the LIS context considering the specific characteristics of modern library materials.

Findings

The authors introduce a framework consisting of five parts: search queries, search results, assessors, testing, and data analysis. The authors show how to deal with comparability problems resulting from diverse document types (e.g. electronic articles vs printed monographs) and what issues need to be considered for retrieval tests in the library context.
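
The framework itself is methodological, not executable; as a hedged illustration of its data-analysis part, the sketch below computes precision at k from assessor judgments over ranked result lists. Query IDs, document IDs, and judgments are invented placeholders.

```python
# Hypothetical data-analysis step of a retrieval test: precision at k from
# assessor judgments. All IDs and judgments are invented placeholders.

# judgments[query][doc_id] = 1 if an assessor judged the result relevant
judgments = {
    "q1": {"d1": 1, "d2": 0, "d3": 1},
    "q2": {"d4": 1, "d5": 1},
}
# runs[query] = ranked result list returned by the library system under test
runs = {
    "q1": ["d1", "d3", "d2"],
    "q2": ["d5", "d9", "d4"],
}

def precision_at_k(ranking: list[str], judged: dict[str, int], k: int) -> float:
    """Share of the top-k results judged relevant (unjudged counts as 0)."""
    return sum(judged.get(doc, 0) for doc in ranking[:k]) / k

for query, ranking in runs.items():
    print(query, round(precision_at_k(ranking, judgments[query], k=3), 2))
```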

Practical implications

The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context.

Originality/value

Although a considerable amount of research has been done on IR evaluation, and standards for conducting retrieval effectiveness studies do exist, to the authors’ knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century LISs. The authors demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.

Article
Publication date: 12 April 2013

Orland Hoeber

Abstract

Purpose

HotMap web search was designed to support exploratory search tasks by adding lightweight visual and interactive features to the commonly used list-based representation of web search results. Although laboratory user studies are the most common method for empirically validating the utility of information visualization and information retrieval systems such as this, it is difficult to determine whether such studies accurately reflect the tasks of real users. This paper aims to address these issues.

Design/methodology/approach

A longitudinal user evaluation was conducted in two phases over a ten‐week period to determine how this novel web search interface was being used and accepted in real‐world settings.

Findings

Although the interactive features were not used as extensively as expected, there is evidence that the participants did find them useful. Participants were able to refine their queries easily, although most did so manually. Those who used the interactive exploration features were able to effectively discover potentially relevant documents buried deep in the search results list. Subjective reactions regarding the usefulness and ease of use of the system were positive, and more than half of the participants continued to use the system even after the study ended.

Originality/value

As a result of conducting this longitudinal study, the author has gained a deeper understanding of how a particular visual and interactive web search interface is being used in the real world, as well as issues associated with resistance to change. These findings may provide guidance for the design, development, and study of next generation interfaces for online information retrieval.

Details

Online Information Review, vol. 37 no. 2
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 22 November 2011

Atsushi Keyaki, Kenji Hatano and Jun Miyazaki

Abstract

Purpose

A large number of XML documents now exist on the web, making information retrieval techniques for searching XML documents important and necessary for internet users. Moreover, users of search engines often want to browse only the relevant content in each document. An effective XML element search therefore aims to return only the relevant elements or portions of an XML document. Based on this user demand, the purpose of this paper is to propose and evaluate a method for obtaining more accurate results in XML search.

Design/methodology/approach

The existing approaches generate a ranked list in descending order of each XML element's relevance to a search query; however, these approaches often extract irrelevant XML elements and overlook more relevant elements. To address these problems, the authors' approach extracts the relevant XML elements by considering the size of the elements and the relationships between the elements. Next, the authors score the XML elements to generate a refined ranked list. For scoring, the authors rank high the XML elements that are the most relevant to the user's information needs. In particular, each XML element is scored using the statistics of its descendant and ancestor XML elements.
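
The abstract does not specify the scoring formula; the sketch below only illustrates the general shape of such an approach: skip very small elements and score each element from its own text plus discounted statistics of its descendants. The discount weight and size threshold are invented.

```python
# Illustrative sketch of element-level XML scoring: small elements are
# skipped, and descendant term statistics contribute at an invented discount.

import xml.etree.ElementTree as ET

DOC = """<article><sec><title>XML search</title>
<p>Element search ranks XML elements, not whole documents.</p></sec>
<sec><p>Unrelated text.</p></sec></article>"""

def term_count(elem: ET.Element, term: str) -> int:
    """Occurrences of term in this element's text and all descendant text."""
    return "".join(elem.itertext()).lower().count(term.lower())

def score(elem: ET.Element, term: str, min_chars: int = 20) -> float:
    """Own-text matches plus discounted descendant matches (weights invented)."""
    if len("".join(elem.itertext())) < min_chars:
        return 0.0  # too small to be a useful answer unit on its own
    own = (elem.text or "").lower().count(term.lower())
    descendants = term_count(elem, term) - own
    return own + 0.5 * descendants

root = ET.fromstring(DOC)
for elem in sorted(root.iter(), key=lambda e: score(e, "xml"), reverse=True)[:3]:
    print(elem.tag, round(score(elem, "xml"), 2))
```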

Findings

The experimental evaluations show that the proposed method outperforms BM25E, a conventional approach, which neither reconstructs XML elements nor uses descendant and ancestor statistics. As a result, the authors found that the accuracy of an XML element search can be improved by reconstructing the XML elements and emphasizing the informative ones by applying the statistics of the descendant XML elements.

Research limitations/implications

This work focused on the effectiveness of XML element search and the authors did not consider the search efficiency in this paper. One of the authors' next challenges is to reduce search time.

Originality/value

The paper proposes a method for improving the effectiveness of XML element search.

Article
Publication date: 30 August 2018

Yiming Zhao, Jin Zhang, Xue Xia and Taowen Le

Abstract

Purpose

The purpose of this paper is to evaluate Google question-answering (QA) quality.

Design/methodology/approach

Given the large variety and complexity of Google answer boxes in search result pages, existing evaluation criteria for both search engines and QA systems seemed unsuitable. This study developed an evaluation criteria system for the evaluation of Google QA quality by coding and analyzing search results of questions from a representative question set. The study then evaluated Google’s overall QA quality as well as QA quality across four target types and across six question types, using the newly developed criteria system. ANOVA and Tukey tests were used to compare QA quality among different target types and question types.
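
As a sketch of the statistical step only, the snippet below runs a one-way ANOVA followed by Tukey's HSD across question types; the scores are fabricated placeholders, not the study's data.

```python
# One-way ANOVA plus Tukey's HSD across question types, as in the paper's
# analysis; all quality scores below are fabricated for illustration.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = {  # hypothetical QA-quality scores per question type
    "where": [4.2, 4.5, 4.1, 4.4],
    "who":   [3.1, 3.4, 3.0, 3.3],
    "what":  [3.2, 3.0, 3.5, 3.1],
    "how":   [2.8, 3.0, 2.9, 3.1],
}

# ANOVA: is there any difference among the question-type means?
f_stat, p_value = f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD: which pairs of question types differ significantly?
values = np.concatenate([v for v in scores.values()])
labels = np.concatenate([[k] * len(v) for k, v in scores.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```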

Findings

It was found that Google provided significantly higher-quality answers to person-related questions than to thing-related, event-related and organization-related questions. Google also provided significantly higher-quality answers to where-questions than to who-, what- and how-questions. The more specific a question, the higher the QA quality.

Research limitations/implications

Suggestions for both search engine users and designers are presented to help enhance user experience and QA quality.

Originality/value

Particularly suitable for search engine QA quality analysis, the newly developed evaluation criteria system expanded and enriched assessment metrics of both search engines and QA systems.

Details

Library Hi Tech, vol. 37 no. 2
Type: Research Article
ISSN: 0737-8831

Keywords
