Search results

1 – 10 of over 20000
Article
Publication date: 1 August 2006

Amanda Spink, Bernard J. Jansen, Vinish Kathuria and Sherry Koshman

Abstract

Purpose

This paper reports the findings of a major study examining the overlap among results retrieved by three major web search engines. The goals of the research were to: measure the overlap (i.e. shared results) on the first results page across three major web search engines, and the differences, across a wide range of user-defined search terms; determine the differences in the first page of search results and their rankings (each web search engine's view of the most relevant content) across single-source web search engines, including both sponsored and non-sponsored results; and measure the degree to which a meta-search engine, such as Dogpile.com, provides searchers with the most highly ranked search results from the three major single-source web search engines.

Design/methodology/approach

The authors collected 10,316 random Dogpile.com queries and ran an overlap algorithm using the URL of each result for each query. For a given query, the URL of each result from each engine was retrieved from the database. The first-results-page overlap for each query was then summarized across all 10,316 queries to produce the overall overlap metrics.
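
The per-query overlap computation described above can be sketched as follows. This is a minimal illustration under assumed data shapes (the function name, engine names and URL values are hypothetical), not the study's actual algorithm:

```python
from collections import Counter

def overlap_metrics(results_by_engine):
    """For one query, classify each distinct URL by how many engines
    returned it on their first results page, and return the fraction
    of URLs found by exactly one, two, or all three engines."""
    counts = Counter()
    for urls in results_by_engine.values():
        for url in set(urls):          # de-duplicate within one engine
            counts[url] += 1
    total = len(counts)
    by_n_engines = Counter(counts.values())   # n engines -> how many URLs
    return {n: by_n_engines[n] / total for n in (1, 2, 3)}

# Hypothetical first-page URLs for a single query:
query_results = {
    "EngineA": ["u1", "u2", "u3"],
    "EngineB": ["u2", "u4"],
    "EngineC": ["u2", "u5"],
}
print(overlap_metrics(query_results))  # {1: 0.8, 2: 0.0, 3: 0.2}
```

Summarising such per-query fractions across all 10,316 queries would yield overall overlap metrics of the kind the study reports.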

Findings

Of the total results, 85 percent were unique (retrieved by only one of the three major web search engines), 12 percent were retrieved by two of the engines, and 3 percent were retrieved by all three. This small level of overlap reflects major differences in how web search engines retrieve and rank results.

Research limitations/implications

This study provides an important contribution to the web research literature. The findings point to the value of meta‐search engines in web retrieval to overcome the biases of single search engines.

Practical implications

The results of this research can inform people and organizations that seek to use the web as part of their information seeking efforts, and the design of web search engines.

Originality/value

This research is a large investigation into web search engine overlap using real data from a major web meta‐search engine and single web search engines that sheds light on the uniqueness of top results retrieved by web search engines.

Details

Internet Research, vol. 16 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 7 July 2011

Dirk Lewandowski

Abstract

Purpose

The purpose of this paper is to test major web search engines on their performance on navigational queries, i.e. searches for homepages.

Design/methodology/approach

In total, 100 user queries were posed to six search engines (Google, Yahoo!, MSN, Ask, Seekport, and Exalead). Users described the desired pages, and the position of each page in the results was recorded. Success rates and mean reciprocal rank were calculated.
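
Mean reciprocal rank, one of the measures named above, can be sketched as follows (a minimal illustration; the function name and input format are assumptions, not taken from the paper):

```python
def mean_reciprocal_rank(result_positions):
    """MRR over a set of navigational queries.

    result_positions: list with, for each query, the 1-based rank at
    which the desired homepage appeared, or None if it was not found
    (a miss contributes 0 to the mean)."""
    scores = [1.0 / r if r is not None else 0.0 for r in result_positions]
    return sum(scores) / len(scores)

# Three queries: homepage at rank 1, rank 3, and not found.
print(mean_reciprocal_rank([1, 3, None]))  # (1 + 1/3 + 0) / 3 ≈ 0.444
```

A homepage found at rank 1 for every query gives the maximum MRR of 1.0, which is why the measure suits navigational searches with a single correct answer.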

Findings

The performance of the major search engines Google, Yahoo!, and MSN was found to be the best, with around 90 per cent of queries answered correctly. Ask and Exalead performed worse but received good scores as well.

Research limitations/implications

All queries were in German, and the German‐language interfaces of the search engines were used. Therefore, the results are only valid for German queries.

Practical implications

When designing a search engine to compete with the major engines, care should be taken over performance on navigational queries: users' quality ratings of search engines are easily influenced by this performance.

Originality/value

This study systematically compares the major search engines on navigational queries and compares the findings with studies on the retrieval effectiveness of the engines on informational queries.

Article
Publication date: 20 February 2007

Mary L. Robinson and Judith Wusteman

Abstract

Purpose

To describe a small‐scale quantitative evaluation of the scholarly information search engine, Google Scholar.

Design/methodology/approach

Google Scholar's ability to retrieve scholarly information was compared to that of three popular search engines: Ask.com, Google and Yahoo! Test queries were presented to all four search engines and the following measures were used to compare them: precision; Vaughan's Quality of Result Ranking; relative recall; and Vaughan's Ability to Retrieve Top Ranked Pages.
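
Two of the measures above, precision and relative recall, have standard definitions that can be sketched as follows (Vaughan's two measures are not reproduced here, and the data values are hypothetical):

```python
def precision(relevance_judgements):
    """Fraction of retrieved results judged relevant, over the
    results that were judged (1 = relevant, 0 = not relevant)."""
    return sum(relevance_judgements) / len(relevance_judgements)

def relative_recall(relevant_by_engine, engine):
    """Relevant results from one engine divided by the pooled total
    of relevant results retrieved by all engines for the query."""
    pooled = sum(relevant_by_engine.values())
    return relevant_by_engine[engine] / pooled

# First five results of one query, judged for relevance:
print(precision([1, 1, 0, 1, 0]))                      # 0.6
# Relevant-result counts per engine for the same query:
print(relative_recall({"A": 3, "B": 2, "C": 5}, "A"))  # 0.3
```

Relative recall is used because absolute recall cannot be computed on the open web, where the full set of relevant documents is unknown.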

Findings

Significant differences were found in the ability to retrieve top ranked pages between Ask.com and Google and between Ask.com and Google Scholar for scientific queries. No other significant differences were found between the search engines. This may be due to the relatively small sample size of eight queries. Results suggest that, for scientific queries, Google Scholar has the highest precision, relative recall and Ability to Retrieve Top Ranked Pages. However, it achieved the lowest score for these three measures for non‐scientific queries. The best overall score for all four measures was achieved by Google. Vaughan's Quality of Result Ranking found a significant correlation between Google and scientific queries.

Research limitations/implications

As with any search engine evaluation, the results pertain only to performance at the time of the study and must be considered in light of any subsequent changes in the search engine's configuration or functioning. Also, the relatively small sample size limits the scope of the study's findings.

Practical implications

These results suggest that, although Google Scholar may prove useful to those in scientific disciplines, further development is necessary if it is to be useful to the scholarly community in general.

Originality/value

This is a preliminary study in applying the accepted performance measures of precision and recall to Google Scholar. It provides information specialists and users with an objective evaluation of Google Scholar's abilities across both scientific and non‐scientific disciplines and paves the way for a larger study.

Details

Program, vol. 41 no. 1
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 1 April 2003

Dion H. Goh and Rebecca P. Ang

Abstract

Pay for performance (PFP) search engines provide search services for documents on the Web but, unlike traditional search engines, they rank documents not on content characteristics but according to the amount of money the owner of a Web site is willing to pay if a user visits the Web site through the search results pages. A study was conducted to compare the retrieval effectiveness of Overture (a PFP search engine) and Google (a traditional search engine) using a test suite of general knowledge questions. A total of 45 queries, based on a popular game show, "Who wants to be a millionaire?", were submitted to each of these search engines and the first ten documents returned were analysed using different relevancy criteria. Results indicated that Google outperformed Overture in terms of precision and the number of queries that could be answered. Implications of these findings are also discussed.

Details

Online Information Review, vol. 27 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 18 March 2024

Raj Kumar Bhardwaj, Ritesh Kumar and Mohammad Nazim

Abstract

Purpose

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest level of precision and to identify the metasearch engine that is most likely to return the most relevant search results.

Design/methodology/approach

The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). These queries vary in complexity, falling into three types: simple, phrase and complex. The precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.

Findings

The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs. Conversely, DuckDuckGo exhibited consistent performance across both phases of the study.

Research limitations/implications

The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.

Practical implications

The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.

Originality/value

The study is the first of its kind to evaluate these four metasearch engines; no similar study has previously measured their performance.

Details

Performance Measurement and Metrics, vol. 25 no. 1
Type: Research Article
ISSN: 1467-8047

Article
Publication date: 4 April 2024

Artur Strzelecki

Abstract

Purpose

This paper aims to give an overview of the history and evolution of commercial search engines. It traces the development of search engines from their early days to their current form as complex technology-powered systems that offer a wide range of features and services.

Design/methodology/approach

In recent years, advancements in artificial intelligence (AI) technology have led to the development of AI-powered chat services. This study explores official announcements and releases of three major search engines, Google, Bing and Baidu, of AI-powered chat services.

Findings

Three major players in the search engine market, Google, Microsoft and Baidu, have started to integrate AI chat into their search results. Google has released Bard, later upgraded to Gemini, a conversational AI service powered by LaMDA. Microsoft has launched Bing Chat, later renamed Copilot, a search experience powered by OpenAI's GPT. The largest search engine in China, Baidu, has released a similar service called Ernie. New AI-based search engines are also briefly described.

Originality/value

This paper discusses the strengths and weaknesses of traditional, algorithm-powered search engines and of modern search with generative AI support, and the possibilities of merging them into one service. The study considers the types of queries submitted to search engines, users' habits in using search engines and the technological advantage of search engine infrastructure.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 1 December 2005

Seda Ozmutlu

Abstract

Purpose

The purpose of this study is to investigate whether question-format and keyword-format queries are more successfully processed by search engines oriented toward question answering and keyword querying, respectively. The study also investigates whether web user characteristics and choice of search engine affect the relevancy scores and precision of the results.

Design/methodology/approach

The results of two search engines, Google and AskJeeves, were compared for question and keyword‐format queries. It was observed that AskJeeves was slightly more successful in processing question‐format queries, but this finding was not statistically supported. However, Google provided results on keyword‐format queries and the entire set of queries, which were statistically superior to those of AskJeeves.

Findings

Analysis of variance (ANOVA) showed that the age of the web user does not affect the relevancy score and precision of results as much as other factors do. Interactions of the main factors also affected the relevancy scores and precision, meaning that different combinations of the various factors create a synergy in terms of relevancy scores and precision.

Research limitations/implications

This was a preliminary work on the effect of user characteristics on comprehension and evaluation of search query results. Future work includes expanding this study to include more web user characteristics, more levels of the web user characteristics, and inclusion of more search engines.

Originality/value

The findings of this study provide statistical proof for the relationship between the characteristics of web users, choice of search engine and the relevancy scores and precision of search results.

Details

Online Information Review, vol. 29 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 29 November 2011

Francisco J. Lopez‐Pellicer, Aneta J. Florczyk, Rubén Béjar, Pedro R. Muro‐Medrano and F. Javier Zarazaga‐Soria

Abstract

Purpose

There is an open discussion in the geographic information community about the use of digital libraries or search engines for the discovery of resources. Some researchers suggest that search engines are a feasible alternative for searching geographic web services based on anecdotal evidence. The purpose of this study is to measure the performance of Bing (formerly Microsoft Live Search), Google and Yahoo! in searching standardised XML documents that describe, identify and locate geographic web services.

Design/methodology/approach

The study performed an automated evaluation of three search engines using their application programming interfaces. The queries asked for XML documents describing geographic web services, and documents containing links to those documents. Relevant XML documents linked from the documents found in the search results were also included in the evaluation.

Findings

The study reveals that the discovery of geographic web services in search engines does not require the use of advanced search operators. Data collected suggest that a resource‐oriented search should combine simple queries to search engines with the exploration of the pages linked from the search results. Finally the study identifies Yahoo! as the best performer.

Originality/value

This is the first study that measures and compares the performance of major search engines in the discovery of geographic web services. Previous studies were focused on demonstrating the technical feasibility of the approach. The paper also reveals that some technical advances in search engines could harm resource‐oriented queries.

Article
Publication date: 17 October 2008

Dirk Lewandowski

Abstract

Purpose

The purpose of this paper is to compare five major web search engines (Google, Yahoo, MSN, Ask.com, and Seekport) for their retrieval effectiveness, taking into account not only the results, but also the results descriptions.

Design/methodology/approach

The study uses real‐life queries. Results are made anonymous and are randomized. Results are judged by the persons posing the original queries.

Findings

The two major search engines, Google and Yahoo, perform best, and there are no significant differences between them. Google delivers significantly more relevant result descriptions than any other search engine. This could be one reason for users perceiving this engine as superior.

Research limitations/implications

The study is based on a user model in which the user considers a certain number of results fairly systematically. This may not be the case in real life.

Practical implications

The paper implies that search engines should focus on relevant descriptions. Searchers are advised to use other search engines in addition to Google.

Originality/value

This is the first major study to compare results and their descriptions systematically, and it proposes new retrieval measures that take result descriptions into account.

Details

Journal of Documentation, vol. 64 no. 6
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 29 November 2011

Hamid Sadeghi

Abstract

Purpose

The purpose of this paper is to introduce two new automatic methods for evaluating the performance of search engines. The reported study uses the methods to experimentally investigate which search engine among three popular search engines (Ask.com, Bing and Google) gives the best performance.

Design/methodology/approach

The study assesses the performance of three search engines. For each one, the weighted average of similarity degrees between its ranked result list and those of its metasearch engines is measured. Next, these measures are compared to establish which search engine gives the best performance. To compute the similarity degree between the lists, two measures called the "tendency degree" and "coverage degree" are introduced; the former assesses a search engine in terms of results presentation and the latter evaluates it in terms of retrieval effectiveness. The performance of the search engines is experimentally assessed on the 50 topics of the 2002 TREC web track. The effectiveness of the methods is also compared with that of human-based ones.
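
The abstract does not define the tendency and coverage degrees, so the sketch below shows only a generic top-k overlap between two ranked result lists, a crude stand-in for this kind of list-similarity measure rather than the paper's actual formulas (the function name and URL values are hypothetical):

```python
def topk_overlap(list_a, list_b, k=10):
    """Fraction of the top-k URLs of one ranked list that also appear
    in the top-k of another; 1.0 means identical top-k membership."""
    a, b = set(list_a[:k]), set(list_b[:k])
    return len(a & b) / k

# One engine's ranked list vs. a metasearch engine's ranked list:
engine = ["u1", "u2", "u3", "u4"]
meta   = ["u2", "u1", "u5", "u6"]
print(topk_overlap(engine, meta, k=4))  # 2 of 4 shared -> 0.5
```

The paper's measures are weighted by rank, so agreement near the top of the lists would count for more than this simple set overlap captures.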

Findings

Google outperformed the others, followed by Bing and Ask.com. Moreover, significant degrees of consistency (92.87 percent and 91.93 percent) were found between the automatic and human-based approaches.

Practical implications

The findings of this work could help users to select a truly effective search engine. The results also provide motivation for the vendors of web search engines to improve their technology.

Originality/value

The paper focuses on two novel automatic methods to evaluate the performance of search engines and provides valuable experimental results on three popular ones.

Details

Online Information Review, vol. 35 no. 6
Type: Research Article
ISSN: 1468-4527
