Search results

1–10 of over 70,000
Book part
Publication date: 10 February 2012

Yvonne Kammerer and Peter Gerjets

Abstract

Purpose — To provide an overview of recent research that examined how search engine users evaluate and select Web search results and how alternative search engine interfaces can support Web users' credibility assessment of Web search results.

Design/methodology/approach — As theoretical background, Information Foraging Theory (Pirolli, 2007; Pirolli & Card, 1999) from cognitive science and Prominence-Interpretation Theory (Fogg, 2003) from communication and persuasion research are presented. Furthermore, a range of recent empirical research that investigated the effects of alternative SERP layouts on searchers' information quality or credibility assessments of search results is reviewed, and approaches that aim to automatically classify search results into specific genre categories are reported.

Findings — The chapter reports on findings that Web users often rely heavily on the ranking provided by the search engines without paying much attention to the reliability or trustworthiness of the Web pages. Furthermore, the chapter outlines how alternative search engine interfaces that display search results in a format different from a list and/or provide prominent quality-related cues in the SERPs can foster searchers' credibility evaluations.

Research limitations/implications — The reported empirical studies, search engine interfaces, and Web page classification systems do not constitute an exhaustive list.

Originality/value — The chapter provides insights for researchers, search engine developers, educators, and students on how the development and use of alternative search engine interfaces might affect Web users' search and evaluation strategies during Web search as well as their search outcomes in terms of retrieving high-quality, credible information.

Article
Publication date: 9 August 2011

Nadjla Hariri

Downloads: 3316

Abstract

Purpose

The main purpose of this study is to evaluate the effectiveness of relevance ranking on Google by comparing the system's assessment of relevance with the users' views. The research aims to find out whether the presumably objective relevance ranking of Google, based on PageRank and other factors, in fact matches users' subjective judgments of relevance.

Design/methodology/approach

This research investigated the relevance ranking of Google's retrieved results using 34 searches conducted by users in real search sessions. The results pages 1‐4 (i.e. the first 40 results) were examined by the users to identify relevant documents. Based on these data the frequency of relevant documents according to the appearance order of retrieved documents in the first four results pages was calculated. The four results pages were also compared in terms of precision.
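The per-page precision computation described above can be sketched in a few lines; the relevance judgments in the example are invented for illustration and are not the study's data:

```python
# Sketch of the per-page precision calculation described above. Each search
# returns 40 results (four pages of ten); users mark which are relevant.

def precision_per_page(relevant_ranks, page_size=10, num_pages=4):
    """Return each results page's precision, given the 1-based ranks
    of the documents the user judged relevant."""
    precisions = []
    for page in range(num_pages):
        lo, hi = page * page_size + 1, (page + 1) * page_size
        hits = sum(1 for r in relevant_ranks if lo <= r <= hi)
        precisions.append(hits / page_size)
    return precisions

# Example: a user judged the results ranked 1, 3, 5, 12, and 38 relevant.
print(precision_per_page([1, 3, 5, 12, 38]))  # [0.3, 0.1, 0.0, 0.1]
```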

Findings

In 50 per cent and 47.06 per cent of the searches, the documents ranked 5th and 1st respectively (i.e. on the first page of the retrieved results) were judged most relevant from the users' viewpoints. Yet even on the fourth results page there were three documents that were judged most relevant by the users in more than 40 per cent of the searches. There were no significant differences between the precision of the four results pages, except between pages 1 and 3.

Practical implications

The results will help users of search engines, especially Google, to decide how many pages of the retrieved results to examine.

Originality/value

Search engine design will benefit from the results of this study as it experimentally evaluates the effectiveness of Google's relevance ranking.

Details

Online Information Review, vol. 35 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 September 2005

Lin‐Chih Chen and Cheng‐Jye Luh

Downloads: 1216

Abstract

Purpose

This study aims to present a new web page recommendation system that can help users to reduce navigational time on the internet.

Design/methodology/approach

The proposed design is based on the primacy effect in browsing behavior: users prefer top-ranking items in search results. This approach is intuitive and requires no training data at all.

Findings

A user study showed that users are more satisfied with the proposed search methods than with general search engines using hot keywords. Moreover, two performance measures confirmed that the proposed search methods outperform other metasearch and search engines.

Research limitations/implications

The research has limitations and future work is planned along several directions. First, the search methods implemented are primarily based on the keyword match between the contents of web pages and the user query items. Using the semantic web to recommend concepts and items relevant to the user query might be very helpful in finding the exact contents that users want, particularly when the users do not have enough knowledge about the domains in which they are searching. Second, offering a mechanism that groups search results to improve the way search results are segmented and displayed also assists users to locate the contents they need. Finally, more user feedback is needed to fine‐tune the search parameters including α and β to improve the performance.

Practical implications

The proposed model can be used to improve the search performance of any search engine.

Originality/value

First, compared with the democratic voting procedure used by metasearch engines, search engine vector voting (SVV) enables a specific combination of search parameters, denoted as α and β, to be applied to a voted search engine, so that users can either narrow or expand their search results to meet their search preferences. Second, unlike page quality analysis, the hyperlink prediction (HLP) determines qualified pages by simply measuring their user behavior function (UBF) values, and thus takes less computing power. Finally, the advantages of HLP over statistical analysis are that it does not need training data, and it can target both multi‐site and site‐specific analysis.

Details

Internet Research, vol. 15 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 August 2006

Amanda Spink, Bernard J. Jansen, Vinish Kathuria and Sherry Koshman

Downloads: 2256

Abstract

Purpose

This paper reports the findings of a major study examining the overlap among results retrieved by three major web search engines. The goals of the research were to: measure the overlap (i.e. shared results) on the first results page across three major web search engines, and the differences across a wide range of user‐defined search terms; determine the differences in the first page of search results and their rankings (each web search engine's view of the most relevant content) across single‐source web search engines, including both sponsored and non‐sponsored results; and measure the degree to which a meta‐search engine, such as Dogpile.com, provides searchers with the most highly ranked search results from the three major single‐source web search engines.

Design/methodology/approach

The authors collected 10,316 random Dogpile.com queries and ran an overlap algorithm using the URL of each result per query. The first‐page overlap for each query was then summarized across all 10,316 queries to determine the overall overlap metrics. For a given query, the URL of each result from each engine was retrieved from the database.
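The overlap algorithm can be illustrated with a small sketch; the engine result sets below are placeholders, not the study's data:

```python
# Sketch of the first-page overlap measurement described above. For one
# query, each engine contributes the set of URLs on its first results page;
# a URL's overlap class is the number of engines that returned it.
from collections import Counter

def overlap_counts(result_sets):
    """result_sets: one set of URLs per engine. Returns a Counter mapping
    'returned by k engines' -> number of such URLs."""
    per_url = Counter()
    for urls in result_sets:
        for url in urls:
            per_url[url] += 1
    return Counter(per_url.values())

# Placeholder first-page result sets for three engines on one query:
engines = [
    {"a.example", "b.example", "c.example"},
    {"b.example", "d.example"},
    {"b.example", "c.example", "e.example"},
]
counts = overlap_counts(engines)
total = sum(counts.values())
for k in sorted(counts):
    # Mirrors the study's unique/two-engine/three-engine breakdown.
    print(f"returned by {k} engine(s): {counts[k] / total:.0%}")
```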

Findings

The percentage of total results retrieved by only one of the three major web search engines was 85 per cent; by two web search engines, 12 per cent; and by all three, 3 per cent. This small level of overlap reflects major differences in the web search engines' retrieval and ranking of results.

Research limitations/implications

This study provides an important contribution to the web research literature. The findings point to the value of meta‐search engines in web retrieval to overcome the biases of single search engines.

Practical implications

The results of this research can inform people and organizations that seek to use the web as part of their information seeking efforts, and the design of web search engines.

Originality/value

This research is a large investigation into web search engine overlap using real data from a major web meta‐search engine and single web search engines that sheds light on the uniqueness of top results retrieved by web search engines.

Details

Internet Research, vol. 16 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 22 November 2011

Brent Wenerstrom and Mehmed Kantardzic

Abstract

Purpose

Search engine users are faced with long lists of search results, each entry being of a varying degree of relevance. Based on the short text of a search result, users often form false expectations about the linked web page. This leads users to skip relevant information, missing valuable insights, and to click on irrelevant web pages, wasting time. The purpose of this paper is to propose a new summary generation technique, ReClose, which combines query‐independent and query‐biased summary techniques to improve the accuracy of users' expectations.

Design/methodology/approach

The authors tested the effectiveness of ReClose summaries against Google summaries by surveying 34 participants. Participants were randomly assigned to use one type of summary approach. Summary effectiveness was judged based on the accuracy of each user's expectations.

Findings

It was found that individuals using ReClose summaries showed a 10 per cent increase in expectation accuracy over individuals using Google summaries, and therefore reported better user satisfaction.

Practical implications

The survey demonstrates the effectiveness of using ReClose summaries to improve the accuracy of user expectations.

Originality/value

This paper presents a novel summary generation technique called ReClose, a new approach to summary evaluation and improvements upon previously proposed summary generation techniques.

Details

International Journal of Web Information Systems, vol. 7 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 21 November 2008

Ola Ågren

Downloads: 1137

Abstract

Purpose

The purpose of this paper is to assign topic‐specific ratings to web pages.

Design/methodology/approach

The paper uses power iteration to assign topic‐specific rating values (called relevance) to web pages, creating a ranking or partial order among these pages for each topic. This approach depends on a set of pages that are initially assumed to be relevant for a specific topic; the spatial link structure of the web pages; and a net‐specific decay factor designated ξ.
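The abstract does not give the exact update rule, so the following is only a hedged sketch of topic-specific rating by power iteration: relevance is seeded from the topic set and propagated along links, scaled by a decay factor ξ (named `xi` here); the link graph is a toy example.

```python
# Hedged sketch of topic-specific rating by power iteration. The update
# rule is an assumption: each page's relevance starts from a seed set for
# the topic and is propagated along outgoing links, scaled by a decay
# factor xi (standing in for the abstract's net-specific factor).

def topic_relevance(links, seed, xi=0.5, iters=50):
    """links: dict mapping page -> list of pages it links to.
    seed: set of pages initially assumed relevant for the topic.
    Returns a dict of relevance scores (a partial order per topic)."""
    pages = set(links) | {q for targets in links.values() for q in targets}
    rel = {p: (1.0 if p in seed else 0.0) for p in pages}
    for _ in range(iters):
        new = {p: (1.0 if p in seed else 0.0) for p in pages}
        for p, targets in links.items():
            for q in targets:
                new[q] += xi * rel[p] / len(targets)  # decayed link propagation
        rel = new
    return rel

# Toy link graph: the seed page links to "a" and "b"; "a" also links to "b".
toy = {"seed1": ["a", "b"], "a": ["b"], "b": []}
scores = topic_relevance(toy, seed={"seed1"})
# "b" ends up rated above "a" because it receives links from both pages.
```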

Findings

The paper finds that this approach exhibits desirable properties such as fast convergence and stability, and yields relevant answer sets. The first property is shown using theoretical proofs, while the others are evaluated through stability experiments and assessments of real-world data in comparison with established algorithms.

Research limitations/implications

In the assessment, all pages that a web spider was able to find in the Nordic countries were used. It is also important to note that entities that use domains outside the Nordic countries (e.g. .com or .org) are not present in the paper's datasets, even though they reside logically within one or more of the Nordic countries. This is quite a large dataset, but it is still small in comparison with the entire World Wide Web. Moreover, the execution speed of some of the algorithms unfortunately prohibited the use of a large test dataset in the stability tests.

Practical implications

It is not only possible, but also reasonable, to perform ranking of web pages without using Markov chain approaches. This means that the work of generating answer sets for complex questions could (at least in theory) be divided into smaller parts that are later summed up to give the final answer.

Originality/value

This paper contributes to the research on internet search engines.

Details

International Journal of Web Information Systems, vol. 4 no. 4
Type: Research Article
ISSN: 1744-0084

Book part
Publication date: 10 February 2012

Dirk Ahlers

Abstract

Purpose — To provide a theoretical background for understanding current local search engines as an aspect of specialized search, including their data sources and the technologies they use.

Design/methodology/approach — Selected local search engines are examined and compared with respect to their use of geographic information retrieval (GIR) technologies, data sources, available entity information, processing, and interfaces. An introduction to the field of GIR is given and its use in the selected systems is discussed.

Findings — All selected commercial local search engines utilize GIR technology to varying degrees for information preparation and presentation, and GIR is also starting to be used in regular Web search. However, major differences exist between the search engines.

Research limitations/implications — This study is not exhaustive and uses only informal comparisons without a definitive ranking. Due to the unavailability of hard data, informed guesses were made based on publicly available interfaces and the literature.

Practical implications — A source of background information for understanding the results of local search engines, their provenance, and their potential.

Originality/value — An overview of GIR technology in the context of commercial search engines integrates research efforts and commercial systems and helps to understand both sides better.

Article
Publication date: 1 April 2001

Mike Thelwall

Abstract

Web impact factors, the proposed web equivalent of impact factors for journals, can be calculated by using search engines. It has been found that the results are problematic because of the variable coverage of search engines as well as their ability to give significantly different results over short periods of time. The fundamental problem is that although some search engines provide a functionality that is capable of being used for impact calculations, this is not their primary task and therefore they do not give guarantees as to performance in this respect. In this paper, a bespoke web crawler designed specifically for the calculation of reliable WIFs is presented. This crawler was used to calculate WIFs for a number of UK universities, and the results of these calculations are discussed. The principal findings were that with certain restrictions, WIFs can be calculated reliably, but do not correlate with accepted research rankings owing to the variety of material hosted on university servers. Changes to the calculations to improve the fit of the results to research rankings are proposed, but there are still inherent problems undermining the reliability of the calculation. These problems still apply if the WIF scores are taken on their own as indicators of the general impact of any area of the Internet, but with care would not apply to online journals.
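As a hedged illustration of the calculation, a WIF is commonly computed as the number of pages linking into a web space divided by the number of pages that space hosts; the counts below are invented:

```python
# Hedged sketch of a web impact factor (WIF) calculation. The definition
# assumed here -- pages linking into a web space divided by the pages it
# hosts -- is a common formulation, not necessarily the paper's exact one.

def web_impact_factor(inlinking_pages, site_pages):
    """Ratio of pages linking to the site to pages hosted by the site."""
    if site_pages == 0:
        raise ValueError("site hosts no pages")
    return inlinking_pages / site_pages

# Hypothetical university web space: 2,400 inlinking pages, 12,000 hosted pages.
print(web_impact_factor(2400, 12000))  # 0.2
```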

Details

Journal of Documentation, vol. 57 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 6 February 2007

Michael P. Evans

Downloads: 10653

Abstract

Purpose

The purpose of this paper is to identify the most popular techniques used to rank a web page highly in Google.

Design/methodology/approach

The paper presents the results of a study into 50 highly optimized web pages that were created as part of a Search Engine Optimization competition. The study focuses on the most popular techniques that were used to rank highest in this competition, and includes an analysis on the use of PageRank, number of pages, number of in‐links, domain age and the use of third party sites such as directories and social bookmarking sites. A separate study was made into 50 non‐optimized web pages for comparison.

Findings

The paper provides insight into the techniques that successful search engine optimizers use to ensure a page ranks highly in Google. It recognizes the importance of PageRank and links, as well as directories and social bookmarking sites.

Research limitations/implications

Only the top 50 web sites for a specific query were analyzed. Analyzing more web sites and comparing the results with similar studies of different competitions would provide more concrete results.

Practical implications

The paper offers a revealing insight into the techniques used by industry experts to rank highly in Google, and the success or otherwise of those techniques.

Originality/value

This paper fulfils an identified need for web sites and e‐commerce sites keen to attract a wider web audience.

Details

Internet Research, vol. 17 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 March 2019

Dania Bilal and Li-Min Huang

Downloads: 1413

Abstract

Purpose

The purpose of this paper is to analyze the readability and level of word complexity of search engine results pages (SERPs) snippets and associated web pages between Google and Bing.

Design/methodology/approach

The authors employed the Readability Test Tool to analyze the readability and word complexity of 3,000 SERPs snippets and 3,000 associated pages in Google and Bing retrieved on 150 search queries issued by middle school children.
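The internal formulas of the Readability Test Tool are not specified in the abstract; as an illustrative sketch, a snippet can be scored with the well-known Flesch-Kincaid grade-level formula, here using a crude vowel-group syllable heuristic:

```python
# Illustrative sketch only: the Readability Test Tool's internal formulas
# are not given in the abstract. This scores text with the Flesch-Kincaid
# grade level, using a crude vowel-group heuristic to count syllables.
import re

def count_syllables(word):
    # Approximate syllables as runs of consecutive vowels (incl. y).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

snippet = "Search engines rank pages. Users read the snippets."
print(f"FK grade: {fk_grade(snippet):.1f}")
```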

Findings

A significant difference was found in the readability of SERPs snippets and associated web pages between Google and Bing. A significant difference was also observed in the number of complex words in snippets between the two engines, but not in associated web pages. At the engine level, the readability of Google and Bing snippets was significantly higher than that of the associated web pages. The readability of Google SERPs snippets was at a much higher level than that of Bing's. The readability of snippets in both engines did not match the reading comprehension levels of children in grades 6–8.

Research limitations/implications

The data corpus may be small. Analysis relied on quantitative measures.

Practical implications

Practitioners and other mediators should mitigate the readability issue in SERPs snippets. Researchers should consider text readability and word complexity simultaneously with other factors to obtain a nuanced understanding of young users' web information behaviors. Additional theoretical and methodological implications are discussed.

Originality/value

This study measured the readability and the level of word complexity embedded in SERPs snippets and compared them to respective web pages in Google and Bing. Findings provide further evidence of the readability issue of SERPs snippets and the need to solve this issue through system design improvements.

Details

Aslib Journal of Information Management, vol. 71 no. 2
Type: Research Article
ISSN: 2050-3806
