Search results

1 – 10 of over 11000
Article
Publication date: 1 August 2006

Amanda Spink, Bernard J. Jansen, Vinish Kathuria and Sherry Koshman

Downloads
2271

Abstract

Purpose

This paper reports the findings of a major study examining the overlap among results retrieved by three major web search engines. The goals of the research were to: measure the overlap (i.e. shared results) on the first results page across three major web search engines, and the differences across a wide range of user‐defined search terms; determine the differences in the first page of search results and their rankings (each web search engine's view of the most relevant content) across single‐source web search engines, including both sponsored and non‐sponsored results; and measure the degree to which a meta‐search engine, such as Dogpile.com, provides searchers with the most highly ranked search results from three major single‐source web search engines.

Design/methodology/approach

The authors collected 10,316 random Dogpile.com queries and ran an overlap algorithm using the URL of each result for each query. The first‐result‐page overlap for each query was then summarized across all 10,316 queries to determine the overall overlap metrics. For a given query, the URL of each result for each engine was retrieved from the database.
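The per-query overlap computation described above can be sketched as follows; the function name and data layout are illustrative assumptions, not the authors' actual code. Each result URL is counted once per engine that returned it, then URLs are bucketed by how many engines agree:

```python
# Sketch of a first-result-page overlap count across several engines.
from collections import Counter

def overlap_counts(results_by_engine: dict[str, list[str]]) -> Counter:
    """results_by_engine maps an engine name to its first-page result URLs.

    Returns a Counter mapping 'number of engines that returned a URL'
    to 'number of URLs with that level of agreement'.
    """
    engines_per_url = Counter()
    for urls in results_by_engine.values():
        for url in set(urls):          # de-duplicate within one engine
            engines_per_url[url] += 1
    return Counter(engines_per_url.values())

# Toy query: three engines, partially overlapping first pages.
query_results = {
    "engine_a": ["u1", "u2", "u3"],
    "engine_b": ["u2", "u3", "u4"],
    "engine_c": ["u3", "u5", "u6"],
}
print(overlap_counts(query_results))  # Counter({1: 4, 2: 1, 3: 1})
```

Summing such per-query Counters over all 10,316 queries and normalizing would yield the one-/two-/three-engine percentages the study reports.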

Findings

Of the total results, 85 percent were retrieved by only one of the three major web search engines, 12 percent by two of the engines, and 3 percent by all three. This small level of overlap reflects major differences in how web search engines retrieve and rank results.

Research limitations/implications

This study provides an important contribution to the web research literature. The findings point to the value of meta‐search engines in web retrieval to overcome the biases of single search engines.

Practical implications

The results of this research can inform people and organizations that seek to use the web as part of their information seeking efforts, and the design of web search engines.

Originality/value

This research is a large investigation into web search engine overlap using real data from a major web meta‐search engine and single web search engines that sheds light on the uniqueness of top results retrieved by web search engines.

Details

Internet Research, vol. 16 no. 4
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 1 December 2004

Alastair G. Smith

Downloads
608

Abstract

This paper explores resource discovery issues relating to New Zealand/Aotearoa information on the WWW in the twenty‐first century. Questions addressed are: How do New Zealand search engines compare with global search engines for finding information relating to New Zealand? Can search engines find everything that is available on the web? What are effective strategies for finding information relating to New Zealand on the web? What is the quality of NZ information on the web? What can librarians do to make NZ information more accessible on the web? Based on a study, it concludes that neither local nor global search engines are by themselves sufficient, and that to maximize retrieval a variety of engines is necessary. The NZ librarian can play a role in ensuring that NZ information is made both available and accessible. Although the paper discusses the situation in New Zealand, the results and conclusions are applicable to other countries.

Details

The Electronic Library, vol. 22 no. 6
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 2 August 2013

Lourdes Moreno and Paloma Martinez

Downloads
3028

Abstract

Purpose

The purpose of this paper is to show that the pursuit of a high search engine relevance ranking for a webpage is not necessarily incompatible with the pursuit of web accessibility.

Design/methodology/approach

The research described arose from an investigation into the observed phenomenon that pages from accessible websites regularly appear near the top of search engine (such as Google) results, without any deliberate effort having been made through the application of search engine optimization (SEO) techniques to achieve this. The reasons for this phenomenon appear to be found in the numerous similarities and overlapping characteristics between SEO factors and web accessibility guidelines. Context is provided through a review of sources including accessibility standards and relevant SEO studies and the relationship between SEO and web accessibility is described. The particular overlapping factors between the two are identified and the precise nature of the overlaps is explained in greater detail.

Findings

The available literature provides firm evidence that the overlapping factors not only serve to ensure the accessibility of a website for all users, but are also useful for the optimization of the website's search engine ranking. The research demonstrates that any SEO project undertaken should include, as a prerequisite, the proper design of accessible web content, inasmuch as search engines will interpret the web accessibility achieved as an indicator of quality and will be able to better access and index the resulting web content.

Originality/value

The present study indicates how developing websites with high visibility in search engine results also makes their content more accessible.

Details

Online Information Review, vol. 37 no. 4
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 1 December 2005

Seda Ozmutlu

Downloads
1376

Abstract

Purpose

The purpose of this study is to investigate whether question‐format and keyword‐format queries are more successfully processed by search engines geared towards question answering and keyword‐format querying, respectively. The study also aims to investigate whether web user characteristics and the choice of search engine affect the relevancy scores and precision of the results.

Design/methodology/approach

The results of two search engines, Google and AskJeeves, were compared for question‐format and keyword‐format queries. AskJeeves was observed to be slightly more successful in processing question‐format queries, but this finding was not statistically supported. However, Google's results for keyword‐format queries, and for the entire set of queries, were statistically superior to those of AskJeeves.

Findings

Analysis of variance (ANOVA) showed that the age of the web user does not affect the relevancy score and precision of results as strongly as other factors do. Interactions of the main factors also affected the relevancy scores and precision, meaning that different combinations of the various factors create a synergy in terms of relevancy scores and precision.

Research limitations/implications

This was a preliminary work on the effect of user characteristics on comprehension and evaluation of search query results. Future work includes expanding this study to include more web user characteristics, more levels of the web user characteristics, and inclusion of more search engines.

Originality/value

The findings of this study provide statistical evidence of the relationship between the characteristics of web users, the choice of search engine, and the relevancy scores and precision of search results.

Details

Online Information Review, vol. 29 no. 6
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 31 August 2012

Friederike Kerkmann and Dirk Lewandowski

Downloads
1740

Abstract

Purpose

The purpose of this paper is to describe the aspects to be considered when evaluating web search engines' accessibility for people with disabilities. The authors provide an overview of related work and outline a theoretical framework for a comprehensive accessibility study of web search engines, regarding the principles of disability studies and the idea of inclusion.

Design/methodology/approach

The paper is based on a literature review, and an aggregation of recommended actions in practice, mainly the W3C Web Accessibility Initiative's (WAI) evaluation model.

Findings

A good way to conduct a comprehensive accessibility study is the WAI methodology, consisting of three steps: a preliminary review to quickly identify potential accessibility problems; a conformance evaluation to determine whether a website meets established accessibility standards; and user testing to include real people with disabilities in practical use. For the use case of web search engines, some special issues have to be taken into consideration.

Research limitations/implications

The paper can be seen as a brainstorming exercise and describes a theoretical concept of how to proceed. Conclusions about the actual barriers of web search engines and criteria of satisfaction for people with disabilities do not yet exist; the model has not been tested so far.

Practical implications

This paper provides practical implications for researchers who want to conduct an accessibility study, especially of web search engines. Findings of such studies can have practical implications for web search engine developers seeking to improve the accessibility of their products. The accessibility of web search engines has implications not only for people with special needs, but also for the elderly and for temporarily handicapped people.

Originality/value

This paper combines findings from web search engine research with aspects of disability studies. It thereby provides insights for researchers, search engine developers and educators in practice on how important the accessibility of web search engines is for people with disabilities, how it can be measured and what aspects need to be considered.

Article
Publication date: 1 September 2005

Lin‐Chih Chen and Cheng‐Jye Luh

Downloads
1216

Abstract

Purpose

This study aims to present a new web page recommendation system that can help users to reduce navigational time on the internet.

Design/methodology/approach

The proposed design is based on the primacy effect of browsing behavior: users prefer top‐ranking items in search results. This approach is intuitive and requires no training data at all.

Findings

A user study showed that users are more satisfied with the proposed search methods than with general search engines using hot keywords. Moreover, two performance measures confirmed that the proposed search methods outperform other metasearch and search engines.

Research limitations/implications

The research has limitations and future work is planned along several directions. First, the search methods implemented are primarily based on the keyword match between the contents of web pages and the user query items. Using the semantic web to recommend concepts and items relevant to the user query might be very helpful in finding the exact contents that users want, particularly when the users do not have enough knowledge about the domains in which they are searching. Second, offering a mechanism that groups search results to improve the way search results are segmented and displayed also assists users to locate the contents they need. Finally, more user feedback is needed to fine‐tune the search parameters including α and β to improve the performance.

Practical implications

The proposed model can be used to improve the search performance of any search engine.

Originality/value

First, compared with the democratic voting procedure used by metasearch engines, search engine vector voting (SVV) enables a specific combination of search parameters, denoted as α and β, to be applied to a voted search engine, so that users can either narrow or expand their search results to meet their search preferences. Second, unlike page quality analysis, the hyperlink prediction (HLP) determines qualified pages by simply measuring their user behavior function (UBF) values, and thus takes less computing power. Finally, the advantages of HLP over statistical analysis are that it does not need training data, and it can target both multi‐site and site‐specific analysis.
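The abstract does not give the SVV formula itself, so the following is only a generic weighted rank-voting sketch in the same spirit: each engine "votes" for its results with a rank-discounted weight, and a per-engine parameter (a single `alpha` here stands in for the paper's α and β, whose exact roles are not specified in the abstract) scales that engine's influence on the merged ranking:

```python
# Illustrative rank-discounted voting across engines (not the paper's SVV).
from collections import defaultdict

def vote(results_by_engine: dict[str, list[str]],
         alpha: dict[str, float]) -> list[str]:
    """Merge ranked result lists; alpha scales each engine's votes."""
    scores = defaultdict(float)
    for engine, urls in results_by_engine.items():
        weight = alpha.get(engine, 1.0)
        for rank, url in enumerate(urls, start=1):
            scores[url] += weight / rank   # higher-ranked results earn more
    return sorted(scores, key=scores.get, reverse=True)

# Lowering engine "b"'s alpha narrows its influence on the merged list.
merged = vote(
    {"a": ["u1", "u2"], "b": ["u2", "u3"]},
    alpha={"a": 1.0, "b": 0.5},
)
print(merged)
```

Tuning the per-engine parameters up or down is what lets a user expand or narrow the merged results, which is the behaviour the abstract attributes to α and β.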

Details

Internet Research, vol. 15 no. 4
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 9 August 2011

Aurélie Gandour and Amanda Regolini

Downloads
4624

Abstract

Purpose

Search Engine Optimization (SEO) is a set of techniques used by websites in order to be better indexed by search engines. This paper focuses on “white hat”, “in‐page” SEO: techniques that improve a site's content, thereby making it more attractive to human visitors as well as to search engines, by making changes within the site's pages while focusing on chosen themes and keywords. The final goal is for the site to be ranked higher by one or several targeted search engines and therefore to appear higher in their results lists for specified requests. This paper describes the steps one must take to reach such a goal, focusing on the example of the website Fragfornet.

Design/methodology/approach

Fragfornet web pages were generated through a “website factory” that allows the creation of dynamic websites on demand for the employees of Cemagref. The paper explains the steps to take to optimize for search engines any website using Zope Plone; more broadly, the general recommendations described can be applied to any website to gain more visibility on search engines. After a literature review on search engine optimization, the paper describes the methods used to optimize the website before presenting the results, which were obtained quickly.

Findings

The first effects of the SEO campaign were felt quickly. One week later, as soon as the Googlebots had crawled the site and stored a newer version of it in their databases, it immediately rose in the results pages for requests concerning forest fragmentation. This paper describes some of the parameters that were monitored and some of the conclusions drawn from them.

Originality/value

This paper explains which steps to take to optimize for search engines any website built through the Cemagref website factory, or any website using Zope Plone. More broadly, the general recommendations described in this paper can be used by any librarian on any website to gain more visibility on search engines.

Article
Publication date: 1 February 2016

Mhamed Zineddine

Downloads
1091

Abstract

Purpose

The purpose of this paper is to decrease the traffic created by search engines’ crawlers and solve the deep web problem using an innovative approach.

Design/methodology/approach

A new algorithm was formulated, based on the best existing algorithms, to optimize the traffic caused by web crawlers, which accounts for approximately 40 percent of all networking traffic. The crux of this approach is that web servers monitor and log changes and communicate them as an XML file to search engines. The XML file includes the information necessary to generate refreshed pages from existing ones and to reference new pages that need to be crawled. Furthermore, the XML file is compressed to decrease its size to the minimum required.
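As an illustration of the mechanism described (the element names and file layout here are assumptions; the paper's actual XML schema is not given in the abstract), a server could serialize its change log to XML and gzip-compress it before exposing it to crawlers:

```python
# Sketch: serialize a page-change log to XML and gzip it for crawlers.
import gzip
import xml.etree.ElementTree as ET

def build_change_file(changes: list[dict], path: str) -> None:
    """changes: [{"url": ..., "status": "modified" or "new", "lastmod": ...}]"""
    root = ET.Element("changes")
    for c in changes:
        page = ET.SubElement(root, "page", status=c["status"])
        ET.SubElement(page, "url").text = c["url"]
        ET.SubElement(page, "lastmod").text = c["lastmod"]
    payload = ET.tostring(root, encoding="utf-8")
    with gzip.open(path, "wb") as f:   # compress to minimise transfer size
        f.write(payload)

build_change_file(
    [{"url": "https://example.org/a", "status": "modified",
      "lastmod": "2016-01-01"}],
    "changes.xml.gz",
)
```

A crawler fetching only this compressed delta, instead of re-crawling every page, is what yields the traffic reduction the findings report.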

Findings

The results of this study have shown that the traffic caused by search engines’ crawlers might be reduced on average by 84 percent when it comes to text content. However, binary content faces many challenges and new algorithms have to be developed to overcome these issues. The proposed approach will certainly mitigate the deep web issue. The XML files for each domain used by search engines might be used by web browsers to refresh their cache and therefore help reduce the traffic generated by normal users. This reduces users’ perceived latency and improves response time to http requests.

Research limitations/implications

The study sheds light on the deficiencies and weaknesses of the algorithms monitoring changes and generating binary files. However, a substantial decrease of traffic is achieved for text-based web content.

Practical implications

The findings of this research can be adopted by web server software and browsers’ developers and search engine companies to reduce the internet traffic caused by crawlers and cut costs.

Originality/value

The exponential growth of web content and of other internet‐based services, such as cloud computing and social networks, has been causing contention for the available bandwidth of the internet. This research provides a much‐needed approach to keeping traffic in check.

Details

Internet Research, vol. 26 no. 1
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 1 May 2000

Mike Thelwall

Abstract

How easy are business Web sites for potential customers to find? This paper reports on a survey of 60,087 Web sites from 42 of the major general and commercial domains around the world to extract statistics about their design and rate of search engine registration. Search engines are used by the majority of Web surfers to find information on the Web. However, 23 per cent of business Web sites in the survey were not registered at all in the five major search engines tested and 82 per cent were not registered in at least one, missing a sizeable potential audience. There are some simple steps that should also be taken to help a Web site to be indexed properly in search engines, primarily the use of HTML META tags for indexing, but only about a third of the site home pages in the survey used them. Wide national variations were found for both indexing and META tag inclusion.
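The kind of META-tag check the survey implies can be sketched with the standard-library HTML parser; the function name and the choice of description/keywords tags are illustrative assumptions, not the survey's actual instrument:

```python
# Sketch: detect whether a home page carries indexing META tags.
from html.parser import HTMLParser

class MetaTagFinder(HTMLParser):
    """Collects the name="..." attributes of all <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta_names = set()

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name"):
                self.meta_names.add(attrs["name"].lower())

def has_indexing_meta(html: str) -> bool:
    """True if the page declares both description and keywords META tags."""
    finder = MetaTagFinder()
    finder.feed(html)
    return {"description", "keywords"} <= finder.meta_names

page = ('<html><head><meta name="Description" content="Widgets Ltd">'
        '<meta name="keywords" content="widgets"></head></html>')
print(has_indexing_meta(page))  # True
```

Run over a crawl of home pages, a check like this yields the "about a third used META tags" kind of statistic the survey reports.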

Details

Internet Research, vol. 10 no. 2
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 1 October 1999

Herbert Snyder and Howard Rosenbaum

Downloads
569

Abstract

The paper investigates the problems of using commercial search engines for web‐link research and attempts to clarify the nature of the problems involved in the use of these engines. The research finds that search engines are highly variable in the results they produce, are limited in the search functions they offer, have poorly and/or incorrectly documented functions, use search logics that are opaque, and change the search functions they offer over time. The limitations inherent in commercial search engines should cause researchers to have reservations about any conclusions that rely on these tools as primary data‐gathering instruments. The shortcomings are market‐driven rather than inherent properties of the web or of web‐searching technologies. Improved functionalities are within the technical capabilities of search engine programmers and could be made available to the research community. The findings also offer mild support for the validity of the connection between web links and citations as analogues of intellectual linkage.

Details

Journal of Documentation, vol. 55 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords
