Search results
Yvonne Kammerer and Peter Gerjets
Abstract
Purpose — To provide an overview of recent research that examined how search engine users evaluate and select Web search results and how alternative search engine interfaces can support Web users' credibility assessment of Web search results.
Design/methodology/approach — As theoretical background, Information Foraging Theory (Pirolli, 2007; Pirolli & Card, 1999) from cognitive science and Prominence-Interpretation Theory (Fogg, 2003) from communication and persuasion research are presented. Furthermore, a range of recent empirical research that investigated the effects of alternative SERP layouts on searchers' information quality or credibility assessments of search results is reviewed, and approaches that aim at automatically classifying search results according to specific genre categories are reported.
Findings — The chapter reports findings that Web users often rely heavily on the ranking provided by search engines, without paying much attention to the reliability or trustworthiness of the Web pages themselves. Furthermore, the chapter outlines how alternative search engine interfaces that display search results in a format different from a list and/or provide prominent quality-related cues in the SERPs can foster searchers' credibility evaluations.
Research limitations/implications — The reported empirical studies, search engine interfaces, and Web page classification systems do not constitute an exhaustive list.
Originality/value — The chapter provides insights for researchers, search engine developers, educators, and students on how the development and use of alternative search engine interfaces might affect Web users' search and evaluation strategies during Web search as well as their search outcomes in terms of retrieving high-quality, credible information.
Abstract
Purpose
The purpose of this paper is to analyze the readability and level of word complexity of search engine results pages (SERPs) snippets and associated web pages between Google and Bing.
Design/methodology/approach
The authors employed the Readability Test Tool to analyze the readability and word complexity of 3,000 SERPs snippets and 3,000 associated pages in Google and Bing retrieved on 150 search queries issued by middle school children.
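The abstract does not reproduce the Readability Test Tool's formulas. Purely as a hedged illustration of what "readability" and "word complexity" measure, a standard metric such as Flesch Reading Ease, together with a complex-word ratio (words of three or more syllables), can be approximated with a naive syllable counter:

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per contiguous vowel group, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease: higher scores mean easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

def complex_word_ratio(text):
    # Share of words with three or more (estimated) syllables.
    words = re.findall(r"[A-Za-z']+", text)
    return sum(count_syllables(w) >= 3 for w in words) / len(words) if words else 0.0

snippet = "The cat sat on the mat. It was happy."
print(flesch_reading_ease(snippet), complex_word_ratio(snippet))
```

A real study would use a validated tool; this sketch only illustrates the kind of computation behind such scores.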
Findings
A significant difference was found in the readability of SERPs snippets and associated web pages between Google and Bing. A significant difference was also observed in the number of complex words in snippets between the two engines, but not in the associated web pages. At the engine level, the readability of both Google and Bing snippets was significantly higher than that of the associated web pages. The readability of Google SERPs snippets was at a much higher level than that of Bing's. The readability of snippets in both engines did not match the reading comprehension level of children in grades 6–8.
Research limitations/implications
The data corpus is relatively small, and the analysis relied solely on quantitative measures.
Practical implications
Practitioners and other mediators should mitigate the readability issue in SERPs snippets. Researchers should consider text readability and word complexity simultaneously with other factors to obtain a nuanced understanding of young users' web information behaviors. Additional theoretical and methodological implications are discussed.
Originality/value
This study measured the readability and the level of word complexity embedded in SERPs snippets and compared them to the respective web pages in Google and Bing. The findings provide further evidence of the readability issue of SERPs snippets and the need to address it through system design improvements.
Judit Bar‐Ilan, Mark Levene and Mazlita Mat‐Hassan
Abstract
Purpose
The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines.
Design/methodology/approach
The paper compares rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three-month interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine.
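The abstract does not name the three measures. Purely as an illustration of the kind of comparison involved, two simple statistics for a pair of top-ten lists are their overlap and a Spearman-footrule-style distance over the shared URLs (the function names and data below are illustrative, not the paper's):

```python
def overlap(ranking_a, ranking_b):
    # Number of URLs appearing in both top-ten lists.
    return len(set(ranking_a) & set(ranking_b))

def footrule_on_shared(ranking_a, ranking_b):
    # Sum of absolute rank differences, computed only over shared URLs;
    # 0 means the shared results are ranked identically.
    pos_a = {url: rank for rank, url in enumerate(ranking_a)}
    pos_b = {url: rank for rank, url in enumerate(ranking_b)}
    return sum(abs(pos_a[u] - pos_b[u]) for u in set(pos_a) & set(pos_b))

week1 = ["a.com", "b.com", "c.com", "d.com"]
week2 = ["b.com", "a.com", "e.com", "f.com"]
print(overlap(week1, week2), footrule_on_shared(week1, week2))
```

As the paper's findings note, a low overlap makes such distance measures hard to interpret, since they are computed on very few shared items.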
Findings
The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top‐ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging.
Originality/value
The paper shows that because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
Daniel Onaifo and Diane Rasmussen
Abstract
Purpose
The aim of this paper is to examine the phenomenon of search engine optimization (SEO) as a mechanism for improving libraries' digital content findability on the web.
Design/methodology/approach
The study applies web analytical tools, such as Alexa.com, in the collection of data about Canadian libraries' visibility performance in the ranking of search engine results. Concepts from the Integrated IS&R Research Framework are applied to analyze SEO as an element within the Framework.
Findings
The results show that certain website characteristics do have an effect on how well libraries' websites are ranked by search engines. Notably, a library website's reputation and the number of its search-engine-indexed webpages increase its ranking on SERPs as well as the findability of its digital content.
Originality/value
Most of the existing works on SEO have been confined to popular literature, outside of scholarly academic research in library and information science. Only a few studies with a focus on libraries' application of SEO exist. No known study has applied an empirical approach to the examination of relevant libraries' website characteristics to determine their visibility performance on search engine results pages (SERPs). This study identified several website characteristics that can be optimized for higher SERP rankings. It also analyzed the impact of external links, as well as that of the number of webpages indexed by search engines, on higher SERP rankings.
Cheng-Jye Luh, Sheng-An Yang and Ting-Li Dean Huang
Abstract
Purpose
The purpose of this paper is to estimate Google search engine’s ranking function from a search engine optimization (SEO) perspective.
Design/methodology/approach
The paper proposes an estimation function that defines the query match score of a search result as the weighted sum of scores from a limited set of factors. The search results for a query are re-ranked according to their query match scores. Effectiveness was measured by comparing the new ranks with the original ranks of the search results.
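A minimal sketch of such a weighted-sum estimation, re-ranking results by their query match scores; the factor names and weight values below are hypothetical placeholders, not the paper's fitted parameters:

```python
def query_match_score(result, weights):
    # Weighted sum of normalized scores from a limited set of factors.
    return sum(weight * result[factor] for factor, weight in weights.items())

def rerank(results, weights):
    # Re-rank search results by descending estimated query match score.
    return sorted(results, key=lambda r: query_match_score(r, weights), reverse=True)

# Hypothetical weights; PageRank is weighted heaviest, echoing the paper's finding.
weights = {"pagerank": 0.5, "title": 0.25, "snippet": 0.15, "url": 0.10}
results = [
    {"name": "page-1", "pagerank": 0.2, "title": 1.0, "snippet": 1.0, "url": 0.0},
    {"name": "page-2", "pagerank": 0.9, "title": 0.5, "snippet": 0.0, "url": 1.0},
]
print([r["name"] for r in rerank(results, weights)])
```

Comparing this re-ranking with the engine's original ordering is what the paper uses to gauge SEO effectiveness.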
Findings
The proposed method achieved the best SEO effectiveness when using the top 20 search results for a query. The empirical results reveal that PageRank (PR) is the dominant factor in Google's ranking function. The title is the second most important factor, while the snippet and the URL have roughly equal importance, with variations among queries.
Research limitations/implications
This study considered a limited set of ranking factors. The empirical results reveal that SEO effectiveness can be assessed by a simple estimation of ranking function even when the ranks of the new and original result sets are quite dissimilar.
Practical implications
The findings indicate that web marketers should pay particular attention to a webpage's PR, and then place the keyword in the URL, the page title, and the snippet.
Originality/value
There have been ongoing concerns about how to formulate a simple strategy that can help a website get ranked higher in search engines. This study provides web marketers much-needed empirical evidence about a simple way to foresee the ranking success of an SEO effort.
Artur Strzelecki and Andrej Miklosik
Abstract
Purpose
The landscape of search engine usage has evolved since the last known data were used to calculate click-through rate (CTR) values. The objective was to provide a replicable method for accessing data from the Google search engine using programmatic access and calculating CTR values from the retrieved data to show how the CTRs have changed since the last studies were published.
Design/methodology/approach
In this study, the authors present the estimated CTR values in organic search results based on actual clicks and impressions data, and establish a protocol for collecting this data using Google programmatic access. For this study, the authors collected data on 416,386 clicks, 31,648,226 impressions and 8,861,416 daily queries.
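CTR at a given ranking position is simply aggregated clicks divided by aggregated impressions. A sketch of that aggregation over Search-Console-style rows (the row structure here is a hypothetical simplification of the programmatic-access data):

```python
from collections import defaultdict

def ctr_by_position(rows):
    # Aggregate clicks and impressions per ranking position, then compute CTR in %.
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for row in rows:
        clicks[row["position"]] += row["clicks"]
        impressions[row["position"]] += row["impressions"]
    return {pos: 100.0 * clicks[pos] / total
            for pos, total in impressions.items() if total > 0}

rows = [
    {"position": 1, "clicks": 90, "impressions": 1000},
    {"position": 1, "clicks": 10, "impressions": 77},   # another query, same position
    {"position": 2, "clicks": 58, "impressions": 1000},
]
ctrs = ctr_by_position(rows)
print({pos: round(ctr, 2) for pos, ctr in sorted(ctrs.items())})
```

The study performs this kind of aggregation separately per device type, which is what surfaces the desktop/smartphone/tablet differences reported below.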
Findings
The results show that CTRs have decreased from previously reported values in both academic research and industry benchmarks. The estimates indicate that the top-ranked result in Google's organic search results features a CTR of 9.28%, followed by 5.82% and 3.11% for positions two and three, respectively. The authors also demonstrate that CTRs vary across device types. On desktop devices, the CTR decreases steadily with each lower ranking position. On smartphones, the CTR starts high but decreases rapidly, before an unprecedented increase from position 13 onwards. Tablets have the lowest and most variable CTR values.
Practical implications
The theoretical implications include the generation of a current dataset on search engine results and user behavior, made available to the research community; the creation of a unique methodology for generating new datasets; and the presentation of updated information on CTR trends. The managerial implications include establishing the need for businesses to optimize other forms of Google search results in addition to organic text results, and the possibility of applying this study's methodology to determine CTRs for their own websites.
Originality/value
This study provides a novel method to access real CTR data and estimates current CTRs for top organic Google search results, categorized by device.
Abstract
Search engines usually offer a date-restricted search on their advanced search pages, but determining the actual update date of a web page is not without problems. Conducts a study testing date-restricted queries on the search engines Google, Teoma and Yahoo! Finds that these searches fail to work properly in the engines examined. Finally, discusses the implications of this for further research and search engine development.
Abstract
Purpose
The purpose of this paper is to identify the most popular techniques used to rank a web page highly in Google.
Design/methodology/approach
The paper presents the results of a study into 50 highly optimized web pages that were created as part of a Search Engine Optimization competition. The study focuses on the most popular techniques that were used to rank highest in this competition, and includes an analysis of the use of PageRank, number of pages, number of in-links, domain age and the use of third-party sites such as directories and social bookmarking sites. A separate study was made of 50 non-optimized web pages for comparison.
Findings
The paper provides insight into the techniques that successful Search Engine Optimizers use to ensure a page ranks highly in Google. It recognizes the importance of PageRank and links, as well as of directories and social bookmarking sites.
Research limitations/implications
Only the top 50 web sites for a specific query were analyzed. Analyzing more web sites and comparing with similar studies in different competitions would provide more concrete results.
Practical implications
The paper offers a revealing insight into the techniques used by industry experts to rank highly in Google, and the success or otherwise of those techniques.
Originality/value
This paper fulfils an identified need for web sites and e‐commerce sites keen to attract a wider web audience.
Abstract
Purpose
The purpose of this paper is to compare five major web search engines (Google, Yahoo, MSN, Ask.com, and Seekport) for their retrieval effectiveness, taking into account not only the results, but also the results descriptions.
Design/methodology/approach
The study uses real-life queries. Results are anonymized and randomized, and are judged by the persons who posed the original queries.
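A sketch of what anonymizing and randomizing pooled results might look like: engine labels are hidden behind opaque ids (kept aside for later un-blinding) and the pooled list is shuffled before judging. The structure is illustrative, not the study's actual instrument:

```python
import random

def prepare_for_judging(results_by_engine, seed=42):
    # Pool results from all engines, hide the engine label behind an opaque id,
    # and shuffle so judges cannot infer the source from ordering.
    pooled, key = [], {}
    for engine in sorted(results_by_engine):
        for url in results_by_engine[engine]:
            item_id = len(pooled)
            key[item_id] = engine          # retained for un-blinding after judging
            pooled.append({"id": item_id, "url": url})
    random.Random(seed).shuffle(pooled)
    return pooled, key

pooled, key = prepare_for_judging({"Google": ["u1", "u2"], "Yahoo": ["u3"]})
print([item["id"] for item in pooled], key)
```

Blinding of this kind is what allows per-engine relevance scores to be compared without judge bias toward a familiar engine.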
Findings
The two major search engines, Google and Yahoo, perform best, with no significant differences between them. Google delivers significantly more relevant result descriptions than any other search engine, which could be one reason why users perceive this engine as superior.
Research limitations/implications
The study is based on a user model in which the user considers a certain number of results fairly systematically. This may not be the case in real life.
Practical implications
The paper implies that search engines should focus on relevant descriptions. Searchers are advised to use other search engines in addition to Google.
Originality/value
This is the first major study to compare results and their descriptions systematically, and it proposes new retrieval measures that take result descriptions into account.