Search results

21 – 30 of over 7000
Article
Publication date: 10 January 2024

Artur Strzelecki and Andrej Miklosik


Abstract

Purpose

The landscape of search engine usage has evolved since the last known data were used to calculate click-through rate (CTR) values. The objective was to provide a replicable method for accessing data from the Google search engine using programmatic access and calculating CTR values from the retrieved data to show how the CTRs have changed since the last studies were published.

Design/methodology/approach

In this study, the authors present the estimated CTR values in organic search results based on actual clicks and impressions data, and establish a protocol for collecting this data using Google programmatic access. For this study, the authors collected data on 416,386 clicks, 31,648,226 impressions and 8,861,416 daily queries.
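The CTR metric at the heart of this protocol is simply clicks divided by impressions per ranking position. A minimal Python sketch of that calculation, using invented figures rather than the paper's dataset:

```python
# CTR per ranking position: clicks / impressions, expressed as a percentage.
# All figures below are invented for illustration, not the paper's data.
clicks = {1: 9280, 2: 5820, 3: 3110}
impressions = {1: 100_000, 2: 100_000, 3: 100_000}

def ctr_by_position(clicks, impressions):
    """Return CTR (%) per position, skipping positions with no impressions."""
    return {
        pos: 100.0 * clicks[pos] / impressions[pos]
        for pos in clicks
        if impressions.get(pos, 0) > 0
    }

print({pos: round(ctr, 2) for pos, ctr in ctr_by_position(clicks, impressions).items()})
```

In practice such per-position click and impression counts would come from programmatic access to Google's data, as the authors describe.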

Findings

The results show that CTRs have decreased from the values previously reported in both academic research and industry benchmarks. The estimates indicate that the top-ranked result in Google's organic search results has a CTR of 9.28%, followed by 5.82% and 3.11% for positions two and three, respectively. The authors also demonstrate that CTRs vary across device types. On desktop devices, the CTR decreases steadily with each lower ranking position. On smartphones, the CTR starts high but drops rapidly, with an unexpected increase from position 13 onwards. Tablets have the lowest and most variable CTR values.

Practical implications

The theoretical implications include the generation of a current dataset on search engine results and user behavior (made available to the research community), the creation of a unique methodology for generating new datasets, and the presentation of updated information on CTR trends. The managerial implications include establishing the need for businesses to optimize other forms of Google search results in addition to organic text results, and the possibility of applying this study's methodology to determine CTRs for their own websites.

Originality/value

This study provides a novel method to access real CTR data and estimates current CTRs for top organic Google search results, categorized by device.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 1 May 2006

Alan Dawson and Val Hamilton


Abstract

Purpose

This paper aims to show how information in digital collections that have been catalogued using high‐quality metadata can be retrieved more easily by users of search engines such as Google.

Design/methodology/approach

The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines.

Findings

The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high‐quality metadata.

Originality/value

The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public‐sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.

Details

Journal of Documentation, vol. 62 no. 3
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 29 November 2011

Judit Bar‐Ilan and Mark Levene


Abstract

Purpose

The aim of this paper is to develop a methodology for assessing search results retrieved from different sources.

Design/methodology/approach

This is a two-phase method: in the first phase, users select and rank the ten best search results from a randomly ordered set; in the second, they are asked to choose the best pre-ranked result set from a set of possibilities. This allows users to consider each search result separately (in the first phase) and to express their views on the rankings as a whole, as retrieved by the search provider. The method was tested in a user study that compared different country-specific search results of Google and Live Search (now Bing). The users were Israelis and the search results came from six sources: Google Israel, Google.com, Google UK, Live Search Israel, Live Search US and Live Search UK. The users evaluated the results of nine pre-selected queries, created their own preferred ranking and picked the best ranking from the six sources.
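One way to quantify how closely a user's preferred ordering matches a provider's ranking, the comparison this two-stage design enables, is a rank-correlation measure such as Kendall's tau. The paper does not prescribe this particular statistic, so the following Python sketch is purely illustrative, with invented orderings:

```python
# Hypothetical data: a provider's ordering of ten results (by result id)
# and one user's preferred ordering of the same ten results.
provider_order = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
user_order = [2, 1, 3, 5, 4, 6, 8, 7, 9, 10]

def kendall_tau(order_a, order_b):
    """Kendall rank correlation between two orderings of the same items:
    (concordant pairs - discordant pairs) / total pairs."""
    rank_a = {item: i for i, item in enumerate(order_a)}
    rank_b = {item: i for i, item in enumerate(order_b)}
    items = list(rank_a)
    concordant = discordant = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            da = rank_a[items[i]] - rank_a[items[j]]
            db = rank_b[items[i]] - rank_b[items[j]]
            if da * db > 0:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

print(f"tau = {kendall_tau(provider_order, user_order):.3f}")  # 1.0 = identical orderings
```

A tau near 1 would indicate that the provider's ranking closely matches the user's preference.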

Findings

The results indicate that the group of users in this study preferred their local Google interface, i.e. Google succeeded in its country‐specific customisation of search results. Live Search was much less successful in this aspect.

Research limitations/implications

Search engines are highly dynamic, thus the findings of the case study have to be viewed cautiously.

Originality/value

The main contribution of the paper is a two‐phase methodology for comparing and evaluating search results from different sources.

Details

Online Information Review, vol. 35 no. 6
Type: Research Article
ISSN: 1468-4527


Article
Publication date: 7 July 2011

Dirk Lewandowski


Abstract

Purpose

The purpose of this paper is to test major web search engines on their performance on navigational queries, i.e. searches for homepages.

Design/methodology/approach

In total, 100 user queries were posed to six search engines (Google, Yahoo!, MSN, Ask, Seekport and Exalead). Users described the desired pages, and the result positions of these pages were recorded. Success rates and mean reciprocal rank were then calculated.
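Mean reciprocal rank averages the reciprocal of the position at which the desired page was found, counting queries with no hit as zero. A minimal Python sketch with invented positions, not the study's data:

```python
# Position of the desired homepage in each query's result list
# (None = not found). Values are invented for illustration.
result_positions = [1, 1, 2, 1, None, 3, 1, 1, 10, 1]

def mean_reciprocal_rank(positions):
    """MRR: average of 1/rank over all queries, counting misses as 0."""
    return sum(1.0 / p for p in positions if p) / len(positions)

def success_rate(positions, cutoff=10):
    """Share of queries whose target appeared within the first `cutoff` results."""
    return sum(1 for p in positions if p and p <= cutoff) / len(positions)

print(f"MRR = {mean_reciprocal_rank(result_positions):.3f}")
print(f"Success@10 = {success_rate(result_positions):.0%}")
```

For navigational queries the ideal engine would score an MRR of 1.0, placing the desired homepage first every time.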

Findings

The performance of the major search engines Google, Yahoo!, and MSN was found to be the best, with around 90 per cent of queries answered correctly. Ask and Exalead performed worse but received good scores as well.

Research limitations/implications

All queries were in German, and the German‐language interfaces of the search engines were used. Therefore, the results are only valid for German queries.

Practical implications

When designing a search engine to compete with the major players, care should be taken over performance on navigational queries, as users' quality ratings of search engines are easily influenced by it.

Originality/value

This study systematically compares the major search engines on navigational queries and compares the findings with studies on the retrieval effectiveness of the engines on informational queries.

Article
Publication date: 10 October 2016

Olof Sundin and Hanna Carlsson


Abstract

Purpose

This paper investigates school teachers' experiences of supporting pupils, and their perceptions of how pupils search for and assess information, now that search engines have become a technology of literacy in schools. By situating technologies of literacy as sociomaterial, the paper analyses and discusses these experiences and understandings in order to challenge dominant views of search in information literacy research.

Design/methodology/approach

Six focus group interviews with a total of 39 teachers working at four different elementary and secondary schools were conducted in the autumn of 2014. Analysis was carried out from a sociomaterial perspective, which provides tools for understanding how pupils and teachers interact with, and must translate their interests to, technologies of literacy, in this case search engines such as Google.

Findings

The teachers expressed difficulties in conceptualizing search as something they could teach. When they did, search was most often identified as a practical skill. A critical perspective on search, one recognizing Google's role as a dominant part of the information infrastructure and a co-constructor of what there is to know, was largely lacking. As a consequence of this neglected responsibility for teaching search, critical assessment of online information was conflated with Google's relevance ranking.

Originality/value

The study develops a critical understanding of the role of searching and search engines as technologies of literacy in relation to critical assessment in schools. This is of value for information literacy training.

Details

Journal of Documentation, vol. 72 no. 6
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 1 April 2005

Margaret Markland


Abstract

Purpose

To compare the resource discovery network (RDN) hubs and Google as search tools within an academic context, taking into account well documented user information seeking behaviours. To find out whether the students' apparent preference for search engines as an information retrieval tool means that they might miss quality online resources to support their academic work.

Design/methodology/approach

With key factors about user behaviour and service provision in mind, to conduct a small study to see what students are actually presented with when they search for online information for their academic studies, by comparing search results from the RDN hubs and Google.

Findings

Analysis of the results suggests that exclusive use of search engines will lead users to miss the high-quality resources provided by the RDN hubs; that users who approach subject gateways in the same way they use search engines are likely to miss much of what the hubs' sophisticated structures and search options offer; and that search engines do provide access to quality resources.

Research limitations/implications

A larger scale investigation of the level of sophistication of searching behaviour among hubs users is called for.

Practical implications

The study emphasizes the need for online information service developers to take into account well documented user behaviours when designing new services.

Originality/value

The paper will be of value to researchers in the fields of information retrieval and information seeking behaviour, and to developers and providers of online information services to the academic community.

Details

Performance Measurement and Metrics, vol. 6 no. 1
Type: Research Article
ISSN: 1467-8047


Article
Publication date: 19 October 2018

Artur Strzelecki


Abstract

Purpose

The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so they can be removed from the search results.

Design/methodology/approach

Undertakes a deep analysis of more than 3.2bn pages removed from Google's search results at the request of reporting organizations from 2011 to 2018, and of over 460m pages removed from Bing's search results from 2015 to 2017. The paper focuses on pages belonging to the .pl country-code top-level domain (ccTLD).

Findings

Although the number of requests to remove data from search results has been growing year on year, fewer URLs have been reported in recent years. Some of the requests are, however, unjustified and are rejected by teams representing the search engines. In terms of reporting copyright violations, one company in particular stands out (AudioLock.Net), accounting for 28.1 percent of all reports sent to Google (the top ten companies combined were responsible for 61.3 percent of the total number of reports).

Research limitations/implications

As not every request can be published, the study is based only on what is publicly available. Also, the data assigned to Poland are based solely on the ccTLD domain name (.pl); other domain extensions used by Polish internet users were not considered.

Originality/value

This is the first global analysis of data from the transparency reports published by search engine companies; prior research has been based on specific notices.

Details

Aslib Journal of Information Management, vol. 71 no. 1
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 13 March 2018

Cristina I. Font-Julian, José-Antonio Ontalba-Ruipérez and Enrique Orduña-Malea


Abstract

Purpose

The purpose of this paper is to determine the effect of the chosen search engine results page (SERP) on the website-specific hit count estimation indicator.

Design/methodology/approach

A sample of 100 Spanish rare disease association websites is analysed, obtaining the website-specific hit count estimation for the first and last SERPs in two search engines (Google and Bing) at two different periods in time (2016 and 2017).

Findings

It has been empirically demonstrated that there are differences between the number of hits returned on the first and last SERP in both Google and Bing. These differences are significant when they exceed a threshold value on the first SERP.

Research limitations/implications

Future studies considering other samples, more SERPs and generating different queries other than website page count (<site>) would be desirable to draw more general conclusions on the nature of quantitative data provided by general search engines.

Practical implications

Selecting the wrong SERP to calculate a metric (in this case, website-specific hit count estimation) might produce misleading results, comparisons and performance rankings. The empirical data suggest that the first SERP captures the differences between websites better, because it has greater discriminating power, and is more appropriate for webometric longitudinal studies.

Social implications

The findings allow improving future quantitative webometric analyses based on website-specific hit count estimation metrics in general search engines.

Originality/value

The website-specific hit count estimation variability between SERPs has been empirically analysed, considering two different search engines (Google and Bing), a set of 100 websites focussed on a similar market (Spanish rare diseases associations), and two annual samples, making this study the most exhaustive on this issue to date.

Details

Aslib Journal of Information Management, vol. 70 no. 2
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 3 August 2012

Sayyed Mahdi Taheri and Nadjla Hariri


Abstract

Purpose

The purpose of this research was to assess and compare the indexing and ranking of XML‐based content objects containing MARCXML and XML‐based Dublin Core (DCXML) metadata elements by general search engines (Google and Yahoo!), in a comparative analytical study.

Design/methodology/approach

One hundred XML content objects in two groups were analyzed: 50 records with MARCXML elements and 50 with DCXML, published on two web sites (www.dcmixml.islamicdoc.org and www.marcxml.islamicdoc.org). The web sites were then submitted to the Google and Yahoo! search engines.

Findings

The indexing of metadata records, and the differences in their indexing and ranking, were examined using descriptive statistics and a non-parametric Mann-Whitney U test. The findings show that content objects could be made visible through all of their metadata elements. There was no significant difference between the two groups' indexing, but a difference was observed in ranking.
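The Mann-Whitney U statistic used here compares two independent samples without assuming normality: it counts, over all cross-group pairs, how often one group's value beats the other's. An illustrative Python sketch with invented ranking positions, not the study's data:

```python
# Hypothetical search-result ranking positions for the two record groups
# (lower = ranked higher); invented values, not the study's measurements.
marcxml_ranks = [3, 5, 6, 8, 12, 14, 15, 18]
dcxml_ranks = [1, 2, 4, 7, 9, 10, 11, 13]

def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: count cross-group pairs where a ranks
    better (lower position number) than b; ties contribute 0.5."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a < b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

u_marc = mann_whitney_u(marcxml_ranks, dcxml_ranks)
u_dc = mann_whitney_u(dcxml_ranks, marcxml_ranks)
assert u_marc + u_dc == len(marcxml_ranks) * len(dcxml_ranks)  # sanity check
print(u_marc, u_dc)  # the smaller U is then compared against a critical value
```

In a real analysis the smaller U (or its normal approximation) is checked against a significance threshold; libraries such as SciPy wrap this in a single call.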

Practical implications

The findings of this research can help search engine designers in the optimum use of metadata elements to improve their indexing and ranking process with the aim of increasing availability. The findings can also help web content object providers in the proper and efficient use of metadata systems.

Originality/value

This is the first research to examine the interoperability between XML‐based metadata and web search engines, and compares the MARC format and DCMI in a research approach.

Details

The Electronic Library, vol. 30 no. 4
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 12 June 2014

Liwen Vaughan


Abstract

Purpose

The purpose of this paper is to examine the feasibility of discovering business information from search engine query data. Specifically the study tried to determine whether search volumes of company names are correlated with the companies’ business performance and position data.

Design/methodology/approach

The top 50 US companies in the 2012 Fortune 500 list were included in the study. The following business performance and position data were collected: revenues, profits, assets, stockholders’ equity, profits as a percentage of revenues, and profits as a percentage of assets. Data on the search volumes of the company names were collected from Google Trends, which is based on search queries users enter into Google. Google Trends data were collected in the two scenarios of worldwide searches and US searches.

Findings

The study found significant correlations between search volume data and business performance and position data, suggesting that search engine query data can be used to discover business information. Google Trends’ worldwide search data were better than the US domestic search data for this purpose.
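The correlation at the core of this design can be checked with a Pearson coefficient. A minimal Python sketch from first principles, using invented figures rather than the paper's Fortune 500 data:

```python
import statistics

# Hypothetical search-volume index and revenue (in $bn) for six companies;
# invented figures, not the paper's Fortune 500 dataset.
search_volume = [88, 67, 95, 41, 73, 55]
revenue = [452, 247, 469, 137, 286, 182]

def pearson(xs, ys):
    """Pearson correlation: covariance over the product of deviations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(search_volume, revenue):.2f}")  # near +1: strong positive link
```

A coefficient near +1 would support the kind of relationship the study reports between search volumes and business performance figures.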

Research limitations/implications

The study is limited to only one country and to one year of data.

Practical implications

Publicly available search engine query data such as those from Google Trends can be used to estimate business performance and position data which are not always publicly available. Search engine query data are timelier than business data.

Originality/value

This is the first study to establish a relationship between search engine query data and business performance and position data.

Details

Online Information Review, vol. 38 no. 4
Type: Research Article
ISSN: 1468-4527

