Search results

1 – 10 of over 72000
Article
Publication date: 20 November 2009

Dirk Lewandowski

Abstract

Purpose

The purpose of this paper is to discuss ranking factors suitable for library materials and to show that ranking in general is a complex process and that ranking for library materials requires a variety of techniques.

Design/methodology/approach

The relevant literature is reviewed to provide a systematic overview of suitable ranking factors. The discussion is based on an overview of ranking factors used in web search engines.

Findings

While a wide variety of ranking factors are applicable to library materials, today's library systems use only some of them. When designing a ranking component for the library catalogue, an individual weighting of the applicable factors is necessary.

Research limitations/implications

While the paper discusses different factors, no particular ranking formula is given. However, the paper presents the argument that such a formula must always be individual to a certain use case.

Practical implications

The factors presented can be considered when designing a ranking component for a library's search system or when discussing such a project with an ILS vendor.

Originality/value

The paper is original in that it is the first to systematically discuss ranking of library materials based on the main factors used by web search engines.

Details

Library Hi Tech, vol. 27 no. 4
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 11 April 2016

Cheng-Jye Luh, Sheng-An Yang and Ting-Li Dean Huang

Abstract

Purpose

The purpose of this paper is to estimate Google search engine’s ranking function from a search engine optimization (SEO) perspective.

Design/methodology/approach

The paper proposes an estimation function that defines the query match score of a search result as the weighted sum of scores from a limited set of factors. The search results for a query are re-ranked according to their query match scores. Effectiveness was measured by comparing the new ranks with the original ranks of the search results.
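
A minimal sketch of this estimation approach (the factor names, weights, and scores below are invented for illustration, not the paper's fitted values): each result's query match score is a weighted sum of its per-factor scores, and results are re-ranked by that score.

```python
# Hypothetical weighted-sum re-ranking, illustrating the kind of
# estimation function described in the abstract above.

def query_match_score(factor_scores, weights):
    """Weighted sum of per-factor scores for one search result."""
    return sum(weights[f] * factor_scores.get(f, 0.0) for f in weights)

def rerank(results, weights):
    """Re-rank results by estimated query match score, highest first."""
    return sorted(results,
                  key=lambda r: query_match_score(r["factors"], weights),
                  reverse=True)

# Illustrative weights loosely echoing the paper's finding that
# PageRank dominates, followed by title, then snippet and URL.
weights = {"pagerank": 0.5, "title": 0.3, "snippet": 0.1, "url": 0.1}
results = [
    {"id": "a", "factors": {"pagerank": 0.2, "title": 1.0, "snippet": 1.0, "url": 0.0}},
    {"id": "b", "factors": {"pagerank": 0.9, "title": 0.0, "snippet": 0.5, "url": 1.0}},
]
print([r["id"] for r in rerank(results, weights)])  # ['b', 'a']
```

SEO effectiveness could then be assessed by comparing this estimated order against the original engine ranking.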

Findings

The proposed method achieved the best SEO effectiveness when using the top 20 search results for a query. The empirical results reveal that PageRank (PR) is the dominant factor in Google's ranking function. The title is the second most important factor, and the snippet and the URL have roughly equal importance, with variations among queries.

Research limitations/implications

This study considered a limited set of ranking factors. The empirical results reveal that SEO effectiveness can be assessed by a simple estimation of ranking function even when the ranks of the new and original result sets are quite dissimilar.

Practical implications

The findings indicate that web marketers should pay particular attention to a webpage's PR and then place the keyword in the URL, the page title, and the snippet.

Originality/value

There have been ongoing concerns about how to formulate a simple strategy that can help a website get ranked higher in search engines. This study provides web marketers much needed empirical evidence about a simple way to foresee the ranking success of an SEO effort.

Details

Online Information Review, vol. 40 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 18 July 2016

Maayan Zhitomirsky-Geffet, Judit Bar-Ilan and Mark Levene

Abstract

Purpose

One of the under-explored aspects of user information-seeking behaviour is the influence of time on relevance evaluation. Previous studies have shown that individual users might change their assessment of search results over time. It is also known that the aggregated judgements of multiple individual users can lead to correct and reliable decisions; this phenomenon is known as the “wisdom of crowds”. The purpose of this paper is to examine whether aggregated judgements are more stable, and thus more reliable, over time than individual user judgements.

Design/methodology/approach

In this study two simple measures are proposed to calculate the aggregated judgements of search results and compare their reliability and stability to individual user judgements. In addition, the aggregated “wisdom of crowds” judgements were used as a means to compare the differences between human assessments of search results and search engine’s rankings. A large-scale user study was conducted with 87 participants who evaluated two different queries and four diverse result sets twice, with an interval of two months. Two types of judgements were considered in this study: relevance on a four-point scale, and ranking on a ten-point scale without ties.
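
As a hedged sketch of the aggregation idea (not the authors' measures; the users, result IDs, and ratings below are invented), a “wisdom of crowds” judgement for each result can be computed as the mean of the individual ratings:

```python
# Aggregating individual four-point relevance ratings into a crowd
# judgement per result, as a simple mean. Illustrative data only.

from statistics import mean

def aggregate(judgements):
    """judgements: {user: {result_id: rating}} -> {result_id: mean rating}."""
    by_result = {}
    for ratings in judgements.values():
        for rid, score in ratings.items():
            by_result.setdefault(rid, []).append(score)
    return {rid: mean(scores) for rid, scores in by_result.items()}

# Two hypothetical users rating three results on a four-point scale.
t1 = {"u1": {"r1": 4, "r2": 2, "r3": 1},
      "u2": {"r1": 3, "r2": 3, "r3": 2}}
print(aggregate(t1))  # {'r1': 3.5, 'r2': 2.5, 'r3': 1.5}
```

Stability over time could then be checked by correlating the aggregated scores from the two evaluation rounds, rather than comparing each user's judgements separately.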

Findings

It was found that aggregated judgements are much more stable than individual user judgements, yet they are quite different from search engine rankings.

Practical implications

The proposed “wisdom of crowds”-based approach provides a reliable reference point for the evaluation of search engines. This is also important for exploring the need for personalisation and for adapting a search engine's ranking over time to changes in users' preferences.

Originality/value

This is the first study to apply the notion of the “wisdom of crowds” to the under-explored phenomenon of change over time in user evaluation of relevance.

Details

Aslib Journal of Information Management, vol. 68 no. 4
Type: Research Article
ISSN: 2050-3806

Book part
Publication date: 24 November 2010

Dirk Lewandowski

Abstract

This chapter outlines how search engine technology can be used in online public access catalogs (OPACs) to help improve users’ experiences, to identify users’ intentions, and to indicate how it can be applied in the library context, along with how sophisticated ranking criteria can be applied to the online library catalog. A review of the literature and the current OPAC developments forms the basis of recommendations on how to improve OPACs. Findings were that the major shortcomings of current OPACs are that they are not sufficiently user-centered and that their results presentations lack sophistication. Furthermore, these shortcomings are not addressed in current 2.0 developments. It is argued that OPAC development should be made search-centered before additional features are applied. Although the recommendations on ranking functionality and the use of user intentions are only conceptual and not yet applied to a library catalogue, practitioners will find recommendations for developing better OPACs in this chapter. In short, readers will find a systematic view on how the search engines’ strengths can be applied to improving libraries’ online catalogs.

Details

Advances in Librarianship
Type: Book
ISBN: 978-1-84950-979-4

Article
Publication date: 27 November 2007

David Bade

Abstract

Purpose

The purpose of this paper is to examine the significance of the differences between the actual technical principles determining relevance ranking, and how relevance ranking is understood, described and evaluated by the developers of relevance ranking algorithms and librarians.

Design/methodology/approach

The discussion uses descriptions by PLWeb Turbo and C2 of their relevance ranking products and a librarian's description on her blog with the responses which it drew, contrasting these with relevancy as it is indicated in studies of the ISI citation record reported by White.

Findings

The study finds that product descriptions and librarians consistently use the term “relevance ranking” to mean both the artificial relevance ranking by statistical methods using various surrogates assumed to reliably indicate relevance and the real relevance as determined by the searcher. The paper indicates the misunderstandings arising from this terminological confusion and its consequences in the context of the invalid user models and artificial searches which accompany discussions of “relevance ranking”.

Research limitations/implications

Evaluations of relevance ranking must be based on real users and real searches. Theorising relevance as a judgement about information rather than a property of information clarifies many issues.

Practical implications

The design of search engines and OPACs will benefit from incorporating metadata that contain indications of user‐determined relevance.

Originality/value

The activity of subject analysis and indexing by human beings is presented as an activity identical in kind to the real searcher's determination of relevance, a definite statement of relevancy arising from a real communication situation rather than a statistically indicated probability.

Details

Online Information Review, vol. 31 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 14 May 2018

Sholeh Arastoopoor

Abstract

Purpose

The degree to which a text is considered readable depends on the capability of the reader. This assumption puts different information retrieval systems at risk of retrieving unreadable or hard-to-read yet relevant documents for their users. This paper aims to examine the potential use of concept-based readability measures, along with classic measures, for re-ranking search results in information retrieval systems, specifically in the Persian language.

Design/methodology/approach

Flesch–Dayani as a classic readability measure, along with document scope (DS) and document cohesion (DC) as domain-specific measures, has been applied for scoring the retrieved documents from Google (181 documents) and the RICeST database (215 documents) in the field of computer science and information technology (IT). The re-ranked results were compared with the rankings given by potential users regarding readability.
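
For illustration only, the classic English-language Flesch Reading Ease formula below stands in for the readability-scoring step; the Flesch–Dayani variant for Persian uses different coefficients, and the DS/DC measures are not reproduced here. The document statistics are invented.

```python
# Re-ranking retrieved documents by a classic readability score,
# easiest text first. Coefficients are the standard Flesch Reading
# Ease values for English, used here only as a stand-in.

def flesch_reading_ease(words, sentences, syllables):
    """Higher score means easier text (classic Flesch coefficients)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def rerank_by_readability(docs):
    """Re-rank documents so the most readable come first."""
    return sorted(docs, key=lambda d: flesch_reading_ease(*d["stats"]),
                  reverse=True)

docs = [
    {"id": "hard", "stats": (100, 4, 180)},   # long sentences, many syllables
    {"id": "easy", "stats": (100, 10, 130)},  # short sentences, fewer syllables
]
print([d["id"] for d in rerank_by_readability(docs)])  # ['easy', 'hard']
```

A domain-specific hybrid score such as the paper's DSDC would replace or be combined with the classic score in the sort key.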

Findings

The results show that the subcategories of the computer science and IT field differ in their readability and understandability. The study also shows that it is possible to develop a hybrid score based on the DS and DC measures; among the four scores applied, the list re-ranked by the hybrid DSDC score correlates with the participants' re-ranking in both groups.

Practical implications

The findings of this study would foster a new option in re-ranking search results based on their difficulty for experts and non-experts in different fields.

Originality/value

The findings and the two-mode re-ranking model proposed in this paper along with its primary focus on domain-specific readability in the Persian language would help Web search engines and online databases in further refining the search results in pursuit of retrieving useful texts for users with differing expertise.

Article
Publication date: 29 November 2011

Judit Bar‐Ilan and Mark Levene

Abstract

Purpose

The aim of this paper is to develop a methodology for assessing search results retrieved from different sources.

Design/methodology/approach

This is a two-stage method: in the first stage users select and rank the ten best search results from a randomly ordered set; in the second stage they are asked to choose the best pre-ranked result from a set of possibilities. This allows users to consider each search result separately (in the first stage) and to express their views on the rankings as a whole, as retrieved by the search provider. The method was tested in a user study that compared different country-specific search results of Google and Live Search (now Bing). The users were Israelis and the search results came from six sources: Google Israel, Google.com, Google UK, Live Search Israel, Live Search US and Live Search UK. The users evaluated the results of nine pre-selected queries, created their own preferred ranking and picked the best ranking from the six sources.

Findings

The results indicate that the group of users in this study preferred their local Google interface, i.e. Google succeeded in its country‐specific customisation of search results. Live Search was much less successful in this aspect.

Research limitations/implications

Search engines are highly dynamic, thus the findings of the case study have to be viewed cautiously.

Originality/value

The main contribution of the paper is a two‐phase methodology for comparing and evaluating search results from different sources.

Details

Online Information Review, vol. 35 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 12 October 2018

Güleda Doğan and Umut Al

Abstract

Purpose

The purpose of this paper is to analyze the similarity of intra-indicators used in research-focused international university rankings (Academic Ranking of World Universities (ARWU), NTU, University Ranking by Academic Performance (URAP), Quacquarelli Symonds (QS) and Round University Ranking (RUR)) over years, and show the effect of similar indicators on overall rankings for 2015. The research questions addressed in this study in accordance with these purposes are as follows: At what level are the intra-indicators used in international university rankings similar? Is it possible to group intra-indicators according to their similarities? What is the effect of similar intra-indicators on overall rankings?

Design/methodology/approach

Indicator-based scores of all universities in five research-focused international university rankings, for all years in which they were ranked, form the data set for the first and second research questions. The authors used multidimensional scaling (MDS) and a cosine similarity measure to analyze the similarity of indicators and answer these two questions. Indicator-based scores and overall ranking scores for 2015 are used as data, and the Spearman correlation test is applied, to answer the third research question.
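
A minimal sketch of the cosine-similarity step (the indicator names and per-university scores below are invented): each indicator is treated as a vector of university scores, and near-duplicate indicators have cosine close to 1.

```python
# Cosine similarity between ranking indicators, each represented as a
# vector of per-university scores. Data are illustrative only.

import math

def cosine(u, v):
    """Cosine similarity of two equal-length score vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

publications = [90.0, 60.0, 30.0]   # scores of three universities on one indicator
citations    = [85.0, 65.0, 25.0]   # a closely related indicator
reputation   = [10.0, 80.0, 50.0]   # a dissimilar indicator

print(round(cosine(publications, citations), 3))   # 0.997
print(round(cosine(publications, reputation), 3))  # 0.676
```

Pairwise similarities like these would feed the MDS step, which places highly similar indicators close together so that redundant ones can be grouped or omitted.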

Findings

Results of the analyses show that the intra-indicators used in ARWU, NTU and URAP are highly similar and can be grouped according to their similarities. The authors also examined the effect of similar indicators on the 2015 overall ranking lists for these three rankings. NTU and URAP are least affected by omitting similar indicators, which means these two rankings could produce overall ranking lists very similar to the existing ones using fewer indicators.

Research limitations/implications

CWTS, Mapping Scientific Excellence, Nature Index, and SCImago Institutions Rankings (until 2015) are not included in the scope of this paper, since they do not create overall ranking lists. Likewise, Times Higher Education, CWUR and US are not included because they do not present indicator-based scores. The required data were not accessible for QS for 2010 and 2011. Moreover, although QS ranks more than 700 universities, only the first 400 universities in the 2012–2015 rankings could be analyzed. Although QS's and RUR's data were analyzed in this study, it was statistically not possible to reach any conclusion for these two rankings.

Practical implications

The results of this study may be considered mainly by ranking bodies and by policy- and decision-makers. Ranking bodies may use the results to review the indicators they use, to decide which indicators to use in their rankings, and to question whether it is necessary to continue producing overall rankings. Policy- and decision-makers may also benefit by reconsidering the use of overall ranking results as an important input to their decisions and policies.

Originality/value

This study is the first to use MDS and a cosine similarity measure to reveal the similarity of indicators. Ranking data are skewed, which requires nonparametric statistical analysis; therefore, MDS is used. The study covers all ranking years and all universities in the ranking lists, which distinguishes it from similar studies in the literature that analyze shorter time intervals and only top-ranked universities. Based on the literature review, the similarity of intra-indicators for URAP, NTU and RUR is analyzed for the first time in this study.

Article
Publication date: 1 November 2006

Judit Bar‐Ilan, Mark Levene and Mazlita Mat‐Hassan

Abstract

Purpose

The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines.

Design/methodology/approach

The paper compares the rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three-month interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine.
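
Two of the simpler measures usable for such comparisons are sketched below with invented result identifiers: plain top-k overlap, and Spearman's footrule summed over the results shared by both lists. The three measures actually computed in the paper may differ.

```python
# Comparing two top-k result lists: overlap and Spearman's footrule.
# Result IDs are placeholders, not real URLs from the study.

def overlap(a, b):
    """Fraction of results appearing in both top-k lists."""
    return len(set(a) & set(b)) / max(len(a), len(b))

def footrule(a, b):
    """Sum of rank displacements of shared results (0 = identical order)."""
    return sum(abs(a.index(x) - b.index(x)) for x in set(a) & set(b))

google    = ["u1", "u2", "u3", "u4"]
alltheweb = ["u3", "u5", "u1", "u6"]

print(overlap(google, alltheweb))   # 0.5
print(footrule(google, alltheweb))  # 4 (u1 and u3 each displaced by 2)
```

With very low overlap between engines, as the paper reports, measures restricted to shared results become unstable, which is why no single measure is fully satisfactory on its own.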

Findings

The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top‐ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging.

Originality/value

The paper shows that because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.

Details

Journal of Documentation, vol. 62 no. 6
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 5 January 2018

Tehmina Amjad, Ali Daud and Naif Radi Aljohani

Abstract

Purpose

This study reviews the methods found in the literature for the ranking of authors, identifies the pros and cons of these methods, and discusses and compares them. The purpose of this study is to identify the challenges and future directions in the ranking of academic objects, especially authors, for future researchers.

Design/methodology/approach

This study reviews the methods found in the literature for the ranking of authors, classifies them into subcategories by studying and analyzing their way of achieving the objectives, discusses and compares them. The data sets used in the literature and the evaluation measures applicable in the domain are also presented.

Findings

The survey identifies the challenges involved in author ranking and outlines future directions.

Originality/value

To the best of the authors' knowledge, this is the first survey to study the author ranking problem in detail and to classify the methods according to their key functionalities, features and way of achieving the objective, according to the requirements of the problem.

Details

Library Hi Tech, vol. 36 no. 1
Type: Research Article
ISSN: 0737-8831
