Search results

1 – 10 of over 3000
Article
Publication date: 1 April 2000

David Green

Abstract

The interrelation between Web publishing and information retrieval technologies is explored. The different elements of the Web have implications for indexing and searching Web pages. There are two main platforms used for searching the Web – directories and search engines – which later became combined to create one‐stop search sites, resulting in the Web business model known as portals. Portalisation gave rise to a second generation of firms delivering innovative search technology. Various new approaches to Web indexing and information retrieval are listed. PC‐based search tools incorporate intelligent agents to allow greater manipulation of search strategies and results. Current trends are discussed, in particular the rise of XML, and their implications for the future. It is concluded that the Web is emerging from a nascent stage and is evolving into a more complex, diverse and structured environment.

Details

Online Information Review, vol. 24 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 19 May 2021

Evagelos Varthis, Marios Poulos, Ilias Giarenis and Sozon Papavlasopoulos

Abstract

Purpose

This study aims to provide a system capable of static searching on a large number of unstructured texts directly on the Web domain while keeping costs to a minimum. The proposed framework is applied to the unstructured texts of Migne’s Patrologia Graeca (PG) collection, setting PG as an implementation example of the method.

Design/methodology/approach

The unstructured texts of PG were automatically transformed into a read-only Not Only SQL (NoSQL) database with a structure identical to that of a representational state transfer (REST) access-point interface. The transformation makes it possible to execute queries and retrieve ranked results based on a specialized application of the extended Boolean model.

Findings

Using a specifically built Web-browser-based search tool, the user can quickly locate ranked relevant fragments of texts, with the ability to navigate back and forth. The user can search using the initial part of words and can ignore the diacritics of the Greek language. The performance of the search system is examined comparatively when different versions of the Hypertext Transfer Protocol (HTTP) are used, for various network latencies and different modes of network connection. Queries using HTTP/2 have by far the best performance, compared to any of the HTTP/1.1 modes.

Originality/value

The system is not limited to the case study of PG and has a generic application in the field of humanities. The expandability of the system in terms of semantic enrichment is feasible by taking into account synonyms and topics if they are available. The system’s main advantage is that it is totally static which implies important features such as simplicity, efficiency, fast response, portability, security and scalability.
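The abstract above names the extended Boolean model as the basis for ranking but gives no formulas. As a hedged illustration only (the weights and p value below are made up, not taken from the paper), here is a minimal sketch of p-norm scoring in that model:

```python
# Sketch of extended Boolean (p-norm) ranking. Document term weights are
# assumed normalized to [0, 1]; all values here are illustrative.

def or_score(weights, p=2.0):
    """p-norm OR: high if any query term is strongly present."""
    return (sum(w ** p for w in weights) / len(weights)) ** (1.0 / p)

def and_score(weights, p=2.0):
    """p-norm AND: high only if all query terms are present."""
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / len(weights)) ** (1.0 / p)

# A fragment matching both query terms outranks one matching a single term,
# under either connective.
doc_both = [0.8, 0.7]
doc_one = [0.8, 0.0]
assert or_score(doc_both) > or_score(doc_one)
assert and_score(doc_both) > and_score(doc_one)
```

Because scores like these depend only on precomputed term weights, they can be evaluated client-side against a static, read-only index, which is consistent with the "totally static" design the abstract emphasizes.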

Details

International Journal of Web Information Systems, vol. 17 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 6 July 2008

Ian Rowlands, David Nicholas, Peter Williams, Paul Huntington, Maggie Fieldhouse, Barrie Gunter, Richard Withey, Hamid R. Jamali, Tom Dobrowolski and Carol Tenopir

Abstract

Purpose

This article is an edited version of a report commissioned by the British Library and JISC to identify how the specialist researchers of the future (those born after 1993) are likely to access and interact with digital resources in five to ten years' time. The purpose is to investigate the impact of digital transition on the information behaviour of the Google Generation and to guide library and information services to anticipate and react to any new or emerging behaviours in the most effective way.

Design/methodology/approach

The study was virtually longitudinal and is based on a number of extensive reviews of related literature, survey data mining and a deep log analysis of a British Library and a JISC web site intended for younger people.

Findings

The study shows that much of the impact of ICTs on the young has been overestimated. The study claims that although young people demonstrate an apparent ease and familiarity with computers, they rely heavily on search engines, view rather than read and do not possess the critical and analytical skills to assess the information that they find on the web.

Originality/value

The paper reports on a study that overturns the common assumption that the “Google generation” is the most web‐literate.

Details

Aslib Proceedings, vol. 60 no. 4
Type: Research Article
ISSN: 0001-253X

Details

AI in Fashion Industry
Type: Book
ISBN: 978-1-80262-633-9

Article
Publication date: 14 September 2015

Natali Helberger, Katharina Kleinen-von Königslöw and Rob van der Noll

Abstract

Purpose

The purposes of this paper are to deal with two questions: given that search engines, social networks and app-stores are often referred to as gatekeepers to diverse information access, what is the evidence to substantiate these gatekeeper concerns, and to what extent are existing regulatory solutions for controlling gatekeepers suitable at all to address new diversity concerns? The paper also maps the different gatekeeper concerns about media diversity, as evidenced in existing research, against the background of network gatekeeping theory, critically analyses some of the currently discussed regulatory approaches and develops the contours of a more user-centric approach to gatekeeper control and media diversity.

Design/methodology/approach

This is a conceptual research work based on desk research into the relevant communications science, economic and legal academic literature, and the relevant laws and public policy documents. Based on the existing evidence, as well as on applying the insights from network gatekeeping theory, this paper then critically reviews the existing legal/policy discourse and identifies elements for an alternative approach.

Findings

This paper finds that, when looking at search engines, social networks and app stores, many concerns about the influence of the new information intermediaries on media diversity do not have their source so much in control over critical resources or access to information, as is the case with traditional gatekeepers. Instead, the real bottleneck is access to the user, and the way the relationship between social network, search engine or app platforms and users is given form. Based on this observation, the paper concludes that regulatory initiatives in this area would need to pay more attention to the dynamic relationship between gatekeeper and gated.

Research limitations/implications

This is a conceptual piece based on desk research, meaning that the assumptions and conclusions have not been validated by the authors' own empirical research. Also, although the authors have conducted the literature review as broadly and thoroughly as possible, given the breadth of the issue and the diversity of research outlets it cannot be excluded that one or another publication has been overlooked.

Practical implications

This paper makes a number of very concrete suggestions of how to approach potential challenges from the new information intermediaries to media diversity.

Social implications

The societal implications of search engines, social networks and app stores for media diversity cannot be overestimated. And yet, it is the position of users, and their exposure to diverse information, that is often neglected in the current dialogue. By drawing attention to the dynamic relationship between gatekeeper and gated, this paper highlights the importance of this relationship for diverse exposure to information.

Originality/value

While there is currently much discussion about the possible challenges from search engines, social networks and app-stores for media diversity, a comprehensive overview in the scholarly literature of the evidence that actually exists is still lacking. And while most of the regulatory solutions still depart from a pre-networked, static understanding of "gatekeeper", we develop our analysis on the basis of a more dynamic approach that takes into account the fluid and interactive relationship between the roles of "gatekeepers" and "gated". Seen from this perspective, the regulatory solutions discussed so far appear in a very different light.

Details

info, vol. 17 no. 6
Type: Research Article
ISSN: 1463-6697

Article
Publication date: 7 December 2015

Zoe Dickinson and Mike Smit

Abstract

Purpose

The purpose of this paper is to examine the challenges and benefits presented by search engine visibility for public libraries. This paper outlines the preliminary results of a pilot study investigating search engine visibility in two Canadian public libraries, and discusses practical approaches to search engine visibility.

Design/methodology/approach

The study consists of semi-structured interviews with librarians from two multi-branch Canadian public library systems, combined with quantitative data provided by each library, as well as data obtained through site-specific searches in Google and Bing. Possible barriers to visibility are identified through thematic analysis of the interviews. Practical approaches are identified by the author based on a literature review.

Findings

The initial findings of this pilot study identify a complex combination of barriers to visibility on search engines, in the form of attitudes, policies, organizational structures and technological difficulties.

Research limitations/implications

This paper describes a small, preliminary pilot study. More research is needed before any firm conclusions can be reached.

Practical implications

A review of the literature shows the increasing importance of search engine visibility for public libraries. This paper outlines practical approaches which can be undertaken immediately by libraries, as well as delving into the underlying issues which may be affecting libraries’ progress on the issue.

Originality/value

There has been little original research investigating the reasons behind libraries' lack of visibility in search engine results pages. This paper provides insight into a previously unexplored area by examining public libraries' relationships with search engines.

Details

Library Hi Tech News, vol. 32 no. 10
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 16 August 2022

Xiaoyi Sylvia Gao, Imran S. Currim and Sanjeev Dewan

Abstract

Purpose

This paper aims to demonstrate how consumer clickstream data from a leading hotel search engine can be used to validate two hidden information processing stages – first eliminate alternatives, then choose – proposed by the revered information processing theory of consumer choice.

Design/methodology/approach

This study models the two hidden information processing stages as hidden states in a hidden Markov model, estimated on consumer search behavior, product attributes and diversity of alternatives in the consideration set.

Findings

First, the stage of information processing can be statistically characterized in terms of consumer search covariates, including trip characteristics, use of search tools and the diversity of the consideration set, operationalized in terms of: number of brands, dispersion of price and dispersion of quality. Second, users are more sensitive to price and quality in the first rather than the second stage, which is closer to purchase.

Research limitations/implications

The results suggest practical implications for how search engine managers can target consumers with appropriate marketing-mix actions, based on which information processing stage consumers might be in.

Originality/value

Most previous studies on validating the information processing theory of consumer choice have used laboratory experiments, subjects and information display boards comprising hypothetical product alternatives and attributes. Only a few studies use observational data. In contrast, this study uniquely uses point-of-purchase clickstream data on actual visitors at a leading hotel search engine and tests the theory based on real products, attributes and diversity of the consideration set.
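The abstract above says the two processing stages are modelled as hidden states of a hidden Markov model, without giving the model itself. As a hedged sketch only (the state labels follow the abstract, but every probability below is invented for illustration, not estimated from the paper's data), a minimal two-state HMM over clickstream observations might look like:

```python
# Illustrative two-state HMM for the stages "eliminate alternatives" -> "choose".
# States: 0 = eliminate, 1 = choose. Observations: 0 = broad search action,
# 1 = narrow comparison action. All probabilities are made-up sketch values.

start = [0.9, 0.1]            # sessions are assumed to begin by eliminating
trans = [[0.7, 0.3],          # eliminate -> eliminate / choose
         [0.0, 1.0]]          # choose is absorbing in this sketch
emit = [[0.8, 0.2],           # eliminate mostly emits broad actions
        [0.1, 0.9]]           # choose mostly emits narrow actions

def forward(obs):
    """Likelihood of an observation sequence (forward algorithm)."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[q] * trans[q][s] for q in range(2)) * emit[s][o]
                 for s in range(2)]
    return sum(alpha)
```

Under these sketch parameters, a broad-then-narrow click sequence is more likely than a narrow-then-broad one, which is the qualitative eliminate-then-choose pattern the paper tests; the actual study estimates such parameters from search covariates rather than fixing them by hand.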

Details

European Journal of Marketing, vol. 56 no. 8
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 1 May 2006

Alan Dawson and Val Hamilton

Abstract

Purpose

This paper aims to show how information in digital collections that have been catalogued using high‐quality metadata can be retrieved more easily by users of search engines such as Google.

Design/methodology/approach

The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines.

Findings

The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high‐quality metadata.

Originality/value

The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public‐sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.

Details

Journal of Documentation, vol. 62 no. 3
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 6 August 2018

Yanwu Yang, Xin Li, Daniel Zeng and Bernard J. Jansen

Abstract

Purpose

The purpose of this paper is to model group advertising decisions, which are the collective decisions of every single advertiser within the set of advertisers who are competing in the same auction or vertical industry, and examine resulting market outcomes, via a proposed simulation framework named Experimental Platform for Search Engine Advertising (EXP-SEA) supporting experimental studies of collective behaviors in the context of search engine advertising.

Design/methodology/approach

The authors implement the EXP-SEA to validate the proposed simulation framework, and also conduct three experimental studies on the aggregate impact of electronic word-of-mouth (eWOM), the competition level and strategic bidding behaviors. EXP-SEA supports heterogeneous participants, various auction mechanisms, and ranking and pricing algorithms.

Findings

Findings from the three experiments show that: both the market profit and advertising indexes, such as the number of impressions and number of clicks, are larger when the eWOM effect is present, meaning social media has a measurable effect on search engine advertising outcomes; the competition level has a monotonically increasing effect on market performance, so search engines have an incentive to encourage both eWOM among search users and competition among advertisers; and, given the market-level effect of the percentage of advertisers employing a dynamic greedy bidding strategy, there is a cut-off point for strategic bidding behaviors.

Originality/value

This is one of the first research works to explore collective group decisions and resulting phenomena in the complex context of search engine advertising via developing and validating a simulation framework that supports assessments of various advertising strategies and estimations of the impact of mechanisms on the search market.
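The abstract above mentions that EXP-SEA supports various auction, ranking and pricing mechanisms but does not specify them. As a hedged sketch only (this is a standard generalized second-price mechanism commonly used in search advertising, not the EXP-SEA implementation, and the bids are illustrative), the core allocate-and-price step of such a simulation could look like:

```python
# Minimal generalized second-price (GSP) auction sketch: advertisers are
# ranked by bid, and each slot winner pays the bid of the advertiser ranked
# one place lower. Bids and slot counts are illustrative.

def gsp_allocate(bids, slots):
    """Return a list of (advertiser, price) pairs, one per filled slot.

    bids: dict mapping advertiser -> bid amount.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        advertiser, _ = ranked[i]
        # Price is the next-highest bid, or 0 if no lower-ranked bidder exists.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((advertiser, price))
    return results

# "b" wins slot 1 paying a's bid; "a" wins slot 2 paying c's bid.
print(gsp_allocate({"a": 3.0, "b": 5.0, "c": 1.0}, slots=2))
```

A framework like EXP-SEA would run many such auctions per round while varying bidding strategies and market conditions, which is how collective outcomes such as the bidding-strategy cut-off point can be observed.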

Details

Internet Research, vol. 28 no. 4
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 22 November 2011

A. Hossein Farajpahlou and Faeze Tabatabai

Abstract

Purpose

The aim of this paper is to examine the indexing quality and ranking of XML content objects containing Dublin Core and MARC 21 metadata elements in dynamic online information environments by general search engines such as Google and Yahoo!

Design/methodology/approach

In total, 100 XML content objects were divided into two groups: those with DCXML elements and those with MARCXML elements. Both groups were published on the web site www.marcdcmi.ir in late July 2009 and were online until June 2010. The web site was introduced to Google and Yahoo! search engines. The indexing quality of metadata elements embedded in the content objects in a dynamic online information environment and their indexing and ranking capabilities were compared and examined.

Findings

The Google search engine was able to retrieve all the content objects in full through their Dublin Core and MARC 21 metadata elements; the Yahoo! search engine, however, did not respond at all. Results of the study showed that all Dublin Core and MARC 21 metadata elements were indexed by the Google search engine. No difference was observed between the indexing quality and ranking of DCXML metadata elements and those of MARCXML. The results of the study revealed that neither the XML‐based Dublin Core Metadata Initiative nor MARC 21 confers any preference regarding access in dynamic online information environments through the Google search engine.

Practical implications

The findings can provide useful information for search engine designers.

Originality/value

The present study was conducted for the first time in dynamic environments using XML‐based metadata elements. It can provide grounds for further studies of this kind.

Details

Aslib Proceedings, vol. 63 no. 6
Type: Research Article
ISSN: 0001-253X
