Search results

1 – 10 of over 3000
Article
Publication date: 9 August 2011

Aurélie Gandour and Amanda Regolini

Abstract

Purpose

Search Engine Optimization (SEO) is a set of techniques used by websites in order to be better indexed by search engines. This paper aims to focus upon “white hat”, “in page” SEO: techniques to improve a site's content, thereby making it more attractive to human visitors as well as search engines, by making changes within the site's pages while focusing on chosen themes and keywords. The final goal is for the site to be ranked better by one or several targeted search engines and therefore to appear higher in their results lists for specified queries. This paper seeks to describe the steps one must take to reach such a goal, focusing on the example of the website Fragfornet.
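
As a concrete illustration of what such in-page changes target, the sketch below audits a page for chosen keywords in the title, meta description and headings. It is a hypothetical Python example, not the tooling used for Fragfornet.

```python
# Hypothetical "white hat", in-page SEO audit: check whether the chosen
# keywords appear in the on-page elements worth optimizing (title, meta
# description, headings). Illustrative only, not the authors' actual tooling.
from html.parser import HTMLParser

class InPageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self._tag = None
        self.title = ""
        self.meta_description = ""
        self.headings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content") or ""
        elif tag in ("title", "h1", "h2"):
            self._tag = tag

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        if self._tag == "title":
            self.title += data
        elif self._tag in ("h1", "h2"):
            self.headings.append(data.strip())

def audit(html: str, keywords: list[str]) -> dict:
    """Report which target keywords appear in the optimizable elements."""
    parser = InPageAudit()
    parser.feed(html)
    text = " ".join([parser.title, parser.meta_description, *parser.headings]).lower()
    return {kw: kw.lower() in text for kw in keywords}

page = "<html><head><title>Forest fragmentation research</title></head><body><h1>Fragfornet</h1></body></html>"
print(audit(page, ["forest fragmentation", "biodiversity"]))
# {'forest fragmentation': True, 'biodiversity': False}
```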

Design/methodology/approach

Fragfornet's web pages were generated through a “website factory” that allows the creation of dynamic websites on demand for the employees of Cemagref. The paper explains the steps to take to optimize for search engines any website built with Zope Plone; even more broadly, the general recommendations described can be applied to any website to gain more visibility on search engines. After a literature review on search engine optimization, the paper describes the methods used to optimize the website before presenting the results, which were obtained quickly.

Findings

The first effects of the SEO campaign appeared quickly. One week later, as soon as the Googlebots had crawled the site and stored a newer version of it in their databases, the site immediately rose in the results pages for queries concerning forest fragmentation. This paper describes some of the parameters that were monitored and some of the conclusions drawn from them.

Originality/value

This paper's goal is to explain which steps to take to optimize for search engines any website built through the Cemagref website factory, or any website using Zope Plone. Even more broadly, the general recommendations described in this paper can be used by any librarian on any website to gain more visibility on search engines.

Article
Publication date: 20 June 2016

Sungin Lee, Wonhong Jang, Eunsol Lee and Sam G. Oh

Abstract

Purpose

The purpose of this paper is to examine the effect of search engine optimization (SEO) techniques applied to the web (http://lg-sl.net) and mobile (http://m.lg-sl.net) Science Land content and services at LG Sangnam Library in Korea, and to identify the core techniques among them.

Design/methodology/approach

In accordance with three major SEO guidelines, ten SEO techniques were identified and applied, and their implications were examined in three areas: improved search engine accessibility, increased relevance between site content and search engine keywords, and improved site credibility. The effects were quantitatively analyzed in terms of registered search engine keywords and the influx of visits via search engines.
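
The influx measurement described above can be pictured with a small sketch: classify each visit by the referrer recorded in the server access log and count those arriving from search engines. The log handling and engine list below are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch (assumed, not the study's analysis code) of counting the
# influx of visits via search engines from HTTP referrers in an access log.
from urllib.parse import urlparse
from collections import Counter

SEARCH_ENGINE_HOSTS = ("google.", "naver.", "daum.", "bing.")  # assumed list

def classify_referrer(referrer: str) -> str:
    """Label a visit as search-engine, other-referral, or direct."""
    host = urlparse(referrer).netloc.lower()
    if any(marker in host for marker in SEARCH_ENGINE_HOSTS):
        return "search engine"
    return "other" if host else "direct"

def influx(referrers: list[str]) -> Counter:
    return Counter(classify_referrer(r) for r in referrers)

print(influx([
    "https://www.google.com/search?q=science+land",
    "https://search.naver.com/search.naver?query=lg+sangnam",
    "",
]))
# Counter({'search engine': 2, 'direct': 1})
```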

Findings

This study shows that SEO techniques help increase the exposure of the library services and the number of visitors through search engines.

Practical implications

SEO techniques have been applied by a few non-Korean information service organizations, but they are not yet a well-accepted practice in Korean libraries, even though the dominant search engines in Korea have published their own SEO guidelines. Prior to this study, no significant endeavors had been undertaken in the context of Korean library services to adopt SEO techniques to boost the exposure of library services and increase user traffic.

Originality/value

This is the first published study that has applied optimized SEO techniques to Korean web and mobile library services, in order to demonstrate the usefulness of the techniques for maximized exposure of library content.

Details

Library Hi Tech, vol. 34 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 March 2013

Daniel Onaifo and Diane Rasmussen

Abstract

Purpose

The aim of this paper is to examine the phenomenon of search engine optimization (SEO) as a mechanism for improving libraries' digital content findability on the web.

Design/methodology/approach

The study applies web analytics tools, such as Alexa.com, to collect data about Canadian libraries' visibility performance in search engine results rankings. Concepts from the Integrated IS&R Research Framework are applied to analyze SEO as an element within the Framework.

Findings

The results show that certain website characteristics do have an effect on how well libraries' websites are ranked by search engines. Notably, the reputation of a library's website and the number of its webpages indexed by search engines increase both its ranking on search engine results pages (SERPs) and the findability of its digital content.

Originality/value

Most of the existing work on SEO has been confined to popular literature, outside of scholarly academic research in library and information science. Only a few studies focus on libraries' application of SEO, and no known study has applied an empirical approach to examining the website characteristics relevant to libraries' visibility performance on SERPs. This study identified several website characteristics that can be optimized for higher SERP rankings. It also analyzed the impact of external links, as well as of the number of webpages indexed by search engines, on higher SERP rankings.

Article
Publication date: 8 December 2020

Sebastian Schultheiß and Dirk Lewandowski

Abstract

Purpose

In commercial web search engine results rankings, four stakeholder groups are involved: search engine providers, users, content providers and search engine optimizers. Search engine optimization (SEO) is a multi-billion-dollar industry and responsible for making content visible through search engines. Despite this importance, little is known about its role in the interaction of the stakeholder groups.

Design/methodology/approach

We conducted expert interviews with 15 German search engine optimizers and content providers, the latter represented by content managers and online journalists. The interviewees were asked about their perspectives on SEO and how they assess the views of users about SEO.

Findings

SEO was considered necessary for content providers to ensure visibility, which is why dependencies between both stakeholder groups have evolved. Despite its importance, SEO was seen as largely unknown to users. Therefore, it is assumed that users cannot realistically assess the impact SEO has and that user opinions about SEO depend heavily on their knowledge of the topic.

Originality/value

This study investigated search engine optimization from the perspective of those involved in the optimization business: content providers, online journalists and search engine optimization professionals. The study therefore contributes to a more nuanced view of, and a deeper understanding of, the SEO domain.

Article
Publication date: 2 August 2013

Lourdes Moreno and Paloma Martinez

Abstract

Purpose

The purpose of this paper is to show that the pursuit of a high search engine relevance ranking for a webpage is not necessarily incompatible with the pursuit of web accessibility.

Design/methodology/approach

The research described arose from an investigation into the observed phenomenon that pages from accessible websites regularly appear near the top of search engine (such as Google) results, without any deliberate effort having been made to achieve this through the application of search engine optimization (SEO) techniques. The reasons for this phenomenon appear to lie in the numerous similarities and overlapping characteristics between SEO factors and web accessibility guidelines. Context is provided through a review of sources, including accessibility standards and relevant SEO studies, and the relationship between SEO and web accessibility is described. The particular overlapping factors between the two are identified, and the precise nature of the overlaps is explained in greater detail.

Findings

The available literature provides firm evidence that the overlapping factors not only serve to ensure the accessibility of a website for all users, but are also useful for the optimization of the website's search engine ranking. The research demonstrates that any SEO project undertaken should include, as a prerequisite, the proper design of accessible web content, inasmuch as search engines will interpret the web accessibility achieved as an indicator of quality and will be able to better access and index the resulting web content.
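
One way to picture the overlap is a checker that inspects a page for factors serving both audiences, such as image alt text, a non-empty title and a single top-level heading. The checks below are illustrative assumptions, not the paper's methodology.

```python
# Illustrative sketch (assumed, not the paper's method): count page features
# that benefit accessibility and SEO alike.
from html.parser import HTMLParser

class OverlapCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = 0
        self.images_with_alt = 0
        self.h1_count = 0
        self.has_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img":
            self.images += 1
            if attrs.get("alt"):  # alt text aids screen readers and indexing alike
                self.images_with_alt += 1
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "title":
            self.has_title = True

checker = OverlapCheck()
checker.feed('<title>Report</title><h1>Results</h1><img src="a.png" alt="Chart of rankings">')
print(checker.images_with_alt, "of", checker.images, "images have alt text;",
      "title present:", checker.has_title, "; h1 count:", checker.h1_count)
```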

Originality/value

The present study indicates how developing websites with high visibility in search engine results also makes their content more accessible.

Details

Online Information Review, vol. 37 no. 4
Type: Research Article
ISSN: 1468-4527

Book part
Publication date: 26 November 2020

Beyza Gultekin and Sabri Erdem

Abstract

This study explores the importance of application search engine (ASE) technology in the omni-channel strategy. For this purpose, the chapter first explains the concepts of the omni-channel and search engines and their importance. Then, the omni-channel is discussed in the framework of ASEs. Finally, recommendations for further research are presented.

Details

Managing Customer Experiences in an Omnichannel World: Melody of Online and Offline Environments in the Customer Journey
Type: Book
ISBN: 978-1-80043-389-2

Article
Publication date: 11 April 2016

Cheng-Jye Luh, Sheng-An Yang and Ting-Li Dean Huang

Abstract

Purpose

The purpose of this paper is to estimate Google search engine’s ranking function from a search engine optimization (SEO) perspective.

Design/methodology/approach

The paper proposes an estimation function that defines the query match score of a search result as the weighted sum of scores from a limited set of factors. The search results for a query are re-ranked according to their query match scores, and effectiveness is measured by comparing the new ranks with the original ranks of the search results.
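
A toy version of this estimation approach might look as follows: each result's query match score is a weighted sum over a few factor scores, the results are re-ranked by that score, and the new ordering is compared with the original one, here via Spearman's rank correlation. The factor names and weights are illustrative, not the paper's fitted values.

```python
# Toy sketch of a weighted-sum ranking estimation; weights are assumptions.

def query_match_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Query match score = weighted sum of per-factor scores."""
    return sum(weights[name] * value for name, value in factors.items())

def rank_agreement(original: list[str], estimated: list[str]) -> float:
    """Spearman's rank correlation between two orderings of the same items."""
    n = len(original)
    pos = {doc: i for i, doc in enumerate(estimated)}
    d2 = sum((i - pos[doc]) ** 2 for i, doc in enumerate(original))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

weights = {"pagerank": 0.5, "title": 0.25, "snippet": 0.15, "url": 0.10}  # assumed
results = {  # per-result factor scores, e.g. keyword presence in each field
    "doc_a": {"pagerank": 0.9, "title": 1.0, "snippet": 0.0, "url": 1.0},
    "doc_b": {"pagerank": 0.7, "title": 0.0, "snippet": 1.0, "url": 0.0},
    "doc_c": {"pagerank": 0.4, "title": 1.0, "snippet": 1.0, "url": 0.0},
}
estimated = sorted(results, key=lambda d: query_match_score(results[d], weights), reverse=True)
print(estimated, rank_agreement(["doc_a", "doc_b", "doc_c"], estimated))
# ['doc_a', 'doc_c', 'doc_b'] 0.5
```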

Findings

The proposed method achieved the best SEO effectiveness when using the top 20 search results for a query. The empirical results reveal that PageRank (PR) is the dominant factor in Google's ranking function. The page title is the second most important factor, and the snippet and the URL have roughly equal importance, with variations among queries.

Research limitations/implications

This study considered a limited set of ranking factors. The empirical results reveal that SEO effectiveness can be assessed by a simple estimation of ranking function even when the ranks of the new and original result sets are quite dissimilar.

Practical implications

The findings indicate that web marketers should pay particular attention to a webpage's PR, and then place keywords in the URL, the page title and the snippet.

Originality/value

There have been ongoing concerns about how to formulate a simple strategy that can help a website rank higher in search engines. This study provides web marketers with much-needed empirical evidence about a simple way to foresee the ranking success of an SEO effort.

Details

Online Information Review, vol. 40 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 May 2006

Alan Dawson and Val Hamilton

Abstract

Purpose

This paper aims to show how information in digital collections that have been catalogued using high‐quality metadata can be retrieved more easily by users of search engines such as Google.

Design/methodology/approach

The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines.

Findings

The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high‐quality metadata.

Originality/value

The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public‐sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
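
A minimal sketch of the kind of metadata optimisation this involves, assuming a hypothetical catalogue record, is to surface the record's fields in the HTML elements that search engines index:

```python
# Hypothetical illustration (not the authors' scripts): expose catalogue
# metadata in the HTML head so search engines can index it.
record = {  # assumed record structure for illustration
    "title": "Sample digital collection item",
    "description": "Digitised primary sources with curated descriptions.",
    "subject": "digital libraries; metadata; findability",
}

def to_html_head(rec: dict) -> str:
    """Render the record's fields as an indexable <title> and meta tags."""
    lines = [f"<title>{rec['title']}</title>"]
    for name in ("description", "subject"):
        lines.append(f'<meta name="{name}" content="{rec[name]}">')
    return "\n".join(lines)

print(to_html_head(record))
```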

Details

Journal of Documentation, vol. 62 no. 3
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 1 February 2016

Mhamed Zineddine

Abstract

Purpose

The purpose of this paper is to decrease the traffic created by search engines’ crawlers and solve the deep web problem using an innovative approach.

Design/methodology/approach

A new algorithm was formulated, based on the best existing algorithms, to optimize the traffic caused by web crawlers, which accounts for approximately 40 percent of all network traffic. The crux of this approach is that web servers monitor and log changes and communicate them as an XML file to search engines. The XML file includes the information necessary to generate refreshed pages from existing ones and to reference new pages that need to be crawled. Furthermore, the XML file is compressed to decrease its size to the minimum required.
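
Schematically, and under assumptions (the element names below are invented; the abstract does not give a schema), the server-side step might look like this:

```python
# Schematic sketch of the change-notification idea: the server logs page
# changes, serializes them to an XML file for crawlers, and compresses it.
# Element names and fields are assumptions for illustration.
import gzip
import xml.etree.ElementTree as ET

changes = [  # what the server's change monitoring would have logged
    {"url": "/news/today.html", "action": "modified", "diff": "@@ hypothetical diff @@"},
    {"url": "/reports/2016.html", "action": "new"},
]

root = ET.Element("changes", domain="example.org")
for change in changes:
    page = ET.SubElement(root, "page", url=change["url"], action=change["action"])
    if "diff" in change:
        # enough information to regenerate the refreshed page from the old copy
        ET.SubElement(page, "diff").text = change["diff"]

payload = ET.tostring(root, encoding="utf-8", xml_declaration=True)
with gzip.open("changes.xml.gz", "wb") as fh:  # compressed to minimize traffic
    fh.write(payload)
print(len(payload), "bytes before compression")
```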

Findings

The results of this study show that the traffic caused by search engines' crawlers might be reduced on average by 84 percent for text content. However, binary content faces many challenges, and new algorithms have to be developed to overcome these issues. The proposed approach will certainly mitigate the deep web issue. The XML files for each domain used by search engines might also be used by web browsers to refresh their caches, and therefore help reduce the traffic generated by normal users; this reduces users' perceived latency and improves response time to HTTP requests.

Research limitations/implications

The study sheds light on the deficiencies and weaknesses of the algorithms monitoring changes and generating binary files. However, a substantial decrease of traffic is achieved for text-based web content.

Practical implications

The findings of this research can be adopted by developers of web server software and browsers, and by search engine companies, to reduce the internet traffic caused by crawlers and cut costs.

Originality/value

The exponential growth of web content and of other internet-based services such as cloud computing and social networks has been causing contention on the available bandwidth of the internet. This research provides a much-needed approach to keeping traffic in check.

Details

Internet Research, vol. 26 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 21 September 2012

Jorge Martinez‐Gil and José F. Aldana‐Montes

Abstract

Purpose

Semantic similarity measures are very important in many computer-related fields. Previous work on applications such as data integration, query expansion, tag refactoring and text clustering has made use of semantic similarity measures. Despite their usefulness in these applications, the problem of measuring the similarity between two text expressions remains a key challenge. This paper aims to address this issue.

Design/methodology/approach

In this article, the authors propose an optimization environment to improve existing techniques that use the notion of co‐occurrence and the information available on the web to measure similarity between terms.
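
As a hedged sketch of the general technique being optimized, the code below computes a WebJaccard-style co-occurrence similarity from page counts and averages it over several engines. The page_count function is a stub standing in for real search API calls, and the plain average stands in for the paper's optimized combination.

```python
# Sketch of web-based co-occurrence similarity combined across engines.
# page_count is a stub; real code would query each engine's search API.

def page_count(engine: str, query: str) -> int:
    """Stub standing in for a search API call returning the hit count."""
    fake_counts = {  # fabricated numbers purely to make the sketch runnable
        ("E1", "car"): 1_000_000, ("E1", "automobile"): 400_000,
        ("E1", "car automobile"): 250_000,
        ("E2", "car"): 800_000, ("E2", "automobile"): 300_000,
        ("E2", "car automobile"): 180_000,
    }
    return fake_counts.get((engine, query), 0)

def web_jaccard(engine: str, p: str, q: str) -> float:
    """WebJaccard similarity from single- and co-occurrence page counts."""
    cp, cq = page_count(engine, p), page_count(engine, q)
    cpq = page_count(engine, f"{p} {q}")  # co-occurrence count
    return cpq / (cp + cq - cpq) if cp + cq - cpq else 0.0

def combined_similarity(p: str, q: str, engines=("E1", "E2")) -> float:
    # The "smart combination" here is a plain average; the paper's
    # contribution is precisely to optimize this combination.
    return sum(web_jaccard(e, p, q) for e in engines) / len(engines)

print(round(combined_similarity("car", "automobile"), 3))  # 0.207
```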

Findings

The experimental results using the Miller and Charles and Gracia and Mena benchmark datasets show that the proposed approach is able to outperform classic probabilistic web‐based algorithms by a wide margin.

Originality/value

This paper presents two main contributions. The authors propose a novel technique that beats classic probabilistic techniques for measuring semantic similarity between terms. This new technique consists of using not a single search engine to compute web page counts, but a smart combination of several popular web search engines. The approach is evaluated on the Miller and Charles and Gracia and Mena benchmark datasets and compared with existing probabilistic web extraction techniques.

Details

Online Information Review, vol. 36 no. 5
Type: Research Article
ISSN: 1468-4527
