Search results
21 – 30 of over 14,000
Aurélie Gandour and Amanda Regolini
Abstract
Purpose
Search Engine Optimization (SEO) is a set of techniques used by websites in order to be better indexed by search engines. This paper aims to focus upon “white hat”, “in page” SEO: techniques to improve a site's content, thereby making it more attractive to human visitors as well as search engines, by making changes within the site's pages while focusing on chosen themes and keywords. The final goal is for the site to be better ranked by one or several targeted search engines and therefore to appear higher in their results lists for specified requests. This paper seeks to describe the steps one must take to reach such a goal, using the website Fragfornet as an example.
Design/methodology/approach
Fragfornet's web pages were generated through a “website factory” allowing the creation of dynamic websites on demand for the employees of Cemagref. The paper explains the steps to take to optimize for search engines any website using Zope Plone; even more broadly, the general recommendations described can be used by any website to gain more visibility on search engines. After a literature review of search engine optimization, the paper describes the methods used to optimize the website before presenting the results, which were obtained quickly.
Findings
It was not long before the first effects of the SEO campaign were felt: one week later, as soon as the Googlebots had crawled the site and stored a newer version of it in their databases, it went up in the results pages for requests concerning forest fragmentation. This paper describes some of the parameters that were monitored and some of the conclusions drawn from them.
Originality/value
This paper's goal is to explain which steps to take to optimize for search engines any website created through the Cemagref website factory, or any website using Zope Plone. Even more broadly, the general recommendations described in this paper can be used by any librarian on any website to gain more visibility on search engines.
Maryam Tavosi and Nader Naghshineh
Abstract
Purpose
This study aims to present a comparative study of university library websites (in the USA) from the standpoint of “Google SEO” and “Accessibility”. Furthermore, a correlation analysis between the two was performed.
Design/methodology/approach
By opting for a webometric approach, the present study analyzed university library websites in the USA. The Lighthouse add-on for the Google Chrome browser was used as the data collection tool, automated by writing and running a program in the Bash language (May 2020). The data analysis tools used were LibreOffice Calc, SPSS 22 and Excel.
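The study drove Lighthouse from a Bash script; the downstream step of reading category scores out of its reports can be sketched in Python. The snippet below parses the JSON report the Lighthouse CLI emits (e.g. `lighthouse <url> --only-categories=seo,accessibility --output=json`), pulling the `categories.*.score` values (0 to 1) that Lighthouse documents. The sample report here is illustrative, not real audit data.

```python
import json

def extract_scores(report_json: str) -> dict:
    """Pull category scores (0.0-1.0) from a Lighthouse JSON report."""
    report = json.loads(report_json)
    return {name: cat["score"] for name, cat in report["categories"].items()}

# Illustrative stand-in for the JSON written by the Lighthouse CLI
sample_report = json.dumps({
    "categories": {
        "seo": {"score": 0.92},
        "accessibility": {"score": 0.81},
    }
})

scores = extract_scores(sample_report)
print(scores)  # SEO is reported out of 100 in the paper, i.e. 92 here
```

Looping this over a list of library homepages and writing the scores to a CSV reproduces, in outline, the kind of automated collection the study describes.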
Findings
Across all 81 university library websites in the USA, Google search engine optimization (SEO) scores were observed to be higher than 60 (total score = 100). The accessibility ranks obtained lay between 0.56 and 1 (total score = 1). A weak correlation between “SEO score” and “accessibility rank” (p-value = 0.02, Spearman correlation coefficient = 0.345) was observed. This weak relationship can be explained by the fact that several components affect Google's SEO score, only one of them being a high “accessibility rank”.
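The reported statistic is a Spearman rank correlation, i.e. a Pearson correlation computed on rank vectors. A minimal pure-Python sketch of that computation (the SEO and accessibility values below are invented toy data, not the study's measurements):

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of sorted positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

seo = [95, 60, 75, 88, 70]          # invented SEO scores
acc = [0.9, 0.6, 0.8, 0.95, 0.56]   # invented accessibility ranks
print(round(spearman(seo, acc), 3))  # 0.8 for this toy data
```

With 81 real websites one would obtain a coefficient in the same way; the p-value reported in the paper additionally requires a significance test, which a statistics package such as SPSS provides.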
Practical implications
Given the increasing automation of library processes, SEO tools can help libraries in achieving their digital marketing goals.
Originality/value
Accurate measurements of the Google SEO score and accessibility rank for university library websites in the USA were obtained with the Lighthouse add-on for the Google Chrome browser. Moreover, data extraction by a computer program, without direct human observation, is the innovation of this study.
Carlos Lopezosa, Dimitrios Giomelakis, Leyberson Pedrosa and Lluís Codina
Abstract
Purpose
This paper constitutes the first academic study to be made of Google Discover as applied to online journalism.
Design/methodology/approach
The study involved conducting 61 semi-structured interviews with experts representative of a range of professional profiles within the fields of journalism and search engine optimization (SEO) in Brazil, Spain and Greece. Based on the data collected, the authors created five semantic categories and compared the experts' perceptions in order to detect common response patterns.
Findings
The study's results confirm the existence of different degrees of convergence and divergence in the opinions expressed in these three countries regarding the main dimensions of Google Discover, including specific strategies using the feed, its impact on web traffic, its impact on both quality and sensationalist content, and the degree of responsibility shown by the digital media in its use. The authors also propose a set of best practices that journalists and digital media in-house web visibility teams should take into account to increase their probability of appearing in Google Discover. To this end, the authors consider strategies in the following areas of application: topics, different aspects of publication, elements of user experience, strategic analysis, and diffusion and marketing.
Originality/value
Although research exists on the application of SEO to different areas, there have not, to date, been any studies examining Google Discover.
Peer review
The peer-review history for this article is available at: https://publons.com/publon/10.1108/OIR-10-2022-0574
The purpose of this paper is to investigate the search behavior of institutional repository (IR) users in regard to subjects as a means of estimating the potential impact of…
Abstract
Purpose
The purpose of this paper is to investigate the search behavior of institutional repository (IR) users in regard to subjects as a means of estimating the potential impact of applying a controlled subject vocabulary to an IR.
Design/methodology/approach
Google Analytics data were used to record cases where users arrived at an IR item page from an external web search and subsequently downloaded content. Search queries were compared against the Faceted Application of Subject Terminology (FAST) schema to determine the topical nature of the queries. Queries were also compared against the item’s metadata values for title and subject using approximate string matching to determine the alignment of the queries with current metadata values.
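The approximate string matching step can be sketched with Python's standard library: `difflib.SequenceMatcher` yields a similarity ratio in [0, 1], and a threshold decides whether a logged query "matches" a vocabulary term or a metadata value. The FAST terms, query and threshold below are illustrative, not the study's actual data or cutoff.

```python
from difflib import SequenceMatcher

def best_match(query, terms, threshold=0.8):
    """Return (term, ratio) for the closest term, or None if below threshold."""
    scored = [(t, SequenceMatcher(None, query.lower(), t.lower()).ratio())
              for t in terms]
    term, ratio = max(scored, key=lambda pair: pair[1])
    return (term, ratio) if ratio >= threshold else None

# Hypothetical FAST subject headings and one user query from search logs
fast_terms = ["Climate change", "Climatology", "Library science"]
print(best_match("climate chnage", fast_terms))  # matches despite the typo
```

Running every logged query against both the FAST schema and the item's title/subject metadata, then comparing match rates, mirrors the comparison the study describes.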
Findings
A substantial portion of successful user search queries to an IR appear to be topical in nature. User search queries matched values from FAST at a higher rate than existing subject metadata. Increased attention to subject description in IR records may provide an opportunity to improve the search visibility of the content.
Research limitations/implications
The study is limited to a particular IR. Data from Google Analytics does not provide comprehensive search query data.
Originality/value
The study presents a novel method for analyzing user search behavior to assist IR managers in determining whether to invest in applying controlled subject vocabularies to IR content.
Sarah E. Crudge and Frances C. Johnson
Abstract
Purpose
The purpose of this research is to explore a method for the determination of users' representations of search engines, formed during their interaction with these systems. It determines the extent to which these elicited “mental models” indicate the system aspects of importance to the user and, from this, their evaluative view of these tools.
Design/methodology/approach
The repertory grid technique is used to elicit a set of constructs that define facets within the mental model of an individual. A related technique of laddering then considers each of the user's constructs to determine the reasons for its importance within the user's mental model.
Findings
The model derived from the qualitative data comprises three hierarchical strata and conveys the interrelations between basic system description, evaluative description, and the key evaluations of ease, efficiency, effort and effectiveness. Two additional layers relating to the perceived process and the experience of emotion are also discussed.
Research limitations/implications
Ten participants is considered the optimum for obtaining constructs in a repertory grid, but this limits the findings to the context of the user group and the systems used in this study.
Originality/value
The methodology has not previously been used to determine mental models of search engines and from these to understand users' evaluative view of systems. The resulting model of key evaluations with the conjunctions of procedural elements suggests a framework for further research to evaluate search engines from the user perspective.
David Nicholas, Paul Huntington, Peter Williams and Tom Dobrowolski
Abstract
Collating data from a number of log and questionnaire studies, conducted largely into the use of a range of consumer health digital information platforms, Centre for Information Behaviour and the Evaluation of Research (Ciber) researchers describe some new thoughts on characterising (and naming) information-seeking behaviour in the digital environment, and in so doing suggest a new typology of digital users. The characteristic behaviour found is one of “bouncing”, in which users seldom penetrate a site to any depth, tend to visit a number of sites for any given information need, and seldom return to sites they have visited. They tend to “feed” for information horizontally, and whether they search a site or not depends heavily on “digital visibility”, which in turn creates all the conditions for bouncing. The paper discusses whether this type of information seeking represents a form of “dumbing down or up”, and what it all means for publishers, librarians and information providers, who might be working on other, possibly outdated, usage paradigms.
A. Hossein Farajpahlou and Faeze Tabatabai
Abstract
Purpose
The aim of this paper is to examine the indexing quality and ranking of XML content objects containing Dublin Core and MARC 21 metadata elements in dynamic online information environments by general search engines such as Google and Yahoo!
Design/methodology/approach
In total, 100 XML content objects were divided into two groups: those with DCXML elements and those with MARCXML elements. Both groups were published on the web site www.marcdcmi.ir in late July 2009 and were online until June 2010. The web site was introduced to Google and Yahoo! search engines. The indexing quality of metadata elements embedded in the content objects in a dynamic online information environment and their indexing and ranking capabilities were compared and examined.
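The Dublin Core half of such a content object can be illustrated with a small DCXML fragment built via the standard library. The `dc:` namespace URI is the real Dublin Core Elements namespace; the record structure, title, creator and subject values are invented for illustration.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# Build a toy content object carrying Dublin Core elements
record = ET.Element("record")
for name, value in [("title", "Sample content object"),
                    ("creator", "Doe, Jane"),
                    ("subject", "Metadata indexing")]:
    el = ET.SubElement(record, f"{{{DC}}}{name}")
    el.text = value

dcxml = ET.tostring(record, encoding="unicode")
print(dcxml)
```

A MARCXML counterpart would carry the same bibliographic data in `datafield`/`subfield` elements under the MARC 21 slim namespace; publishing both variants side by side is what let the study compare how each was indexed.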
Findings
The Google search engine was able to retrieve all the content objects in full through their Dublin Core and MARC 21 metadata elements; the Yahoo! search engine, however, did not respond at all. Results of the study showed that all Dublin Core and MARC 21 metadata elements were indexed by the Google search engine. No difference was observed between the indexing quality and ranking of DCXML metadata elements and those of MARCXML. The results also revealed that neither the XML-based Dublin Core Metadata Initiative nor MARC 21 shows any preference regarding access in dynamic online information environments through the Google search engine.
Practical implications
The findings can provide useful information for search engine designers.
Originality/value
The present study was conducted for the first time in dynamic environments using XML‐based metadata elements. It can provide grounds for further studies of this kind.
This paper aims to introduce a case of search engine optimization (SEO), especially designed for a national scholarly open access information website in the field of STEM.
Abstract
Purpose
This paper aims to introduce a case of search engine optimization (SEO), especially designed for a national scholarly open access information website in the field of STEM.
Design/methodology/approach
Korea Institute of Science and Technology Information (KISTI) collaborated with the Google Scholar team to open and share the research outcomes of STEM in Korea worldwide. KoreaScience is a reference-linking platform for open access scientific and technical journals in Korea, operated by KISTI. KISTI worked with the Google Scholar team to embed machine-readable bibliographic metadata into its journal pages and to create an XML Sitemap to help Google find pages on KoreaScience.
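The sitemap half of that workflow is straightforward to sketch. The snippet builds a minimal sitemap following the sitemaps.org protocol (`urlset`/`url`/`loc`, with optional `lastmod`); the article URLs and dates are placeholders, not actual KoreaScience pages.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a sitemaps.org-style <urlset> document as a string."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

# Placeholder article pages, not real KoreaScience URLs
pages = [("https://example.org/article/1", "2020-01-15"),
         ("https://example.org/article/2", "2020-02-03")]
sitemap = build_sitemap(pages)
print(sitemap)
```

Generating one `<url>` entry per journal article page and submitting the resulting file to the search engine is what helps a crawler discover every page, which is the role the sitemap plays in the KoreaScience case.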
Findings
As a result of the implementation of metadata and the creation of an XML Sitemap, KoreaScience web pages have noticeably increased their relevance in search result lists on Google and Google Scholar. In addition, the KoreaScience platform has received an increasing amount of traffic from around the world.
Originality/value
Not much research has sought to understand SEO from the perspective of users and how it may be facilitated in “visible” academic web environments such as search systems and open access information systems. For this project, the motivation for investigating SEO comes from its association with positive outcomes ranging from personal benefits to global rewards, e.g. increased satisfaction in the search user experience and, further, academic progress and scientific development through sharing and accessing scientific knowledge in the fast-growing field of STEM.
Herbert Zuze and Melius Weideman
Abstract
Purpose
The purpose of this research project was to determine how the three biggest search engines interpret keyword stuffing as a negative design element.
Design/methodology/approach
This research was based on triangulation between scholarly reporting, search engine claims, SEO practitioner views and empirical evidence on the interpretation of keyword stuffing. Five websites with varying keyword densities were designed and submitted to Google, Yahoo! and Bing. Two phases of the experiment were carried out and the responses of the search engines were recorded.
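Keyword density, the quantity varied across the five test sites, is conventionally defined as keyword occurrences divided by total words in the body text, expressed as a percentage. A minimal sketch for a single-word keyword (the page text is invented):

```python
import re

def keyword_density(text, keyword):
    """Percentage of words in `text` equal to `keyword` (case-insensitive)."""
    words = re.findall(r"[A-Za-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

page = "cheap flights cheap hotels cheap deals book cheap flights today"
print(keyword_density(page, "cheap"))  # 4 of 10 words -> 40.0
```

Figures like the 97.3 per cent density mentioned in the findings come from pushing this ratio to an extreme, i.e. a body text consisting almost entirely of the target keyword.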
Findings
Scholars have indicated different views in respect of spamdexing, characterised by different keyword density measurements in the body text of a webpage. During both phases, almost all the test webpages, including the one with a 97.3 per cent keyword density, were indexed.
Research limitations/implications
Only the three biggest search engines were considered, and monitoring was done for a set time only. The claims that high keyword densities will lead to blacklisting have been refuted.
Originality/value
Websites should be designed with high quality, well‐written content. Even though keyword stuffing is unlikely to lead to search engine penalties, it could deter human visitors and reduce website value.
This paper aims to investigate the multiple language support features in internet search engines. The diversity of the internet is reflected not only in its users, information…
Abstract
Purpose
This paper aims to investigate the multiple language support features in internet search engines. The diversity of the internet is reflected not only in its users, information formats and information content, but also in the languages used. As more and more information becomes available in different languages, multiple language support in a search engine becomes more important.
Design/methodology/approach
The first step of this study is to conduct a survey about existing search engines and to identify search engines with multiple language support features. The second step is to analyse, compare, and characterise the multiple language support features in the selected search engines against the proposed five basic evaluation criteria after they are classified into three categories. Finally, the strengths and weaknesses of the multiple language support features in the selected search engines are discussed in detail.
Findings
The findings reveal that Google, EZ2Find and Onlinelink, respectively, are the search engines with the best multiple language support features in their categories. Although many search engines are equipped with multiple language support features, an indispensable translation feature is implemented in only a few of them. Multiple language support features in search engines remain at the lexical level.
Originality/value
The findings of the study will facilitate understanding of the current status of multiple language support in search engines, help users to effectively utilise multiple language support features in a search engine, and provide useful advice and suggestions for search engine researchers, designers and developers.