Search results
1 – 10 of 97
Abstract
Purpose
This study aims to explore the character and attainment of an effective URL system by expanding the concept of URL normalization, originally connected to machine-reading access of web pages, to form a broader understanding of URL systematization that includes user-focused cognitive and practical elements.
Design/methodology/approach
A revised understanding of URL normalization will be used to critically analyze the URLs of the main admissions pages of M1 universities, as designated by the Carnegie Foundation.
Findings
The study found that very few institutions implemented well-organized systems of Uniform Resource Locators (URLs) and redirects and that many included unintelligible and impractical URLs that would hinder the effective use of their websites.
Practical implications
A broader understanding of URL systematization will result in more effective website design. URLs must serve an indexical function pointing to a unique web resource, whatever the URL's format. However, URLs should also consider human usability issues and strive to be simple, short, communicable, intelligible and ultimately useful as part of social interactions. Poorly designed URLs create frustration, if not failure, by being difficult to use, confusing or interminable. An effective URL system should also include redirects to anticipate alternate, meaningful URLs that are different from the canonical path. The framework and recommendations arising from this study are applicable to many website structures.
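The machine-oriented sense of URL normalization that this broader framework builds on can be sketched as follows. This is an illustrative Python sketch under assumed rules (lowercased scheme and host, default ports and trailing slashes dropped, fragments discarded), not a procedure taken from the study:

```python
# Illustrative sketch of machine-level URL normalization (assumed rules:
# lowercase scheme and host, drop default ports, trim trailing slashes,
# discard fragments). Not the study's procedure.
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url):
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    # Keep an explicit port only when it is not the scheme's default.
    if parts.port and (scheme, parts.port) not in {("http", 80), ("https", 443)}:
        host = f"{host}:{parts.port}"
    # Paths are case-sensitive, so only trailing slashes are trimmed.
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((scheme, host, path, parts.query, ""))

# Two spellings of the same admissions page reduce to one canonical URL.
a = normalize_url("HTTP://Example.edu:80/admissions/")
b = normalize_url("http://example.edu/admissions")
```

Here `a` and `b` normalize to the same string, which is the indexical function the abstract describes; the human-usability criteria (short, communicable, intelligible URLs plus anticipatory redirects) sit on top of this mechanical layer.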
Originality/value
The expanded understanding of the concept of URL normalization and subsequent evaluation principles can be used to assess the overall coherence and completeness of the website in general, thus improving website usability.
Sungin Lee, Wonhong Jang, Eunsol Lee and Sam G. Oh
Abstract
Purpose
The purpose of this paper is to examine the effect of, and identify core techniques of, search engine optimization (SEO) techniques applied to the web (http://lg-sl.net) and mobile (http://m.lg-sl.net) Science Land content and services at LG Sangnam Library in Korea.
Design/methodology/approach
In accordance with three major SEO guidelines, ten SEO techniques were identified and applied, and their implications were assessed in three areas: improved search engine accessibility, increased relevance between site content and search engine keywords, and improved site credibility. The effects were quantitatively analyzed in terms of registered search engine keywords and the influx of visits via search engines.
Findings
This study shows that SEO techniques help increase the exposure of the library services and the number of visitors through search engines.
Practical implications
SEO techniques have been applied by a few non-Korean information service organizations, but they are not yet a well-accepted practice in Korean libraries, and the dominant search engines in Korea have published their own SEO guidelines. Prior to this study, no significant endeavors had been undertaken in the context of Korean library services to adopt SEO techniques to boost the exposure of library services and increase user traffic.
Originality/value
This is the first published study that has applied optimized SEO techniques to Korean web and mobile library services, in order to demonstrate the usefulness of the techniques for maximized exposure of library content.
Abstract
Purpose
Link analysis is an established topic within webometrics. It normally uses counts of links between sets of web sites or to sets of web sites. These link counts are derived from web crawlers or commercial search engines with the latter being the only alternative for some investigations. This paper compares link counts with URL citation counts in order to assess whether the latter could be a replacement for the former if the major search engines withdraw their advanced hyperlink search facilities.
Design/methodology/approach
URL citation counts are compared with link counts for a variety of data sets used in previous webometric studies.
Findings
The results show a high degree of correlation between the two but with URL citations being much less numerous, at least outside academia and business.
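The kind of comparison reported above can be sketched with a rank correlation between the two count types. The counts below are invented for illustration, and the study's own datasets and statistic may differ:

```python
# Sketch: comparing link counts with URL citation counts for a set of
# sites via Spearman rank correlation. The numbers are made up and the
# helper is a plain-Python illustration, not the study's analysis.
def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            # Group ties and assign them their average (1-based) rank.
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical counts for five sites: URL citations track links in rank
# order but are much less numerous, as the findings describe.
link_counts = [120, 45, 300, 10, 78]
url_citation_counts = [30, 12, 95, 2, 20]
rho = spearman(link_counts, url_citation_counts)
```

A high `rho` with systematically smaller citation counts is exactly the pattern the findings report: strong correlation, lower magnitude.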
Research limitations/implications
The results cover a small selection of 15 case studies and so the findings are only indicative. Significant differences between results indicate that the difference between link counts and URL citation counts will vary between webometric studies.
Practical implications
Should link searches be withdrawn, link analyses of less well-linked non‐academic, non‐commercial sites would be seriously weakened, although citations based on e‐mail addresses could help make citations more numerous than links in some business and academic contexts.
Originality/value
This is the first systematic study of the difference between link counts and URL citation counts in a variety of contexts and it shows that there are significant differences between the two.
Cheng-Jye Luh, Sheng-An Yang and Ting-Li Dean Huang
Abstract
Purpose
The purpose of this paper is to estimate Google search engine’s ranking function from a search engine optimization (SEO) perspective.
Design/methodology/approach
The paper proposed an estimation function that defines the query match score of a search result as the weighted sum of scores from a limited set of factors. The search results for a query are re-ranked according to the query match scores. The effectiveness was measured by comparing the new ranks with the original ranks of search results.
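The weighted-sum formulation can be sketched as follows. The factor names and weights here are illustrative assumptions, not the paper's estimated values:

```python
# Sketch of a weighted-sum query match score used to re-rank results.
# Factor names and weights are assumptions for illustration only.
def query_match_score(result, weights):
    # Score = sum over factors of weight * factor score (0..1).
    return sum(weights[f] * result.get(f, 0.0) for f in weights)

def rerank(results, weights):
    return sorted(results, key=lambda r: query_match_score(r, weights), reverse=True)

# Hypothetical weights echoing the findings: PageRank dominant, title
# second, snippet and URL roughly equal.
weights = {"pagerank": 0.5, "title": 0.25, "snippet": 0.125, "url": 0.125}
results = [
    {"name": "A", "pagerank": 0.2, "title": 1.0, "snippet": 0.0, "url": 1.0},
    {"name": "B", "pagerank": 0.9, "title": 0.0, "snippet": 0.5, "url": 0.0},
]
reranked = rerank(results, weights)
```

Comparing the re-ranked order with the original order, as the paper does, then gives a measure of how well the assumed factors explain the observed ranking.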
Findings
The proposed method achieved the best SEO effectiveness when using the top 20 search results for a query. The empirical results reveal that PageRank (PR) is the dominant factor in Google ranking function. The title follows as the second most important, and the snippet and the URL have roughly equal importance with variations among queries.
Research limitations/implications
This study considered a limited set of ranking factors. The empirical results reveal that SEO effectiveness can be assessed by a simple estimation of ranking function even when the ranks of the new and original result sets are quite dissimilar.
Practical implications
The findings indicate that web marketers should pay particular attention to a webpage's PR, and then place keywords in the URL, the page title, and the snippet.
Originality/value
There have been ongoing concerns about how to formulate a simple strategy that can help a website rank higher in search engines. This study provides web marketers with much-needed empirical evidence about a simple way to foresee the ranking success of an SEO effort.
Kinh Nguyen, Tharam S. Dillon and Erik Danielsen
Abstract
This article proposes the concept of a web client‐server event, together with its associated taxonomy, which yields a formal specification for such an event. The concept, in conjunction with the concept of an atomic use case (reviewed in the article), is then used as a key element of a model‐driven approach to web information system development. The outcome is a new method for web information systems development that reduces complex web‐based hypermedia navigation behaviour to a much simpler event‐driven behaviour. On the strength of that realized simplicity, the method provides (i) a set of platform‐independent models that completely characterizes the application, and (ii) a well‐defined process to map the combined model to any chosen platform‐dependent implementation.
Kenning Arlitsch and Patrick S. O'Brien
Abstract
Purpose
Google Scholar has difficulty indexing the contents of institutional repositories, and the authors hypothesize the reason is that most repositories use Dublin Core, which cannot express bibliographic citation information adequately for academic papers. Google Scholar makes specific recommendations for repositories, including the use of publishing industry metadata schemas over Dublin Core. This paper aims to test a theory that transforming metadata schemas in institutional repositories will lead to increased indexing by Google Scholar.
Design/methodology/approach
The authors conducted two surveys of institutional and disciplinary repositories across the USA, using different methodologies. They also conducted three pilot projects that transformed the metadata of a subset of papers from USpace, the University of Utah's institutional repository, and examined the results of Google Scholar's explicit harvests.
Findings
Repositories that use Google Scholar's recommended metadata schemas and express them in HTML meta tags experienced significantly higher indexing ratios. The ease with which search engine crawlers can navigate a repository also seems to affect the indexing ratio. The second and third metadata transformation pilot projects at Utah were successful, ultimately achieving an indexing ratio of greater than 90 percent.
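As a concrete illustration of the kind of transformation described above, the following sketch maps a minimal repository record to the `citation_*` HTML meta tags that Google Scholar's inclusion guidelines recommend over bare Dublin Core. The input field names are assumptions for illustration, not the authors' pipeline:

```python
# Sketch: expressing a repository record as "citation_*" meta tags
# (the Highwire-Press-style tags Google Scholar recommends). The input
# record's field names are illustrative assumptions.
from html import escape

def to_meta_tags(record):
    tags = []
    def tag(name, value):
        tags.append(f'<meta name="{name}" content="{escape(value)}">')
    tag("citation_title", record["title"])
    for author in record["authors"]:  # one tag per author, in order
        tag("citation_author", author)
    tag("citation_publication_date", record["date"])
    tag("citation_pdf_url", record["pdf_url"])
    return "\n".join(tags)

html_head = to_meta_tags({
    "title": "A Hypothetical Paper",
    "authors": ["J. Smith", "B. Lee"],
    "date": "2012/05/01",
    "pdf_url": "http://repository.example.edu/paper.pdf",
})
```

Emitting these tags in each item's HTML head is the mechanical core of the metadata transformation whose effect on indexing ratios the pilots measured.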
Research limitations/implications
The second survey is limited to 40 titles from each of seven repositories, for a total of 280 titles. A larger survey that covers more repositories may be useful.
Practical implications
Institutional repositories are achieving significant mass, and the rate of author citations from those repositories may affect university rankings. Lack of visibility in Google Scholar, however, will limit the ability of IRs to play a more significant role in those citation rates.
Social implications
Transforming metadata can be a difficult and tedious process. The Institute of Museum and Library Services has recently awarded a National Leadership Grant to the University of Utah to continue SEO research with its partner, OCLC Inc., and to develop a toolkit that will include automated transformation mechanisms.
Originality/value
Little or no research has been published about improving the indexing ratio of institutional repositories in Google Scholar. The authors believe that they are the first to address the possibility of transforming IR metadata to improve indexing ratios in Google Scholar.
John W. Fritch and Robert L. Cromwell
Abstract
This paper discusses the importance of ascribing cognitive authority to Internet information, provides basic evaluative criteria for ascribing authority, and describes technical tools for investigating authorship and conducting more advanced research. The proffered tools offer ways to investigate authorship and identity and can significantly contribute to the confidence with which a researcher can ascribe authority. Analyses of the output from technical tools directly reveal how these tools may be used to draw conclusions regarding authorship and identity. An overview of public‐key infrastructure (PKI) is provided as a possible solution to the problem of determining identity in a networked environment.
Marzieh Yari Zanganeh and Nadjla Hariri
Abstract
Purpose
The purpose of this paper is to identify the role of emotional aspects in information retrieval of PhD students from the web.
Design/methodology/approach
From a methodological perspective, the present study is experimental and applied in type. The study population is PhD students from various fields of science. The study sample consists of 50 students, selected by the stratified purposive sampling method. Data were gathered by recording users' facial expressions and log files with Morae software, as well as through pre-search and post-search questionnaires. Data analysis was performed by canonical correlation analysis.
Findings
The findings showed a significant relationship between emotional expressions and searchers' individual characteristics. Searchers' satisfaction with results, frequency of internet searching, search experience, interest in the search task and familiarity with similar searches were correlated with increased happiness. Examination of users' emotions during search performance showed that users experiencing happiness devoted more time to searching and viewing search results. Happy participants visited more internet addresses and issued more queries; by contrast, users experiencing anger and disgust made the fewest attempts to complete the search process.
Practical implications
The results imply that information retrieval systems on the web should be able to identify users' emotional expressions as a set of perceivable signs in human-computer interaction, in particular facial emotional states, during searching and information retrieval from the web.
Originality/value
The automatic identification of users' emotional expressions can add new dimensions to moderation and information retrieval systems on the web, and can pave the way for the design of emotional information retrieval systems that support successful retrieval for users of the network.
Erik Borra and Bernhard Rieder
Abstract
Purpose
The purpose of this paper is to introduce Digital Methods Initiative Twitter Capture and Analysis Toolset, a toolset for capturing and analyzing Twitter data. Instead of just presenting a technical paper detailing the system, however, the authors argue that the type of data used for, as well as the methods encoded in, computational systems have epistemological repercussions for research. The authors thus aim at situating the development of the toolset in relation to methodological debates in the social sciences and humanities.
Design/methodology/approach
The authors review the possibilities and limitations of existing approaches to capture and analyze Twitter data in order to address the various ways in which computational systems frame research. The authors then introduce the open-source toolset and put forward an approach that embraces methodological diversity and epistemological plurality.
Findings
The authors find that design decisions and more general methodological reasoning can and should go hand in hand when building tools for computational social science or digital humanities.
Practical implications
Besides methodological transparency, the software provides robust and reproducible data capture and analysis, and interlinks with existing analytical software. Epistemic plurality is emphasized by taking into account how Twitter structures information, by allowing for a number of different sampling techniques, by enabling a variety of analytical approaches or paradigms, and by facilitating work at the micro, meso, and macro levels.
Originality/value
The paper opens up critical debate by connecting tool design to fundamental interrogations of methodology and its repercussions for the production of knowledge. The design of the software is inspired by exchanges and debates with scholars from a variety of disciplines and the attempt to propose a flexible and extensible tool that accommodates a wide array of methodological approaches is directly motivated by the desire to keep computational work open for various epistemic sensibilities.