Search results

1–10 of over 90,000
Book part
Publication date: 14 December 2004

Mike Thelwall

Details

Link Analysis: An Information Science Approach
Type: Book
ISBN: 978-0-12-088553-4


Article
Publication date: 13 October 2021

Irem Önder and Adiyukh Berbekova

Abstract

Purpose

The purpose of this study is to understand the status quo of the use of Web analytics tools by European destination management organizations (DMOs) and to provide guidelines in using these metrics for business intelligence and tourism design. In addition, the goal is to improve destination management at the city level using Web analytics data.

Design/methodology/approach

In this exploratory study, the authors examine how European DMOs view Web analytics data through the lens of the “data to knowledge to results” framework. The use of Web analytics tools by DMOs is analyzed through the theory of affordances and the “data-to-knowledge” framework developed by Davenport et al., which incorporates several factors that contribute to successfully transforming the data available to an organization into knowledge, desirable results and, ultimately, an analytical capability.

Findings

The results show that European DMOs mainly use Web analytics data for website quality assurance, but that some are also using them to drive marketing programs. The study concludes by providing several suggestions for ways in which DMOs might optimize the use of Web analytics data, which will also improve the management of destinations.

Originality/value

Web analytics tools are used by many organizations, such as DMOs, to collect traffic data and to evaluate and optimize websites. However, these metrics can also be combined with other data, such as bednight numbers, and used for forecasting or other managerial decisions in destination management at the city level. Research that uses Web analytics data for business intelligence in the tourism industry is scarce, and this study aims to fill that gap.

Details

International Journal of Tourism Cities, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2056-5607

Keywords
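
The abstract above proposes combining Web analytics metrics with other destination data such as bednight numbers. A minimal sketch of one such business-intelligence step, correlating monthly website sessions with bednights, might look as follows; all figures and the single-city setup are illustrative, not taken from the study.

```python
# Correlate monthly website sessions with bednights for one destination.
# The series below are invented for illustration.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical monthly website sessions and bednights for one city
sessions = [12000, 15000, 21000, 30000, 28000, 18000]
bednights = [8000, 9500, 14000, 21000, 19500, 12000]

r = pearson(sessions, bednights)
```

A high correlation on such series is what would motivate using web traffic as a leading indicator in forecasting.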

Article
Publication date: 22 September 2021

Helena Francke

Abstract

Purpose

Institutional and commercial web profiles that provide biobibliographic information about researchers are used for promotional purposes but also as information sources. In the latter case, the profiles' (re)presentations of researchers may be used to assess whether a researcher can be trusted. The article introduces a conceptual framework of how trust in researchers may be formed based on how the researchers' experiences and achievements are mobilized on the profiles to tell a multifaceted story of the “self.”

Design/methodology/approach

The framework is an analytical product which draws on theories of trust as well as on previous research focused on academic web profiles and on researchers' perceptions of trust and credibility. Two dimensions of trust are identified as central to the theoretical construction of trust, namely competence and trustworthiness.

Findings

The framework outlines features of profile content and narrative that may influence the assessment of the profile and of the researcher's competence and trustworthiness. The assessment is understood as shaped by the frames of interpretation available to a particular audience.

Originality/value

The framework addresses the lack of a trust perspective in previous research about academic web profiles. It provides an analysis of how potential trust in the researcher may be formed on the profiles. An innovative contribution is the acknowledgement of both qualitative and quantitative indicators of trustworthiness and competence, including the richness of the story told about the “self.”

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 4 October 2021

Moisés Rockembach and Anabela Serrano

Abstract

Purpose

The purpose of this investigation is to analyze information on the web and its preservation as digital heritage, taking as its object of study information about events related to climate change and the environment in Portugal and Brazil, thus contributing an applied case of web preservation in the Ibero-American context.

Design/methodology/approach

It is a theoretical and applied investigation and the methodology uses mixed methods, collecting and analyzing quantitative and qualitative data, from three data sources: the Internet Archive and public collections of Archive-it, the Portuguese web archive and a complementation from collections formed by the research group on web archiving and digital preservation in Brazil.

Findings

Web archiving initiatives started in 1996; over the years, however, collections have become more specialized, moving from nationally relevant themes to thematic niches. The theme of climate change shaped scientific and mainstream discussions in the 2000s, and in the 2010s it became a focus of digital preservation of web content, as demonstrated in this study. Failing to preserve this data can lead to its rapid loss owing to the ephemerality of the web.

Originality/value

The originality of this paper lies in showing the relevance of preserving web content on climate change, in demonstrating what climate-related information on the web is currently preserved and in identifying what information still needs to be preserved.

Details

Records Management Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0956-5698

Keywords
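
The study above draws on the Internet Archive and Archive-it collections as data sources. A sketch of how archived snapshots of a climate-related page might be enumerated via the Internet Archive's public CDX API is shown below; the query is only built, not sent, and the target domain is an illustrative assumption.

```python
# Build (but do not send) a Wayback Machine CDX query listing captures of
# a page between two years. Parameter names follow the public CDX API;
# the target URL here is an invented example.

from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query(url, year_from, year_to, limit=50):
    """Return a CDX query URL for captures of `url` between two years."""
    params = {
        "url": url,
        "from": str(year_from),
        "to": str(year_to),
        "output": "json",
        "limit": str(limit),
    }
    return CDX_ENDPOINT + "?" + urlencode(params)

query = cdx_query("example-environment-agency.pt", 2010, 2020)
```

Fetching that URL with any HTTP client would return a JSON list of capture timestamps, which is the raw material for the kind of coverage analysis the abstract describes.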

Article
Publication date: 13 September 2021

Manik Chandra and Rajdeep Niyogi

Abstract

Purpose

This paper aims to solve the web service selection problem using an efficient meta-heuristic algorithm. The problem of selecting a set of web services from a large-scale service environment (a web service repository) while maintaining quality of service (QoS) is referred to as web service selection (WSS). With the explosive growth of internet services, managing and selecting suitable web services has become a pertinent research issue.

Design/methodology/approach

In this paper, to address the WSS problem, the authors propose a new modified fruit fly optimization approach, called orthogonal array-based learning in fruit fly optimizer (OL-FOA). In OL-FOA, the authors adopt a chaotic map to initialize the population, add the adaptive DE/best/2 mutation operator to improve the exploration capability of the fruit fly approach and, finally, use the orthogonal learning mechanism to improve the efficiency of the search process by reducing the search space.

Findings

To test the efficiency of the proposed approach, a test suite of 2,500 web services is chosen from a public repository. To establish its competitiveness, the proposed approach is compared against four other meta-heuristic approaches (both classical and state-of-the-art), namely, fruit fly optimization (FOA), differential evolution (DE), modified artificial bee colony algorithm (mABC) and global-best ABC (GABC). The empirical results show that the proposed approach outperforms its counterparts in terms of response time, latency, availability and reliability.

Originality/value

In this paper, the authors have developed a novel population-based approach (OL-FOA) for QoS-aware web service selection. To validate the results, the approach is compared against four other meta-heuristic approaches (both classical and state-of-the-art), namely, FOA, DE, mABC and GABC, on four QoS parameters: response time, latency, availability and reliability. The proposed approach outperforms all competing approaches. To satisfy all objectives simultaneously, the authors plan to extend this approach to a multi-objective WSS optimization problem.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Keywords
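
Two of the components the OL-FOA abstract names, chaotic population initialization and the DE/best/2 mutation operator, can be sketched generically as below. This is a textbook illustration of those two operators, not the authors' code; the bounds, dimensions, scaling factor and sphere fitness are illustrative assumptions.

```python
# Generic sketch: logistic-map (chaotic) initialization and DE/best/2
# mutation, as named in the OL-FOA abstract. Not the authors' implementation.

import random

def chaotic_population(size, dim, low, high, seed=0.7):
    """Initialize a population using the logistic map c <- 4c(1 - c)."""
    pop, c = [], seed
    for _ in range(size):
        ind = []
        for _ in range(dim):
            c = 4.0 * c * (1.0 - c)          # chaotic sequence in (0, 1)
            ind.append(low + c * (high - low))
        pop.append(ind)
    return pop

def de_best_2(pop, best, f=0.5):
    """DE/best/2 mutant vector: best + F*(r1 - r2) + F*(r3 - r4)."""
    r1, r2, r3, r4 = random.sample(pop, 4)
    return [b + f * (a - c) + f * (d - e)
            for b, a, c, d, e in zip(best, r1, r2, r3, r4)]

pop = chaotic_population(size=10, dim=3, low=-5.0, high=5.0)
best = min(pop, key=lambda ind: sum(x * x for x in ind))  # sphere fitness
mutant = de_best_2(pop, best)
```

In a full optimizer these two steps would sit inside the fruit fly smell/vision search loop, with the orthogonal learning step further pruning the candidate combinations.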

Article
Publication date: 18 August 2021

Maria Giovanna Confetto and Claudia Covucci

Abstract

Purpose

For companies that intend to respond to modern conscious consumers' needs, a great competitive advantage rests on the ability to incorporate sustainability messages into marketing communications. The aim of this paper is to address this important priority in the web context by building a semantic algorithm that allows content managers to evaluate the quality of sustainability web content for search engines, taking into account current semantic web development.

Design/methodology/approach

Following the Design Science (DS) methodological approach, the study develops the algorithm as an artefact capable of solving a practical problem and improving the content management process.

Findings

The algorithm considers multiple factors of evaluation, grouped in three parameters: completeness, clarity and consistency. An applicability test of the algorithm was conducted on a sample of web pages of the Google blog on sustainability to highlight the correspondence between the established evaluation factors and those actually used by Google.

Practical implications

Studying content marketing for sustainability communication constitutes a new field of research that offers exciting opportunities. Writing sustainability content effectively is a fundamental step in triggering stakeholder engagement mechanisms online. In the hands of marketers, it could be a positive social engineering technique that enables web users to pursue sustainable development in their choices.

Originality/value

This is the first study to create a theoretical connection between digital content marketing and sustainability communication, focusing especially on aspects of search engine optimization (SEO). The “Sustainability-contents SEO” algorithm is the first operational software tool, with a regulatory nature, able to analyse web content, detect the terms of the sustainability language and measure compliance with SEO requirements.

Details

The TQM Journal, vol. 33 no. 7
Type: Research Article
ISSN: 1754-2731

Keywords
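
The findings above group the evaluation factors into completeness, clarity and consistency. A heavily simplified sketch of how such a three-parameter content score might be computed is shown below; the vocabulary, weights and scoring rules are invented for illustration and are not the authors' actual “Sustainability-contents SEO” algorithm.

```python
# Toy scorer for the three parameters named in the abstract:
# completeness (vocabulary coverage), clarity (crude readability proxy)
# and consistency (title/body agreement). All rules are invented.

SUSTAINABILITY_TERMS = {"sustainability", "renewable", "emissions", "recycling"}

def score_content(title, body):
    """Return a 0..1 quality score averaging three toy sub-scores."""
    words = body.lower().split()
    found = SUSTAINABILITY_TERMS & set(words)
    completeness = len(found) / len(SUSTAINABILITY_TERMS)   # vocab coverage
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    clarity = 1.0 if avg_len < 8 else 0.5                   # crude readability
    consistency = 1.0 if any(t in title.lower() for t in found) else 0.0
    return round((completeness + clarity + consistency) / 3, 2)

s = score_content(
    "Our renewable energy plan",
    "We cut emissions with renewable power and recycling programs",
)
```

A real implementation would replace each sub-score with the semantic analysis the paper describes, but the aggregation-of-parameters shape would be similar.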

Article
Publication date: 3 August 2021

Irvin Dongo, Yudith Cardinale, Ana Aguilera, Fabiola Martinez, Yuni Quintero, German Robayo and David Cabeza

Abstract

Purpose

This paper aims to perform an exhaustive review of relevant and recent related studies, which reveals that both extraction methods (the Twitter API and Web scraping) are currently used to analyze credibility on Twitter. Thus, there is clear evidence of the need for different options to extract different data for this purpose. Nevertheless, none of these studies performs a comparative evaluation of both extraction techniques. The authors therefore extend a previous comparison, which uses a recently developed framework that offers both alternatives of data extraction and implements a previously proposed credibility model, by adding a qualitative evaluation and a Twitter Application Programming Interface (API) performance analysis from different locations.

Design/methodology/approach

As one of the most popular social platforms, Twitter has been the focus of recent research aimed at analyzing the credibility of the information shared on it. To do so, several proposals use either the Twitter API or Web scraping to extract the data needed for the analysis. Qualitative and quantitative evaluations are performed to discover the advantages and disadvantages of both extraction methods.

Findings

The study demonstrates the differences between the two extraction methods in terms of accuracy and efficiency and highlights further problems in this area that must be addressed to pursue true transparency and legitimacy of information on the Web.

Originality/value

Results report that some Twitter attributes cannot be retrieved by Web scraping. Both methods produce identical credibility values when a robust normalization process is applied to the text (i.e. the tweet). Concerning time performance, Web scraping is faster than the Twitter API and more flexible in terms of obtaining data; however, it is very sensitive to website changes. Additionally, the response time of the Twitter API is proportional to the distance from its central server in San Francisco.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Keywords
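
The findings above note that API- and scraping-extracted tweets yield identical credibility values once a robust text normalization is applied. The exact rules used by the authors' framework are not given here, so the sketch below uses plausible, assumed rules (lowercasing, stripping URLs and mentions, collapsing whitespace) with invented example strings.

```python
# Sketch of a tweet-text normalization step under which API-extracted and
# scraped variants of the same tweet converge. Rules are assumptions.

import re

def normalize_tweet(text):
    """Lowercase, strip URLs and @mentions, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)   # drop links
    text = re.sub(r"@\w+", "", text)           # drop mentions
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text

# The API variant keeps the shortened URL; the scraped variant omits it.
api_text = "Check this out https://t.co/abc @user  Great news!"
scraped_text = "check this out  @user great news!"

same = normalize_tweet(api_text) == normalize_tweet(scraped_text)
```

After normalization both variants reduce to the same string, so any downstream text-based credibility score computed from them is identical, which is the behaviour the findings describe.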

Article
Publication date: 1 December 2005

Jeanie M. Welch

Downloads
2035

Abstract

Purpose

This paper aims to discuss the viability of web server statistics for library‐generated web pages as measures of public service activity. For years librarians have gathered, reported, and analyzed traditional measures such as reference transactions, patron visits, book and reserve item circulation, and interlibrary loan transactions. Since the advent of web‐based databases and services, some traditional usage statistics have declined. Such declines can have political and financial implications for libraries.

Design/methodology/approach

The author did a literature review, studied a suggested revision to the NISO Z39.7‐1995 Library Statistics standard that includes counting usage of library‐generated web pages, participated in a task force on web statistics, and analyzed library web site statistics at a university library.

Findings

The recommendations of a task force on reporting web page usage statistics in an academic library are discussed. The reporting of the usage of library‐generated web pages can be a useful indicator of increased patron contacts and provide a more complete picture of public service activities.

Research limitations/implications

This is a new area for library statistics, and its impact on the perceptions of libraries as sources of information in the digital age has yet to be proven.

Originality/value

This paper is useful to libraries which wish to integrate web page usage statistics into their output measures and reporting procedures.

Details

Reference Services Review, vol. 33 no. 4
Type: Research Article
ISSN: 0090-7324

Keywords
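
The paper above treats usage counts for library-generated web pages as a public service measure. A minimal sketch of deriving such counts from a web server log in Common Log Format follows; the `/library/` path prefix and the log lines are invented examples.

```python
# Count successful page views for library-generated pages from Common Log
# Format lines. Paths and log entries below are invented examples.

import re

LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" (\d{3})')

def count_page_views(log_lines, prefix="/library/"):
    """Count 2xx requests for pages under `prefix`, keyed by path."""
    counts = {}
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group(2).startswith("2") and m.group(1).startswith(prefix):
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

log = [
    '1.2.3.4 - - [01/Dec/2005:10:00:00] "GET /library/guides.html HTTP/1.0" 200 512',
    '1.2.3.5 - - [01/Dec/2005:10:01:00] "GET /library/guides.html HTTP/1.0" 200 512',
    '1.2.3.6 - - [01/Dec/2005:10:02:00] "GET /other/page.html HTTP/1.0" 404 0',
]

views = count_page_views(log)
```

Filtering to successful responses and library paths is what turns raw server hits into the patron-contact indicator the paper discusses; a production setup would also filter out crawler traffic.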

Article
Publication date: 1 September 2005

Ross Yates

Downloads
4187

Abstract

Purpose

The purpose of this paper is to explore both accessibility and usability and examine the inhibitors and methods to evaluate site accessibility. Design techniques which improve end‐user access and site interactivity, demonstrated by practical examples, are also studied.

Design/methodology/approach

Assesses various web sites for accessibility and usability.

Findings

Criteria by which to assess the accessibility and usability of web sites are determined.

Originality/value

Disability is an important consideration in the development of contemporary web sites. By understanding the needs of all users, not only those with disabilities, organisations may begin the process of advancing both accessibility and usability and integrating these elements into their web development strategies.

Details

Campus-Wide Information Systems, vol. 22 no. 4
Type: Research Article
ISSN: 1065-0741

Keywords
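
The paper above examines methods for evaluating site accessibility. One common automated check of the kind it surveys, flagging `<img>` elements without alt text (WCAG success criterion 1.1.1), can be sketched with the standard library; the HTML snippet is an invented example.

```python
# Flag <img> tags with a missing or empty alt attribute, one basic
# automated accessibility check. The sample HTML is invented.

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect src values of <img> tags lacking alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "?"))

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="Library logo"><img src="banner.png">')
```

Automated checks like this catch only a subset of accessibility problems; as the paper argues, they complement rather than replace evaluation with real users.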
