Search results

1 – 10 of over 2000
Article
Publication date: 3 August 2021

Irvin Dongo, Yudith Cardinale, Ana Aguilera, Fabiola Martinez, Yuni Quintero, German Robayo and David Cabeza

Abstract

Purpose

This paper aims to perform an exhaustive review of relevant and recent related studies, which reveals that both extraction methods are currently used to analyze credibility on Twitter. Thus, there is clear evidence of the need for different options to extract different data for this purpose. Nevertheless, none of these studies performs a comparative evaluation of both extraction techniques. Moreover, the authors extend a previous comparison, which uses a recently developed framework that offers both data extraction alternatives and implements a previously proposed credibility model, by adding a qualitative evaluation and a Twitter Application Programming Interface (API) performance analysis from different locations.

Design/methodology/approach

As one of the most popular social platforms, Twitter has been the focus of recent research aimed at analyzing the credibility of the shared information. To do so, several proposals use either Twitter API or Web scraping to extract the data to perform the analysis. Qualitative and quantitative evaluations are performed to discover the advantages and disadvantages of both extraction methods.

Findings

The study demonstrates the differences between both extraction methods in terms of accuracy and efficiency, and highlights further problems in this area that must be addressed to pursue true transparency and legitimacy of information on the Web.

Originality/value

Results report that some Twitter attributes cannot be retrieved by Web scraping. Both methods produce identical credibility values when a robust normalization process is applied to the text (i.e. the tweet). Moreover, concerning time performance, Web scraping is faster than the Twitter API and is more flexible in terms of obtaining data; however, Web scraping is very sensitive to website changes. Additionally, the response time of the Twitter API is proportional to the distance from its central server in San Francisco.
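
To make the two extraction routes concrete, the minimal sketch below contrasts an API request with a Web-scraping request for the same tweet. It is illustrative only and is not the framework used by the authors; the endpoint follows Twitter API v2 conventions, while the bearer token, tweet ID, page URL and CSS selector are assumptions.

```python
import requests
from bs4 import BeautifulSoup

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # assumption: a valid Twitter API v2 token
TWEET_ID = "1234567890"             # hypothetical tweet ID

def fetch_via_api(tweet_id: str) -> dict:
    """Retrieve tweet text and public metrics through the Twitter API v2."""
    resp = requests.get(
        f"https://api.twitter.com/2/tweets/{tweet_id}",
        params={"tweet.fields": "text,public_metrics,created_at"},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]

def fetch_via_scraping(tweet_url: str) -> str:
    """Retrieve the tweet text by parsing the page's HTML (selector is an assumption)."""
    # In practice the page is rendered with JavaScript, so a headless browser
    # may be required; this shows only the parsing step on fetched markup.
    html = requests.get(tweet_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    node = soup.select_one('div[data-testid="tweetText"]')
    return node.get_text(" ", strip=True) if node else ""
```

The scraping path breaks whenever the page markup changes, which is exactly the fragility the abstract reports; the API path depends on authentication and on the fields the provider chooses to expose.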

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 22 January 2018

Richard Manly Adams Jr

Abstract

Purpose

The purpose of this paper is to argue that academic librarians must learn to use web service APIs and to introduce APIs to a non-technical audience.

Design/methodology/approach

This paper is a viewpoint that argues for the importance of APIs by identifying the shifting paradigms of libraries in the digital age. Arguing that the primary function of librarians will be to share and curate digital content, the paper shows that APIs empower librarians to do exactly that.

Findings

The implementation of web service APIs is within the reach of librarians who are not trained as software developers. Online documentation and free courses offer sufficient training for librarians to learn these new ways of sharing and curating digital content.
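
As an illustration of how approachable such APIs can be, the short sketch below queries a public bibliographic Web service and prints basic metadata. It is a minimal example, not drawn from the paper; it uses the public Open Library books API as a stand-in for whatever service a library might target, and the response fields shown are assumptions.

```python
import requests

def lookup_isbn(isbn: str) -> dict:
    """Fetch descriptive data for an ISBN from the Open Library books API."""
    resp = requests.get(
        "https://openlibrary.org/api/books",
        params={"bibkeys": f"ISBN:{isbn}", "format": "json", "jscmd": "data"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get(f"ISBN:{isbn}", {})

if __name__ == "__main__":
    record = lookup_isbn("9780140328721")  # any ISBN of interest
    print(record.get("title"), "-", [a["name"] for a in record.get("authors", [])])
```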

Research limitations/implications

The argument of this paper depends upon an assumption of a paradigm shift in libraries away from collections of materials toward access points for information. The need for librarians to learn APIs depends upon a new role for librarians that anecdotal evidence suggests is emerging.

Practical implications

By learning a few technical skills, librarians can help patrons find relevant information within a world of proliferating information sources.

Originality/value

The literature on APIs is highly technical and overwhelming for those without training in software development. This paper translates technical language for those who have not programmed before.

Details

Library Hi Tech, vol. 36 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 18 August 2022

Muhammad Sajid Nawaz, Saif Ur Rehman Khan, Shahid Hussain and Javed Iqbal

Abstract

Purpose

This study aims to identify developers' objectives, current state-of-the-art techniques, challenges and performance evaluation metrics, and presents the outline of a knowledge-based application programming interface (API) recommendation system for developers. Moreover, the current study intends to classify the current state-of-the-art techniques that support automated API recommendation.

Design/methodology/approach

In this study, the authors performed a systematic literature review of studies published between 2004 and 2021 to achieve the targeted research objective. Subsequently, the authors analyzed 35 primary studies.

Findings

The outcomes of this study are: (1) devising a thematic taxonomy based on the identified developers' challenges, where mashup-oriented APIs and time-consuming processes are the challenges developers encounter most frequently; (2) categorizing current state-of-the-art API recommendation techniques (i.e. clustering techniques, data preprocessing techniques, similarity measurement techniques and ranking techniques); (3) designing a taxonomy based on the identified objectives, where accuracy is the most targeted objective in the API recommendation context; (4) identifying a list of evaluation metrics employed to assess the performance of the proposed techniques; (5) performing a SWOT analysis on the selected studies; (6) based on the developers' challenges, objectives and SWOT analysis, presenting the outline of a recommendation system for developers; and (7) delineating several future research dimensions in the API recommendation context.
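
To illustrate the similarity-measurement and ranking families of techniques named in outcome (2), the sketch below ranks candidate APIs against a developer query using TF-IDF vectors and cosine similarity. It is a generic example, not the recommendation system outlined by the authors, and the API catalogue and query are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical API descriptions; a real catalogue would be far larger.
apis = {
    "MapTiles": "render interactive maps and tile layers for the web",
    "GeoCoder": "convert street addresses into latitude and longitude",
    "PayGate":  "process credit card payments and refunds",
}

def recommend(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    """Rank candidate APIs by cosine similarity between TF-IDF vectors."""
    names = list(apis)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([apis[n] for n in names] + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

print(recommend("show a map for a street address"))
```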

Research limitations/implications

This study provides complete guidance to new researchers in the context of API recommendation. Researchers can target the identified objectives (accuracy, response time, method recommendation, compatibility, user requirement-based APIs, automatic service recommendation and API location) in future work. Moreover, developers can overcome the identified challenges (including mashup-oriented APIs, time-consuming processes, learning how to use an API, integration problems, API method usage location and limited code usage) by proposing a framework or recommendation system. Furthermore, the classification of current state-of-the-art API recommendation techniques also helps researchers who wish to work on API recommendation in the future.

Practical implications

This study facilitates not only researchers but also practitioners in several ways. It guides developers in minimizing development time by selecting relevant APIs rather than following traditional manual selection, and it facilitates integrating APIs into a project. Thus, the recommendation system saves time for developers and increases their productivity.

Originality/value

API recommendation remains an active area of research in web- and mobile-based application development. The authors believe that this study acts as a useful tool for interested researchers and practitioners, as it contributes to the body of knowledge in the API recommendation context.

Details

Library Hi Tech, vol. 41 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 9 March 2015

Ruben Verborgh, Seth van Hooland, Aaron Straup Cope, Sebastian Chan, Erik Mannens and Rik Van de Walle

Abstract

Purpose

The purpose of this paper is to revisit the Representational State Transfer (REST) architectural style a decade after its conception and to analyze its relevance for addressing current challenges in the Library and Information Science (LIS) discipline.

Design/methodology/approach

Conceptual aspects of REST are reviewed and a generic architecture to support REST is presented. The relevance of the architecture is demonstrated with the help of a case study based on the collection registration database of the Cooper-Hewitt National Design Museum.

Findings

The authors argue that the “resources and representations” model of REST is a sustainable way to manage web resources in a context of constant technological evolution.

Practical implications

When making information resources available on the web, a resource-oriented publishing model can avoid the costs associated with the creation of multiple interfaces.
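
The sketch below illustrates that resource-oriented idea: a single resource URL serving multiple representations through content negotiation. It is a minimal sketch and not the paper's architecture or the Cooper-Hewitt implementation; Flask is used only as a convenient stand-in, and the object route, fields and HTML rendering are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical collection records; a real deployment would query the registration database.
OBJECTS = {"18704235": {"title": "Sample Poster", "medium": "Lithograph", "year": 1938}}

@app.route("/objects/<object_id>")
def object_resource(object_id):
    """One resource (URL) serving several representations via content negotiation."""
    record = OBJECTS.get(object_id)
    if record is None:
        return jsonify(error="unknown object"), 404
    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
    if best == "text/html":
        return f"<h1>{record['title']}</h1><p>{record['medium']}, {record['year']}</p>"
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```

Because both representations hang off the same URL, adding a new client (or a new format) does not require a new interface, which is the cost saving the paper points to.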

Originality/value

This paper re-examines the conceptual merits of REST and translates the architecture into actionable recommendations for institutions that publish resources.

Details

Journal of Documentation, vol. 71 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 4 April 2016

Hiroki Takatsuka, Seiki Tokunaga, Sachio Saiki, Shinsuke Matsumoto and Masahide Nakamura

Abstract

Purpose

The purpose of this paper is to develop a facade that allows locating services to be used seamlessly and applications with indoor and outdoor location information to be developed easily, without awareness of the differences between individual services. To achieve this purpose, a unified locating service, called KULOCS (Kobe-University Unified LOCating Service), which horizontally integrates the heterogeneous locating services, is proposed.

Design/methodology/approach

By focusing on the technology-independent elements [when], [where] and [who] in location queries, KULOCS integrates the data and operations of the existing locating services. For data integration, a method is proposed in which time representations, locations and namespaces are consolidated using Unix time, location labels and an alias table, respectively. Based on the possible combinations of the three elements, an application-neutral application programming interface (API) for operation integration is derived.
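
The sketch below illustrates the facade idea under stated assumptions: the adapter interface, the query signature built from [who], [where] and [when], and the alias table are all hypothetical and are not the actual KULOCS API.

```python
import time
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class LocationRecord:
    who: str    # canonical identifier after alias resolution
    where: str  # consolidated location label
    when: int   # Unix time in seconds

class LocatingService(Protocol):
    """Minimal adapter interface each underlying locating service must implement."""
    def query(self, who: Optional[str], where: Optional[str],
              since: Optional[int]) -> list[LocationRecord]: ...

class UnifiedLocatingFacade:
    """Hypothetical facade that hides which concrete service answers a query."""

    def __init__(self, services: list[LocatingService], aliases: dict[str, str]):
        self.services = services
        self.aliases = aliases  # alias table: per-service names -> canonical names

    def locate(self, who: Optional[str] = None, where: Optional[str] = None,
               since: Optional[int] = None) -> list[LocationRecord]:
        canonical = self.aliases.get(who, who) if who else None
        results: list[LocationRecord] = []
        for service in self.services:  # fan out to indoor and outdoor services alike
            results.extend(service.query(canonical, where, since))
        return sorted(results, key=lambda r: r.when)  # consolidated on Unix time

if __name__ == "__main__":
    facade = UnifiedLocatingFacade(services=[], aliases={"alice_wifi": "alice"})
    print(facade.locate(who="alice_wifi", since=int(time.time()) - 300))
```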

Findings

KULOCS enables various practical services. In addition, the experimental evaluation shows its practical feasibility by comparing cases with and without KULOCS. The results show that KULOCS reduces application development effort, especially when the number of locating services becomes large.

Originality/value

KULOCS works as a seamless facade over the underlying locating services; users and applications consume location information easily and efficiently, without knowing which concrete services actually locate the target objects.

Details

International Journal of Pervasive Computing and Communications, vol. 12 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 19 May 2021

Evagelos Varthis, Marios Poulos, Ilias Giarenis and Sozon Papavlasopoulos

Abstract

Purpose

This study aims to provide a system capable of static searching over a large number of unstructured texts directly on the Web while keeping costs to a minimum. The proposed framework is applied to the unstructured texts of Migne’s Patrologia Graeca (PG) collection, using PG as an implementation example of the method.

Design/methodology/approach

The unstructured texts of PG are automatically transformed into a read-only Not Only SQL (NoSQL) database with a structure identical to that of a representational state transfer (REST) access point interface. The transformation makes it possible to execute queries and retrieve ranked results based on a specialized application of the extended Boolean model.
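
For readers unfamiliar with the extended Boolean model mentioned above, the sketch below shows the standard p-norm scoring formulas for OR and AND queries over normalized term weights. It is the textbook model, not the authors' specialized variant, and the weights and p value are purely illustrative.

```python
def or_score(weights: list[float], p: float = 2.0) -> float:
    """p-norm OR: a document matching any high-weight term scores high."""
    return (sum(w ** p for w in weights) / len(weights)) ** (1 / p)

def and_score(weights: list[float], p: float = 2.0) -> float:
    """p-norm AND: a document must match all terms well to score high."""
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / len(weights)) ** (1 / p)

# Illustrative normalized term weights for one text fragment and a two-term query.
weights = [0.9, 0.4]
print(f"OR score:  {or_score(weights):.3f}")   # rewarded by the strong match on term 1
print(f"AND score: {and_score(weights):.3f}")  # penalized by the weak match on term 2
```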

Findings

Using a purpose-built, Web-browser-based search tool, the user can quickly locate ranked relevant text fragments and navigate back and forth between them. The user can search using the initial part of words and can ignore the diacritics of the Greek language. The performance of the search system is comparatively examined when different versions of the Hypertext Transfer Protocol (HTTP) are used, for various network latencies and modes of network connection. Queries using HTTP/2 have by far the best performance compared to any of the HTTP/1.1 modes.
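
A comparison of that kind can be reproduced roughly with the sketch below, which times the same query URL over HTTP/1.1 and HTTP/2 using the httpx client (the HTTP/2 support requires installing the optional h2 extra). The endpoint is a placeholder, and this is only a rough illustration, not the authors' measurement setup.

```python
import time
import httpx

URL = "https://example.org/search?q=logos"  # placeholder query endpoint

def timed_get(http2: bool, repeats: int = 20) -> float:
    """Average wall-clock time for `repeats` GETs over one client connection."""
    with httpx.Client(http2=http2) as client:
        start = time.perf_counter()
        for _ in range(repeats):
            client.get(URL)
        return (time.perf_counter() - start) / repeats

print(f"HTTP/1.1 mean: {timed_get(http2=False):.4f} s")
print(f"HTTP/2   mean: {timed_get(http2=True):.4f} s")
```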

Originality/value

The system is not limited to the case study of PG and has generic application in the field of the humanities. The system can be expanded with semantic enrichment by taking into account synonyms and topics, if available. Its main advantage is that it is totally static, which implies important features such as simplicity, efficiency, fast response, portability, security and scalability.

Details

International Journal of Web Information Systems, vol. 17 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 18 October 2019

Dan Lou

Abstract

Purpose

The purpose of this paper is to identify a light and scalable augmented reality (AR) solution to enhance library collections.

Design/methodology/approach

The author first conducted research to identify the major obstacle to creating a scalable AR solution. Next, she explored possible workaround methods and successfully developed two prototypes that make current Web-based AR work with ISBN barcodes.

Findings

Libraries have adopted AR technology in recent years mainly by developing mobile applications for specific education or navigation programs. Yet a straightforward AR solution to enhance a library's collection has not been seen. One of the obstacles lies in finding a scalable and painless way to associate special AR objects with physical books. At the title level, books already have a unique identifier: the ISBN. Unfortunately, for technical reasons, marker-based AR technology only accepts two-dimensional (2-D) objects as markers, not the one-dimensional (1-D) EAN barcode (or ISBN barcode) used by books. In this paper, the author shares her development of two prototypes that make Web-based AR work with the ISBN barcode. With the prototypes, a user can simply scan the ISBN barcode on a book to retrieve related AR content.
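
To make the barcode-to-AR linkage concrete, the sketch below decodes an EAN-13 (ISBN) barcode from a photo with pyzbar (which requires the zbar shared library) and maps it to an AR asset URL. It is only a rough illustration of the idea, not either of the author's prototypes, and the asset table and image path are hypothetical.

```python
from typing import Optional

from PIL import Image
from pyzbar.pyzbar import decode

# Hypothetical mapping from ISBN-13 to an AR asset (e.g., an overlay page or 3-D model).
AR_ASSETS = {"9780140328721": "https://example.org/ar/overlays/9780140328721.html"}

def isbn_from_image(path: str) -> Optional[str]:
    """Decode the first EAN-13 barcode found in the image and return it as a string."""
    for symbol in decode(Image.open(path)):
        if symbol.type == "EAN13":
            return symbol.data.decode("ascii")
    return None

if __name__ == "__main__":
    isbn = isbn_from_image("book_back_cover.jpg")  # hypothetical photo of the barcode
    print(AR_ASSETS.get(isbn, "no AR content registered for this book"))
```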

Research limitations/implications

This paper mainly researched and experimented with Web-based AR technologies in an attempt to identify a solution that is as platform-neutral and user-friendly as possible.

Practical implications

The light and platform-neutral AR prototypes discussed in this paper have the benefit of minimal cost on both the development side and the experience side. A library does not need to put any additional marker on any book to implement the AR. A user does not need to install any additional application on his or her smartphone to experience the AR. The prototypes show a promising future in which physical collections inside libraries can become more interactive and attractive by blurring the line between reality and virtuality.

Social implications

The paper can help initiate the discussion on applying Web-based AR technologies to library collections.

Article
Publication date: 7 November 2016

Devis Bianchini, Valeria De Antonellis and Michele Melchiori

Abstract

Purpose

Modern enterprise Web application development can exploit third-party software components, both internal and external to the enterprise, that provide access to huge and valuable data sets, are tested by millions of users and are often available as Web application programming interfaces (APIs). In this context, developers have to select the right data services and might rely, for this purpose, on advanced techniques based on functional and non-functional data service descriptive features. This paper focuses on this selection task, where data service selection may be difficult because the developer has no control over the services and the reputation of sources may be only partially known.

Design/methodology/approach

The proposed framework and methodology provide advanced search and ranking techniques by considering: lightweight data service descriptions, in terms of (semantic) tags and technical aspects; previously developed aggregations of data services, so that past experience with a service in similar applications can inform its selection; and social relationships between developers (a social network) together with their credibility evaluations. This paper also discusses some experimental results and plans for further experiments to assess how developers feel when using the approach.

Findings

In this paper, a data service selection framework that extends and specializes an existing framework for Web API selection is presented. The revised multi-layered model for data services is discussed, and metrics relying on it, meant to support the selection of data services in the context of Web application design, are introduced. The model and metrics take into account the network of social relationships between developers, exploiting it to estimate the importance that a developer assigns to other developers' experience.
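
The sketch below illustrates the general flavour of such a ranking: tag similarity blended with a reputation score in which each rating is weighted by the rater's credibility. The service names, ratings, credibility values and the blending weight are assumptions; these are not the authors' metrics or model.

```python
def tag_similarity(query_tags: set[str], service_tags: set[str]) -> float:
    """Jaccard overlap between the tags the developer asks for and the service's tags."""
    if not query_tags or not service_tags:
        return 0.0
    return len(query_tags & service_tags) / len(query_tags | service_tags)

def social_score(ratings: list[tuple[float, float]]) -> float:
    """Average past ratings of a service, weighted by each rater's credibility (0..1)."""
    total_weight = sum(cred for cred, _ in ratings)
    if total_weight == 0:
        return 0.0
    return sum(cred * rating for cred, rating in ratings) / total_weight

def rank_services(query_tags, candidates, alpha=0.6):
    """Blend tag similarity and socially weighted reputation; alpha is an assumed weight."""
    scored = []
    for name, tags, ratings in candidates:
        score = alpha * tag_similarity(query_tags, tags) + (1 - alpha) * social_score(ratings)
        scored.append((name, round(score, 3)))
    return sorted(scored, key=lambda p: p[1], reverse=True)

candidates = [
    ("WeatherData", {"weather", "forecast", "json"}, [(0.9, 0.8), (0.4, 0.6)]),
    ("ClimateStats", {"weather", "history"},         [(0.7, 0.9)]),
]
print(rank_services({"weather", "json"}, candidates))
```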

Originality/value

With respect to the state of the art, this research focuses on developers' social networks in an enterprise context, integrating developer credibility assessment and implementing social network-based data service selection on top of a rich framework based on a multi-perspective model for data services.

Details

International Journal of Web Information Systems, vol. 12 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 May 2006

Andreas Langegger, Jürgen Palkoska and Roland Wagner

Abstract

The World Wide Web has undergone a rapid transition from the originally static hypertext to a ubiquitous hypermedia system. Today, the Web is not only used as a basis for distributed applications (Web applications) but also serves as a generic architecture for autonomous applications and services. Much research work has been done on the modeling and engineering process of Web applications, and various platforms, frameworks and development kits exist for the efficient implementation of such systems. Concerning the modeling process, many of the published concepts try to merge traditional hypermedia modeling with techniques from the software engineering domain. Unfortunately, those concepts which capture all facets of the Web’s architecture become rather bulky and are eventually not applicable to model-driven Web application development. Moreover, there is a need for frameworks which address both the modeling process and the implementation task, and which allow a model-driven, semi-automatic engineering process using CASE tools. This paper outlines the DaVinci Web Engineering Framework, which supports the modeling as well as the semi-automated implementation of Web applications. The DaVinci Architectural Layer specifies a persistent, hierarchical GUI model and a generic interaction scheme. This allows the hypermedia paradigm to be eliminated, which turned out to be rather practical when building Web applications.

Details

International Journal of Web Information Systems, vol. 2 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 10 December 2018

Bruno C.N. Oliveira, Alexis Huf, Ivan Luiz Salvadori and Frank Siqueira

Abstract

Purpose

This paper describes a software architecture that automatically adds semantic capabilities to data services. The proposed architecture, called OntoGenesis, is able to semantically enrich data services, so that they can dynamically provide both semantic descriptions and data representations.

Design/methodology/approach

The enrichment approach is designed to intercept the requests from data services. A domain ontology is then constructed and evolved in accordance with the syntactic representations provided by such services in order to define the data concepts. In addition, a property matching mechanism is proposed that exploits the potential data intersection observed between data service representations and external data sources so as to enhance the domain ontology with new equivalence triples. Finally, the enrichment approach is capable of deriving, on demand, a semantic description and data representations that link to the domain ontology concepts.
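
The sketch below illustrates the general idea of adding equivalence triples when two properties share observed values, using rdflib. The namespaces, property names, value sets and overlap threshold are assumptions; this is not the OntoGenesis matching algorithm, which the paper pairs with an automata-based index.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

SVC = Namespace("http://example.org/service#")   # hypothetical data-service vocabulary
EXT = Namespace("http://example.org/external#")  # hypothetical external-source vocabulary

# Observed values for two candidate properties (e.g., harvested from service responses).
service_values = {"Berlin", "Paris", "Rome", "Lisbon"}
external_values = {"Paris", "Rome", "Lisbon", "Madrid"}

def value_overlap(a: set[str], b: set[str]) -> float:
    """Share of values in common, relative to the smaller value set."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

graph = Graph()
graph.bind("owl", OWL)

# If the value sets intersect strongly enough, record the properties as equivalent.
if value_overlap(service_values, external_values) >= 0.7:  # threshold is an assumption
    graph.add((SVC.cityName, OWL.equivalentProperty, EXT.placeLabel))

print(graph.serialize(format="turtle"))
```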

Findings

Experiments were performed using real-world datasets such as DBpedia and GeoNames, as well as open government data. The results show the applicability of the proposed architecture and that it can boost the development of semantic data services. Moreover, the matching approach achieved better performance when compared with other existing approaches found in the literature.

Research limitations/implications

This work only considers services designed as data providers, i.e. services that provide an interface for accessing data sources. In addition, the approach assumes that data services and the external sources used to enhance the domain ontology have some potential for data intersection. This assumption only requires that services and external sources share particular property values.

Originality/value

Unlike most approaches found in the literature, the architecture proposed in this paper is meant to semantically enrich data services in such a way that human intervention is minimal. Furthermore, an automata-based index is presented as a novel method that significantly improves the performance of the property matching mechanism.

Details

International Journal of Web Information Systems, vol. 15 no. 1
Type: Research Article
ISSN: 1744-0084
