Search results

1–10 of over 10,000
Article
Publication date: 9 March 2015

Silvio Peroni, Alexander Dutton, Tanya Gray and David Shotton

Abstract

Purpose

Citation data needs to be recognised as a part of the Commons – those works that are freely and legally available for sharing – and placed in an open repository. The paper aims to discuss this issue.

Design/methodology/approach

The Open Citation Corpus is a new open repository of scholarly citation data, made available under a Creative Commons CC0 1.0 public domain dedication and encoded as Open Linked Data using the SPAR Ontologies.
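
As a rough illustration of how a single citation in such a corpus can be expressed as Linked Data with the SPAR Ontologies (here CiTO, whose cito:cites property links a citing work to a cited work), the following Python sketch uses rdflib; the two DOIs are hypothetical placeholders, not records drawn from the Corpus.

    # Minimal sketch: one citation expressed as Linked Data with CiTO (SPAR).
    # The two DOIs below are hypothetical placeholders.
    from rdflib import Graph, Namespace, URIRef

    CITO = Namespace("http://purl.org/spar/cito/")

    g = Graph()
    g.bind("cito", CITO)

    citing = URIRef("https://doi.org/10.1234/example-citing-article")
    cited = URIRef("https://doi.org/10.5678/example-cited-article")

    # cito:cites asserts that the first work cites the second.
    g.add((citing, CITO.cites, cited))

    print(g.serialize(format="turtle"))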

Findings

The Open Citation Corpus presently provides open access (OA) to reference lists from 204,637 articles from the OA Subset of PubMed Central, containing 6,325,178 individual references to 3,373,961 unique papers.

Originality/value

Scholars, publishers and institutions may freely build upon, enhance and reuse the open citation data for any purpose, without restriction under copyright or database law.

Details

Journal of Documentation, vol. 71 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 4 March 2014

Stefan Gradmann

Abstract

Purpose

The aim of this paper is to reposition the research library in the context of the changing information and knowledge architecture at the end of the “Gutenberg Parenthesis” and as part of the rapidly emerging “semantic” environment of the Linked Open Data paradigm. Understanding this process requires a good understanding of the evolution of the “document” notion in the passage from print-based culture to the distributed, hypertextual and RDF-based information architecture of the WWW.

Design/methodology/approach

These objectives are pursued through a literature study and a descriptive historical approach, as well as text-mining techniques that use Google nGrams as a data source.
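
For readers unfamiliar with the data source: the publicly released Google Books Ngram files (2012 release) are tab-separated records of ngram, year, match_count and volume_count. The sketch below, which assumes such a 1-gram file has been downloaded locally (the file name is a placeholder), charts the yearly frequency of the word “document”; it illustrates the kind of analysis described and is not the author's script.

    # Sketch: plot yearly occurrences of "document" from a locally downloaded
    # Google Books 1-gram file (2012-release format: ngram, year, match_count,
    # volume_count, tab-separated). The file name is a placeholder.
    import pandas as pd
    import matplotlib.pyplot as plt

    cols = ["ngram", "year", "match_count", "volume_count"]
    df = pd.read_csv("googlebooks-eng-all-1gram-20120701-d", sep="\t",
                     names=cols, header=None)

    series = (df[df["ngram"] == "document"]
              .groupby("year")["match_count"].sum())

    series.plot(title='Yearly occurrences of "document" in Google Books')
    plt.xlabel("Year")
    plt.ylabel("Match count")
    plt.show()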

Findings

The paper presents a proposal for effectively repositioning research libraries in the context of eScience and eScholarship as well as clear indications of the proposed repositioning already taking place. Furthermore, a new perspective of the “document” notion is provided.

Practical implications

The evolution described in the contribution creates opportunities for libraries to reposition themselves as aggregators and selectors of content and as contextualising agents within future Linked Data based scholarly research environments, provided they are able and ready to carry through the related cultural changes.

Originality/value

The paper will be useful for practitioners in search of strategic guidance for repositioning their librarian institutions in a context of ever increasing competition for scarce funding resources.

Details

Journal of Documentation, vol. 70 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 9 September 2014

Josep Maria Brunetti and Roberto García

Abstract

Purpose

The growing volume of semantic data available on the web results in the need to handle the information overload phenomenon. The potential of this data is enormous, but in most cases it is very difficult for users to visualize, explore and use it, especially for lay users without experience with Semantic Web technologies. The paper aims to discuss these issues.

Design/methodology/approach

The Visual Information-Seeking Mantra “Overview first, zoom and filter, then details-on-demand” proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set: the objective is for the user to get an idea of the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed so that they are generated automatically from semantic data, and they have been evaluated with end users.
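
As an illustration of the kind of automatic generation described here (a simplified analogue, not the authors' implementation), an overview component such as a site index can be derived from a data set's class structure by counting instances per class. A minimal rdflib sketch, assuming a local RDF file named data.ttl:

    # Sketch: derive a simple "site index" overview from an RDF data set by
    # counting instances per class. The file name data.ttl is a placeholder.
    from rdflib import Graph

    g = Graph()
    g.parse("data.ttl", format="turtle")

    query = """
    SELECT ?cls (COUNT(?s) AS ?n)
    WHERE { ?s a ?cls . }
    GROUP BY ?cls
    ORDER BY DESC(?n)
    """

    # Each class becomes one entry of the overview, ranked by instance count.
    for cls, n in g.query(query):
        print(f"{cls}  ({n} instances)")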

Findings

The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end users have shown that users quickly become accustomed to the components, even though they are generated automatically from structured data and require no knowledge of the underlying semantic technologies, and that the different overview components complement each other because they address different information search needs.

Originality/value

Obtaining overviews of semantic data sets cannot be easily done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical of the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to that of traditional web sites.

Details

Aslib Journal of Information Management, vol. 66 no. 5
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 23 September 2014

Tom P. Abeles

Abstract

Purpose

The purpose of this article is to present a viewpoint on the future of academic publishing. It is important for a traditional peer-reviewed academic journal that is focused on the future, particularly of post-secondary education, to be sensitive to the waters in which it swims and to sense how the climate is changing within the journal area and education as a whole.

Design/methodology/approach

This is a viewpoint on the future of academic publishing.

Findings

The rapid development of the Internet and the semantic Web is showing that: the traditional double-blind peer review process is giving way to a variety of processes, from pre- and post-publication review to open review; open access is firmly established and growing; there is a shift in promotion/tenure towards more emphasis on teaching; and the semantic Web is introducing changes in the impact value of journals in research and education, including in the function of the institutions themselves.

Social implications

Islands of concentrated knowledge locked in Ivory Towers are now readily accessible, broadly changing how individuals gain and improve competencies and make use of growing, evolving knowledge bases.

Originality/value

This article discusses the following: there is a growing alternative to the hegemony of the traditional journal publishers, even with the moderate response to open access; basic knowledge as offered in institutions is becoming a commodity whose cost is asymptotically approaching zero; and “Big Data” and the semantic engines on the Internet are amplifying the human capabilities of accessing, parsing and rapidly evaluating an increasing knowledge base, affecting research and education.

Article
Publication date: 16 December 2019

Tsvetanka Georgieva-Trifonova, Kaloyan Zdravkov and Donika Valcheva

Abstract

Purpose

The purpose of this paper is to summarize the current state of the existing research on the application of semantic technologies in bibliographic databases by providing answers to a set of research questions resulting from a systematic literature review.

Design/methodology/approach

The present study consists of a systematic literature review of research works related to the application of semantic technologies in bibliographic databases. A manual keyword search is performed in known academic databases, and as a result a total of 78 literature sources are identified as related to the topic and included in the review. From the selected literature sources, information is extracted, summarized and analyzed according to previously defined research questions, and finally reported. In addition, a framework is defined to classify the literature sources found and collected as a result of the study. The main criteria for the classification are the semantic technology used and the research problem to which semantic technologies are applied in bibliographic databases. The classification of the publications is verified by each author independently of the others.

Findings

The systematic review establishes that the evolution of semantic technologies has prompted a period of increased interest among researchers, during which the advantages of using them for bibliographic descriptions have been examined and practically confirmed. After defining semantic models for bibliographic descriptions and approaches for transforming existing bibliographic data accordingly, research interest has turned to their comparison, collation and enrichment to facilitate the search and retrieval of useful information. Possible directions for future research are outlined, relating mainly to the full use of the created data sets and their transformation into knowledge repositories.

Originality/value

Despite the increasing importance of semantic technologies in various areas, including bibliographic databases, there is a lack of a comprehensive literature review and classification of literature sources relevant to this topic. The detailed study proposed in the present paper introduces the existing experience in applying semantic technologies to bibliographic databases and facilitates the discovery of trends and guidelines for future research.

Details

The Electronic Library, vol. 38 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 20 June 2016

Götz Hatop

Abstract

Purpose

The academic tradition of adding a reference section, with references to cited and otherwise related academic material, to an article provides a natural starting point for finding links to other publications. These links can then be published as linked data. Natural language processing technologies are available today that can perform the task of bibliographical reference extraction from text. Publishing references by means of semantic web technologies is a prerequisite for a broader study and analysis of citations and thus can help to improve academic communication in a general sense. The paper aims to discuss these issues.

Design/methodology/approach

This paper examines the overall workflow required to extract, analyze and semantically publish bibliographical references within an Institutional Repository with the help of open source software components.
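
By way of illustration only (the article's actual toolchain is not reproduced here), the final publication step can be as simple as mapping an extracted reference string onto BIBO/DCTERMS terms and linking it to the citing item with CiTO. In the Python sketch below, the regular expression stands in for a real reference-extraction tool, and all identifiers and metadata are invented.

    # Sketch only: a toy regex stands in for a real reference-extraction tool;
    # the extracted reference is then published as RDF with BIBO, DCTERMS and
    # CiTO. All identifiers and metadata below are invented.
    import re
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    BIBO = Namespace("http://purl.org/ontology/bibo/")
    CITO = Namespace("http://purl.org/spar/cito/")

    raw = "Doe, J. (2015) An invented article title."
    m = re.match(r"(?P<author>.+?) \((?P<year>\d{4})\) (?P<title>.+?)\.$", raw)

    g = Graph()
    g.bind("bibo", BIBO)
    g.bind("cito", CITO)
    g.bind("dcterms", DCTERMS)

    citing = URIRef("http://repository.example.org/item/1234")
    cited = URIRef("http://repository.example.org/item/1234/ref/1")

    g.add((cited, RDF.type, BIBO.AcademicArticle))
    g.add((cited, DCTERMS.creator, Literal(m.group("author"))))
    g.add((cited, DCTERMS.date, Literal(m.group("year"))))
    g.add((cited, DCTERMS.title, Literal(m.group("title"))))
    g.add((citing, CITO.cites, cited))

    print(g.serialize(format="turtle"))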

Findings

A publication infrastructure where references are available for software agents would enable additional benefits like citation analysis, e.g. the collection of citations of a known paper and the investigation of citation sentiment. The publication of reference information as demonstrated in this article is possible with existing semantic web technologies based on established ontologies and open source software components.

Research limitations/implications

Only a limited number of metadata extraction programs have been considered for performance evaluation, and reference extraction was tested for journal articles only, whereas Institutional Repositories usually contain a large amount of other material such as monographs. Also, citation analysis is in an experimental state and citation sentiment is currently not published at all. For future work, distributing reference information between repositories is an important problem that needs to be tackled.

Originality/value

Publishing reference information as linked data is new within the academic publishing domain.

Details

Library Hi Tech, vol. 34 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 19 July 2013

Leonardo Lezcano, Salvador Sánchez‐Alonso and Antonio J. Roa‐Valverde

Abstract

Purpose

The purpose of this paper is to provide a literature review of the principal formats and frameworks that have been used in the last 20 years to exchange linguistic resources. It aims to give special attention to the most recent approaches to publishing linguistic linked open data on the Web.

Design/methodology/approach

Research papers published since 1990 on the use of various formats, standards, frameworks and methods to exchange linguistic information were divided into two main categories: those proposing specific schemas and syntaxes to suit the requirements of a given type of linguistic data (these are referred to as offline approaches), and those adopting the linked data (LD) initiative and the semantic web technologies to support the interoperability of heterogeneous linguistic resources. For each paper, the type of linguistic resource exchanged, the framework/format used, the interoperability approach taken and the related projects were identified.
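
To make the LD-based category concrete (this example is not drawn from the surveyed papers), a single lexical entry can be published with the OntoLex-Lemon core vocabulary, a model for linguistic linked open data later standardized by the W3C OntoLex community group. The namespace below is the published OntoLex namespace; the entry, form and sense URIs are invented placeholders.

    # Sketch: a lexical entry expressed with the OntoLex-Lemon core vocabulary
    # and linked to a DBpedia concept so it can join the linked open data cloud.
    # The entry, form and sense URIs are invented placeholders.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")

    g = Graph()
    g.bind("ontolex", ONTOLEX)

    entry = URIRef("http://example.org/lexicon/cat")
    form = URIRef("http://example.org/lexicon/cat#form")
    sense = URIRef("http://example.org/lexicon/cat#sense")

    g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
    g.add((entry, ONTOLEX.canonicalForm, form))
    g.add((form, ONTOLEX.writtenRep, Literal("cat", lang="en")))
    g.add((entry, ONTOLEX.sense, sense))
    g.add((sense, ONTOLEX.reference, URIRef("http://dbpedia.org/resource/Cat")))

    print(g.serialize(format="turtle"))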

Findings

The information gathered in the survey reflects an increase in recent years in approaches adopting the LD initiative. This is due to the fact that the structural and syntactic issues which arise when addressing the interoperability of linguistic resources can be solved by applying semantic web technologies. What remains an open issue in the field of computational linguistics is the development of knowledge artefacts and mechanisms to support the alignment of the different aspects of linguistic resources in order to guarantee semantic and conceptual interoperability in the linked open data (LOD) cloud. Ontologies have proved to be of great use in achieving this goal.

Research limitations/implications

The research presented here is by no means a comprehensive or all‐inclusive survey of all existing approaches to the exchange of linguistic resources. Rather, the aim was to highlight, analyze and categorize the most significant advances in the field.

Practical implications

This survey has practical implications for computational linguists and for every application requiring new developments in natural language processing. In addition, multilingual issues can be better addressed when semantic interoperability of heterogeneous linguistic resources is achieved.

Originality/value

The paper provides a survey of past and present research and developments addressing the interoperability of linguistic resources, including those where the linked data initiative has been adopted.

Book part
Publication date: 4 October 2012

David Stuart

Abstract

Purpose – To investigate the potential of the semantic web as a source of information about social networks within academia, as well as more widely for webometric investigations.

Methodology – The functionality of five semantic search engines was analyzed to determine their suitability for webometric investigations, with the most suitable, Sindice.com, then being used to investigate the use of Friend of a Friend (FOAF) within UK academic web space.
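
As a simplified illustration of this kind of analysis (Sindice.com has since been discontinued, so no search-engine API is used here), foaf:knows links can be harvested from FOAF documents and assembled into a social graph. The sketch below assumes a list of already crawled FOAF profile URLs (the URLs shown are placeholders) and the networkx library.

    # Sketch: harvest foaf:knows links from a few FOAF documents and build a
    # social graph. The profile URLs below are placeholders for crawled pages.
    import networkx as nx
    from rdflib import Graph
    from rdflib.namespace import FOAF

    profile_urls = [
        "http://example.ac.uk/staff/alice/foaf.rdf",
        "http://example.ac.uk/staff/bob/foaf.rdf",
    ]

    social = nx.DiGraph()
    for url in profile_urls:
        g = Graph()
        g.parse(url)  # rdflib infers the RDF serialization from the response
        for person, _, friend in g.triples((None, FOAF.knows, None)):
            social.add_edge(str(person), str(friend))

    print(f"{social.number_of_nodes()} people, {social.number_of_edges()} links")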

Findings – In comparison to the web of documents, the semantic web is still a small part of online content. Even the well-established FOAF social vocabulary was not found on the majority of academic web sites, let alone found to represent the majority of academics, and it provided little indication of social networks between institutions. Nonetheless, from a webometric perspective the study does show the potential of the semantic web for a far wider range of webometric investigations, and demonstrates that, unlike for the traditional web, there are currently useful tools available.

Implications – Having established that there are appropriate tools available for webometric investigations of the semantic web, and acknowledging the potential of the semantic web for far more detailed webometric investigations, there is a need for additional studies to determine the specific strengths and limitations of the tools that are available, and investigate those areas where webometric investigations can provide the most useful insights.

Originality/value – The research applies established webometric methodologies to the social semantic web, demonstrating the potential of a whole new area for future webometric investigation.

Details

Social Information Research
Type: Book
ISBN: 978-1-78052-833-5

Article
Publication date: 4 August 2020

Junzhi Jia

Abstract

Purpose

The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions.

Design/methodology/approach

This paper uses conceptual analysis methods, focusing on distinguishing concepts and analyzing their composition and intercorrelations to explore data and knowledge transitions.

Findings

Vocabularies are the cornerstone for accurately building an understanding of the meaning of data. Vocabularies provide a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage in KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.
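
A toy illustration of the schema-layer/data-layer distinction described above (not taken from the paper): the schema layer is declared with RDFS terms, and the data layer instantiates it, both living in the same graph. All URIs are invented.

    # Sketch: a tiny knowledge graph with an explicit schema layer (RDFS) and a
    # data layer (instances). All URIs below are invented for illustration.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/kg/")

    g = Graph()
    g.bind("ex", EX)

    # Schema layer: classes and a property drawn from a shared vocabulary.
    g.add((EX.Author, RDF.type, RDFS.Class))
    g.add((EX.Article, RDF.type, RDFS.Class))
    g.add((EX.wrote, RDF.type, RDF.Property))
    g.add((EX.wrote, RDFS.domain, EX.Author))
    g.add((EX.wrote, RDFS.range, EX.Article))

    # Data layer: instances typed against the schema and linked to each other.
    g.add((EX.alice, RDF.type, EX.Author))
    g.add((EX.alice, RDFS.label, Literal("Alice Example")))
    g.add((EX.paper1, RDF.type, EX.Article))
    g.add((EX.alice, EX.wrote, EX.paper1))

    print(g.serialize(format="turtle"))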

Originality/value

This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.

Details

Journal of Documentation, vol. 77 no. 1
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 12 February 2018

Danila Feitosa, Diego Dermeval, Thiago Ávila, Ig Ibert Bittencourt, Bernadette Farias Lóscio and Seiji Isotani

Abstract

Purpose

Data providers have been increasingly publishing content as linked data (LD) on the Web. This process follows guidelines (i.e. good practices) for publishing, sharing and connecting data on the Web. Practitioners in many different areas, for instance the sciences, medicine and government, use these practices to publish data. The LD community has been proposing many practices to aid the publication of data on the Web. However, discovering these practices is a costly and time-consuming task, given the volume of practices produced in the literature. Moreover, the community still lacks a comprehensive understanding of how these practices are used for publishing LD. Thus, the purpose of this paper is to investigate and better understand how best practices support the publication of LD, as well as to identify to what extent they have been applied in this field.

Design/methodology/approach

The authors conducted a systematic literature review to identify the primary studies that propose best practices to address the publication of LD, following a predefined review protocol. The authors then identified the motivations for recommending best practices for publishing LD and looked for evidence of the benefits of using such practices. The authors also examined the data formats and areas addressed by the studies as well as the institutions that have been publishing LD.

Findings

In summary, the main findings of this work are: there is empirical evidence of the benefits of using best practices for publishing LD, especially for defining standard practices and for the integrability and uniformity of LD; most of the studies used RDF as the data format; many areas are interested in disseminating data in a connected way; and a great variety of institutions have published data on the Web.
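
One concrete practice repeatedly recommended in the LD literature is to serve dereferenceable HTTP URIs with content negotiation, returning RDF to software agents and HTML to people. The Flask sketch below is illustrative only and is not taken from the reviewed studies; the resource data is a toy in-memory dictionary.

    # Sketch: dereferenceable resource URIs with content negotiation, a common
    # linked data publishing practice. Resource data is a toy in-memory dict.
    from flask import Flask, Response, request

    app = Flask(__name__)

    RESOURCES = {"alice": {"name": "Alice Example"}}

    @app.route("/resource/<rid>")
    def resource(rid):
        data = RESOURCES.get(rid)
        if data is None:
            return Response("Not found", status=404)
        if "text/turtle" in request.headers.get("Accept", ""):
            body = (f"<http://example.org/resource/{rid}> "
                    f"<http://xmlns.com/foaf/0.1/name> \"{data['name']}\" .\n")
            return Response(body, mimetype="text/turtle")
        html = f"<html><body><h1>{data['name']}</h1></body></html>"
        return Response(html, mimetype="text/html")

    if __name__ == "__main__":
        app.run()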

Originality/value

The results presented in this systematic review can be very useful to the semantic web and LD community, since it gathers evidence from the primary studies included in the review, forming a body of knowledge regarding the use of best practices for publishing LD and pointing out interesting opportunities for future research.

Details

Online Information Review, vol. 42 no. 1
Type: Research Article
ISSN: 1468-4527
