Search results

11 – 20 of 51
Article
Publication date: 27 August 2014

Paolo Manghi, Michele Artini, Claudio Atzori, Alessia Bardi, Andrea Mannocci, Sandro La Bruzzo, Leonardo Candela, Donatella Castelli and Pasquale Pagano

Abstract

Purpose

The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, can configure them to handle data according to given data models, and can construct autonomic workflows to obtain personalized aggregative infrastructures.
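
As an illustration of this service-based approach, a minimal sketch follows, assuming a toy harvest-and-transform pipeline; all class and function names are hypothetical and do not reflect D-NET's actual APIs.

```python
# Hypothetical sketch of an aggregative workflow: none of these names
# come from D-NET; they only illustrate configurable, chained services.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Record:
    source: str    # identifier of the harvested data source
    payload: dict  # source-specific metadata


def harvest(source: str) -> Iterable[Record]:
    """Pull records from one heterogeneous data source (stubbed)."""
    yield Record(source=source, payload={"title": f"example from {source}"})


def to_common_model(record: Record) -> dict:
    """Map a source-specific record onto a shared data model."""
    return {"title": record.payload.get("title", ""), "provenance": record.source}


def run_workflow(sources: list[str], transform: Callable[[Record], dict]) -> list[dict]:
    """Chain the configured services: harvest, then transform, then collect."""
    return [transform(r) for s in sources for r in harvest(s)]


print(run_workflow(["oai-pmh:repo-a", "csv:repo-b"], to_common_model))
```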

Design/methodology/approach

The paper provides a definition of aggregative infrastructures, sketching their architecture and components as inspired by real-case examples. It then describes the limits of current solutions, whose main shortcomings lie in the realization and maintenance costs of such complex software. It proposes D-NET as an optimal solution for designers and developers willing to realize aggregative infrastructures. The D-NET architecture and services are presented, drawing a parallel with those of aggregative infrastructures. Finally, real cases of D-NET adoption are presented to showcase this claim.

Findings

The D-NET software toolkit is a general-purpose service-oriented framework in which designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures under diverse application domains, and other organizations are enquiring about or experimenting with its adoption. Its customizability and extendibility make D-NET a suitable candidate for creating aggregative infrastructures mediating between different scientific domains and therefore supporting multi-disciplinary research.

Originality/value

D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature but focus on specific use-cases and therefore suffer from limited re-use in different contexts. Due to its maturity, D-NET can also be used by third-party organizations not necessarily involved in the software design and maintenance.

Details

Program, vol. 48 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 18 May 2015

Joachim Schöpfel

Abstract

Purpose

The paper aims to investigate the impact of the open access movement on the document supply of grey literature.

Design/methodology/approach

The paper is based on a comparative survey of five major scientific and technical information centres: The British Library (UK), KM (Canada), INIST-CNRS (France), KISTI (South Korea) and TIB Hannover (Germany).

Findings

The five institutions supplied fewer than 1.8 million items in 2014, i.e. half of the 2004 activity (−55 per cent). Of these, 85,000 were grey documents, mainly conference proceedings and reports, i.e. 5 per cent of the overall activity, a historically low level compared to 2004 (−72 per cent). At the same time, the institutions continue to expand their open access strategies. Just as in 2004 and 2008, these strategies are specific and reflect institutional and national choices rather than global approaches, with two or three common or comparable projects (PubMed Central, national repositories, attribution of DOIs to datasets, dissertations and other objects). In spite of all differences, their development reveals some common features, such as budget cuts, legal barriers (copyright), a focus on domestic needs and open access policies to foster dissemination and impact of research results. Document supply for corporate customers tends to become a business-to-business service, while delivery for the public sector relies more than before on resource sharing and networking with academic and public libraries. Except perhaps for the TIB Hannover, the declining importance of grey literature points towards the centres' changing role: less intermediation, less acquisition and collection development, and more high-value services and dissemination and preservation capacities designed for the needs of the scientific community (research excellence, open access, data management, etc.).

Originality/value

The paper is a follow-up study of two surveys published in 2006 and 2009.

Details

Interlending & Document Supply, vol. 43 no. 2
Type: Research Article
ISSN: 0264-1615

Article
Publication date: 8 January 2018

Chunqiu Li and Shigeo Sugimoto

Abstract

Purpose

Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.

Design/methodology/approach

The DSP-PROV model is developed by applying PROV, the general provenance description standard of the World Wide Web Consortium, to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, this paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change log description in English.
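
As a rough illustration of what such formal provenance description looks like, the following sketch encodes a schema revision with the W3C PROV vocabulary using the rdflib Python library; the URIs and the shape of the description are illustrative assumptions, not the DSP-PROV model itself.

```python
# Minimal PROV description of a schema revision with rdflib.
# The example.org URIs are hypothetical; PROV terms are from the W3C standard.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/schema/")  # hypothetical namespace

g = Graph()
g.bind("prov", PROV)

v1, v2 = EX["dsp-v1"], EX["dsp-v2"]   # two versions of a metadata schema
change = EX["change-2017-01"]         # the structural change between them

g.add((v1, RDF.type, PROV.Entity))
g.add((v2, RDF.type, PROV.Entity))
g.add((change, RDF.type, PROV.Activity))
g.add((change, PROV.used, v1))             # the change consumed version 1
g.add((v2, PROV.wasGeneratedBy, change))   # ... and produced version 2
g.add((v2, PROV.wasRevisionOf, v1))        # v2 is a revision of v1

print(g.serialize(format="turtle"))
```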

Findings

Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English to keep metadata schemas consistent over time.

Research limitations/implications

The DSP-PROV model is applicable to keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.

Originality/value

This study proposes a simple model for provenance description of structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.

Article
Publication date: 16 August 2011

Beth Posner and Evan Simpson

Abstract

Purpose

The purpose of this paper is to communicate the Rethinking Resource Sharing Initiative's goals and activities to an international audience of librarians concerned with using best practices and technology to make library resource sharing more responsive to user needs.

Design/methodology/approach

The paper provides a descriptive analysis explaining the Rethinking Resource Sharing Initiative's mission and the activities it employs to fulfill it.

Findings

The paper explains how the activities of the Rethinking Resource Sharing Initiative contribute to improving the delivery of library information services.

Originality/value

The paper provides examples of innovative strategies, programs and activities designed to advocate for, inspire, and enable successful resource sharing.

Details

Interlending & Document Supply, vol. 39 no. 3
Type: Research Article
ISSN: 0264-1615

Article
Publication date: 1 September 2015

Constanze Curdt and Dirk Hoffmeister

Abstract

Purpose

Research data management (RDM) comprises all processes that ensure that research data are well organized, documented, stored, backed up, accessible, and reusable; RDM systems form the technical framework for these processes. The purpose of this paper is to present the design and implementation of an RDM system for an interdisciplinary, collaborative, long-term research project with a focus on Soil-Vegetation-Atmosphere data.

Design/methodology/approach

The presented RDM system is based on a three-tier (client-server) architecture. This includes file-based data storage, database-based metadata storage, and a self-designed, user-friendly web interface. The system is designed in cooperation with the local computing centre, where it is also hosted. A self-designed, interoperable, project-specific metadata schema ensures the accurate documentation of all data.
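
As a loose illustration of the metadata-storage tier, here is a minimal sketch using SQLite; the field names are hypothetical stand-ins, since the project's actual self-designed schema is not given here.

```python
# Sketch of a database-backed metadata store pointing into file storage.
# Field names are hypothetical, not the project's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metadata (
        file_path TEXT PRIMARY KEY,  -- location in the file-based storage
        title     TEXT NOT NULL,
        creator   TEXT,
        variable  TEXT,              -- e.g. soil moisture, leaf area index
        collected TEXT               -- ISO 8601 date
    )
""")
conn.execute(
    "INSERT INTO metadata VALUES (?, ?, ?, ?, ?)",
    ("data/2014/site01.csv", "Soil moisture, site 01", "Example Creator",
     "soil moisture", "2014-06-01"),
)
for row in conn.execute("SELECT title, collected FROM metadata"):
    print(row)
```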

Findings

An RDM system has to be designed and implemented according to the requirements of the project participants, and general challenges and problems of RDM should be considered. Close cooperation with the scientists thus secures the acceptance and usage of the system.

Originality/value

This paper provides evidence that implementing an RDM system within the infrastructure provided and maintained by a computing centre offers many advantages. Consequently, the designed system is independent of the project funding, and access to and re-use of all involved project data are ensured. The approach has already been transferred successfully to another interdisciplinary research project. Furthermore, the designed metadata schema can be expanded according to changing project requirements.

Details

Program: electronic library and information systems, vol. 49 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 9 November 2023

Gustavo Candela, Nele Gabriëls, Sally Chambers, Milena Dobreva, Sarah Ames, Meghan Ferriter, Neil Fitzgerald, Victor Harbo, Katrine Hofmann, Olga Holownia, Alba Irollo, Mahendra Mahey, Eileen Manchester, Thuy-An Pham, Abigail Potter and Ellen Van Keer

Abstract

Purpose

The purpose of this study is to offer a checklist that can be used for both creating and evaluating digital collections suitable for computational use, which are also sometimes referred to as data sets as part of the collections as data movement.

Design/methodology/approach

The checklist was built by synthesising and analysing the results of relevant research literature, articles and studies, together with the issues and needs identified in an observational study. The checklist was tested and applied both as a tool for assessing a selection of digital collections made available by galleries, libraries, archives and museums (GLAM) institutions, as a proof of concept, and as a supporting tool for creating collections as data.
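
To illustrate how such a checklist might be applied mechanically, here is a small hypothetical sketch; the checklist items shown are illustrative placeholders, not the authors' actual checklist.

```python
# Hypothetical checklist evaluation; the items are illustrative placeholders.
CHECKLIST = [
    "open licence stated",
    "documentation for computational use",
    "machine-readable formats provided",
    "persistent identifier assigned",
]


def coverage(collection: dict[str, bool]) -> float:
    """Share of checklist items a digital collection satisfies."""
    return sum(collection.get(item, False) for item in CHECKLIST) / len(CHECKLIST)


sample = {"open licence stated": True, "machine-readable formats provided": True}
print(f"checklist coverage: {coverage(sample):.0%}")  # -> 50%
```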

Findings

Over the past few years, there has been growing interest in making digital collections published by GLAM organisations available for computational use. Based on previous work, the authors defined a methodology to build a checklist for the publication of collections as data. The authors' evaluation showed several examples of applications that can be useful to encourage other institutions to publish their digital collections for computational use.

Originality/value

While some work exists on making digital collections available for computational use, with particular attention to data quality, planning and experimentation, to the best of the authors' knowledge none of the work to date provides an easy-to-follow and robust checklist for publishing collection data sets in GLAM institutions. This checklist is intended to encourage small- and medium-sized institutions to adopt the collections as data principles in daily workflows, following best practices and guidelines.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 6 June 2018

Wolfgang Zenk-Möltgen, Esra Akdeniz, Alexia Katsanidou, Verena Naßhoven and Ebru Balaban

Abstract

Purpose

Open data and data sharing should improve transparency of research. The purpose of this paper is to investigate how different institutional and individual factors affect the data sharing behavior of authors of research articles in sociology and political science.

Design/methodology/approach

Desktop research analyzed attributes of sociology and political science journals (n=262) from their websites. A second data set of articles (n=1,011; published 2012-2014) was derived from ten of the main journals (five from each discipline), and stated data sharing was examined. A survey of the authors used the Theory of Planned Behavior to examine motivations, behavioral control, and perceived norms for sharing data. Statistical tests (Spearman's ρ, χ²) examined correlations and associations.
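
For readers unfamiliar with the two tests, the following toy example shows how they are typically computed with SciPy, on fabricated numbers rather than the study's data.

```python
# Toy data only: Spearman's rho for a rank correlation and a chi-squared
# test of association on a 2x2 contingency table, as in the study's methods.
from scipy.stats import chi2_contingency, spearmanr

impact_factor = [0.8, 1.2, 2.5, 3.1, 4.0]
share_sharing_data = [0.10, 0.15, 0.30, 0.35, 0.50]
rho, p_rho = spearmanr(impact_factor, share_sharing_data)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")

#                data shared   not shared
contingency = [[30, 70],   # journal has a data policy
               [10, 90]]   # journal has no data policy
chi2, p_chi2, dof, _ = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.3f}")
```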

Findings

Although many journals have a data policy for their authors (78 percent in sociology, 44 percent in political science), only around half of the empirical articles stated that the data were available, and for only 37 percent of the articles could the data be accessed. Journals with higher impact factors, those with a stated data policy, and younger journals were more likely to offer data availability. Of the authors surveyed, 446 responded (44 percent). Statistical analysis indicated that authors’ attitudes, reported past behavior, social norms, and perceived behavioral control affected their intentions to share data.

Research limitations/implications

Fewer than 50 percent of the authors contacted responded to the survey. Results indicate that data sharing would improve if journals had explicit data sharing policies, but authors also need support from other institutions (their universities, funding councils, and professional associations) to improve data management skills and infrastructures.

Originality/value

This paper builds on previous similar research in sociology and political science and explains some of the barriers to data sharing in social sciences by combining journal policies, published articles, and authors’ responses to a survey.

Article
Publication date: 24 August 2021

Nushrat Khan, Mike Thelwall and Kayvan Kousha

Abstract

Purpose

The purpose of this study is to explore current practices, challenges and technological needs of different data repositories.

Design/methodology/approach

An online survey was designed for data repository managers, and contact information was collected from re3data, a data repository registry, to disseminate the survey.
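
As a hedged sketch of this kind of data collection, the snippet below lists repository records from re3data's public REST API; the endpoint and XML element names are assumptions based on the API's documented list format and should be verified against current documentation.

```python
# Assumed endpoint and element names for re3data's public REST API;
# verify against the current API documentation before relying on this.
import xml.etree.ElementTree as ET

import requests

resp = requests.get("https://www.re3data.org/api/v1/repositories", timeout=30)
resp.raise_for_status()

root = ET.fromstring(resp.content)
for repo in root.iter("repository"):
    print(repo.findtext("id"), repo.findtext("name"))
```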

Findings

In total, 189 responses were received, 47% from discipline-specific and 34% from institutional data repositories. Of the repositories that reported their software, 71% used bespoke technical frameworks, with DSpace, EPrints and Dataverse commonly used by institutional repositories. Of repository managers, 32% reported tracking secondary data reuse, while 50% would like to. Among data reuse metrics, citation counts were considered extremely important by the majority, followed by links to the data from other websites and download counts. Despite their perceived usefulness, repository managers struggle to track dataset citations. Most repository managers support dataset and metadata quality checks via librarians, subject specialists or information professionals. A lack of engagement from users and a lack of human resources are the top two challenges, and outreach is the most common motivator mentioned by repositories across all groups. Ensuring findable, accessible, interoperable and reusable (FAIR) data (49%), providing user support for research (36%) and developing best practices (29%) are the top three priorities for repository managers. The main recommendations for future repository systems are: integration and interoperability between data and systems (30%), better research data management (RDM) tools (19%), tools that allow computation without downloading datasets (16%) and automated systems (16%).

Originality/value

This study identifies the current challenges and needs for improving data repository functionalities and user experiences.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2021-0204

Details

Online Information Review, vol. 46 no. 3
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 25 April 2018

Eun G. Park, Gordon Burr, Victoria Slonosky, Renee Sieber and Lori Podolsky

Abstract

Purpose

To rescue at-risk historical scientific data stored at the McGill Observatory, the objectives of the Data Rescue Archive Weather (DRAW) project are: to build a repository; to develop a protocol to preserve the data in weather registers; and to make the data available to research communities and the public. The paper aims to discuss these issues.

Design/methodology/approach

The DRAW project adopts a model compliant with the Open Archival Information System (OAIS) as a conceptual framework for building a digital repository. The model consists of data collection, conversion, data capture, transcription, arrangement, description, data extraction, database design and repository setup.
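
As an illustration only, the stages named above can be pictured as a simple pipeline; the sketch below stubs each stage and follows only the ordering given in the abstract.

```python
# Stubbed stages in the order given above; behaviour is illustrative only.
def collect(register_id: str) -> str:
    return f"scan of register {register_id}"            # collection/conversion


def transcribe(image: str) -> list[dict]:
    return [{"date": "1880-01-01", "temp_f": 12.0}]     # capture/transcription


def extract(records: list[dict]) -> list[tuple]:
    return [(r["date"], r["temp_f"]) for r in records]  # data extraction


def deposit(rows: list[tuple]) -> None:
    print(f"deposited {len(rows)} observation(s) into the repository")


deposit(extract(transcribe(collect("MO-1880-01"))))
```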

Findings

A climate data repository, as the final product, is set up for digital images of registers and a database is designed for data storage. The repository provides dissemination of and access to the data for researchers, information professionals and the public.

Research limitations/implications

Quality checking is the most important aspect of rescuing historical scientific data, to ensure the accuracy, reliability and consistency of the data.

Practical implications

The DRAW project shows how the use of historical scientific data has become a key element of research in scientific fields such as climatology and environmental protection.

Originality/value

The historical climate data set of the McGill Observatory is by nature unique, and complex to preserve and use for research purposes. Historical scientific data are challenging to rescue and describe as a result of their heterogeneous and non-standardized form.

Details

Journal of Documentation, vol. 74 no. 4
Type: Research Article
ISSN: 0022-0418

Content available
Article
Publication date: 18 September 2017

Adèle Paul-Hus, Nadine Desrochers, Sarah de Rijcke and Alexander D. Rushforth

Details

Aslib Journal of Information Management, vol. 69 no. 5
Type: Research Article
ISSN: 2050-3806
