Search results

1 – 10 of over 5000
Article
Publication date: 24 January 2023

Li Si, Li Liu and Yi He

Abstract

Purpose

This paper aims to understand the current development situation of scientific data management policy in China, analyze the content structure of the policy and provide a theoretical basis for the improvement and optimization of the policy system.

Design/methodology/approach

China's scientific data management policies were obtained through various channels, such as government websites and policy and legal databases; after screening and integration, 209 policies were identified as the sample for analysis. A three-dimensional framework was constructed from the perspective of policy tools, combining stakeholder and lifecycle theories, and the content of the policy texts was coded and quantitatively analyzed according to this framework.

Findings

China's scientific data management policies can be divided into four stages according to the time sequence: infancy, preliminary exploration, comprehensive promotion and key implementation. The policies use a combination of three types of policy tools: supply-side, environmental-side and demand-side, involving multiple stakeholders and covering all stages of the lifecycle. However, the policy tools and their application to stakeholders and lifecycle stages are imbalanced. Future scientific data management policy should strengthen the balance of policy tools, promote the participation of multiple subjects and focus on supervision of the whole lifecycle.

Originality/value

This paper constructs a three-dimensional analytical framework and uses content analysis to quantitatively analyze scientific data management policy texts, extending the research perspective and research content in the field of scientific data management. The study identifies policy focuses and proposes several strategies that will help optimize the scientific data management policy.

Details

Aslib Journal of Information Management, vol. 76 no. 2
Type: Research Article
ISSN: 2050-3806

Open Access
Article
Publication date: 14 July 2022

Chunlai Yan, Hongxia Li, Ruihui Pu, Jirawan Deeprasert and Nuttapong Jotikasthira

Abstract

Purpose

This study aims to provide a systematic and complete knowledge map for researchers working in the field of research data, and to help them quickly understand the collaboration characteristics of authors and institutions, trending research topics, evolutionary trends and research frontiers in the field from the perspective of library informatics.

Design/methodology/approach

The authors adopt a bibliometric method and, with the help of the bibliometric analysis software CiteSpace and VOSviewer, quantitatively analyze the retrieved literature data. The analysis results are presented in the form of tables and visualization maps.
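The co-occurrence counting that underlies the collaboration maps produced by tools such as VOSviewer can be sketched in a few lines. This is an illustrative sketch only, not the authors' actual pipeline, and the sample records are hypothetical:

```python
# Count co-authorship links across a set of bibliographic records,
# the raw quantity a collaboration map visualizes as edge weights.
from itertools import combinations
from collections import Counter

# Hypothetical author lists, one per retrieved paper.
records = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author C"],
    ["Author B", "Author D"],
]

# Count how often each pair of authors appears on the same paper.
pairs = Counter()
for authors in records:
    for a, b in combinations(sorted(authors), 2):
        pairs[(a, b)] += 1

print(pairs[("Author A", "Author C")])  # → 2
```

The same counting scheme applies to institutional collaboration (replace author lists with affiliation lists) or keyword co-occurrence.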

Findings

The results show that collaboration between scholars, and between institutions, is weak. The study also identified the current hotspots in the field of research data: data literacy education, research data sharing, data integration management, joint library cataloguing and data research support services, among others. Important dimensions for future research are the library's participation in trans-organizational and trans-stage integration of research data, functional improvement of research data sharing platforms, the practice of data literacy education methods and models, and improvement of research data service quality.

Originality/value

Previous literature reviews on research data have been largely qualitative, with few quantitative studies. This paper therefore uses quantitative methods, such as bibliometrics, data mining and knowledge mapping, to reveal systematically and intuitively the research progress and trends on the topic of research data based on published literature, and to provide a reference for further study of this topic.

Details

Library Hi Tech, vol. 42 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 13 March 2024

Mpilo Siphamandla Mthembu and Dennis N. Ocholla

Abstract

Purpose

In today's global and competitive corporate environment characterised by rapidly changing information, knowledge and technology (IKT), researchers must be upskilled in all aspects of research data management (RDM). This study investigates a set of capabilities and competencies required by researchers at selected South African public universities, using the community capability model framework (CCMF) in conjunction with the digital curation centre (DCC) lifecycle model.

Design/methodology/approach

The study adopted a post-positivist paradigm, using case studies that combined qualitative and quantitative methodologies. Because of COVID-19 pandemic rules and regulations, semi-structured interviews with 23 study participants were conducted online via Microsoft Teams to collect qualitative data, while questionnaires were converted into Google Forms and emailed to 30 National Research Foundation (NRF)-rated researchers to collect quantitative data.

Findings

Participating institutions are still in the initial stages of providing RDM services. Most researchers are unaware of how long their institutions retain research data, and they store and back up their research data on personal computers, emails and external storage devices. Data management, research methodology, data curation, metadata skills and technical skills are critically important RDM competency requirements for both staff and researchers. Adequate infrastructure, as well as human resources and capital, is in short supply. There are currently no specific capacity-building programmes or strategies for developing RDM skills, and a lack of data curation skills is a major challenge in providing RDM.

Practical implications

The findings of the study can be applied widely in research, teaching and learning. Furthermore, the research could help shape RDM strategy and policy in South Africa and elsewhere.

Originality/value

The scope, subject matter and application of this study contribute to its originality and novelty.

Details

Library Management, vol. 45 no. 3/4
Type: Research Article
ISSN: 0143-5124

Article
Publication date: 25 March 2024

Yusuf Ayodeji Ajani, Emmanuel Kolawole Adefila, Shuaib Agboola Olarongbe, Rexwhite Tega Enakrire and Nafisa Rabiu

Abstract

Purpose

This study aims to examine Big Data and the management of libraries in the era of the Fourth Industrial Revolution and its implications for policymakers in Nigeria.

Design/methodology/approach

A qualitative methodology was used, involving the administration of open-ended questionnaires to librarians from six selected federal universities located in Southwest Nigeria.

Findings

The findings of this research highlight that a significant proportion of librarians are well-acquainted with the relevance of big data and its potential to positively revolutionize library services. Librarians generally express favorable opinions concerning the relevance of big data, acknowledging its capacity to enhance decision-making, optimize services and deliver personalized user experiences.

Research limitations/implications

This study exclusively focuses on the Nigerian context, overlooking insights from other African countries. As a result, it may not be possible to generalize the study’s findings to the broader African library community.

Originality/value

To the best of the authors’ knowledge, this study is unique in reporting that librarians in the selected Nigerian federal universities generally express favorable opinions concerning the relevance of big data, acknowledging its capacity to enhance decision-making, optimize services and deliver personalized user experiences.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Book part
Publication date: 31 January 2024

Anna Leditschke, Julie Nichols, Karl Farrow and Quenten Agius

Abstract

The increased use of, and reliance upon, technology and digitalisation, especially in the galleries, libraries, archives and museums [GLAM] sector, has motivated innovative approaches to the curation of cultural material. These changes are especially evident when collaborating with Indigenous partners. Indigenous Data Governance [IDG] and Indigenous Data Sovereignty [IDS], with an emphasis on self-determination of Indigenous peoples, have called for an emerging focus on ethical and culturally sensitive approaches to data collection and management across a range of disciplines and sectors.

This chapter reports on broader discussions with Indigenous community members from mid-North South Australia around the appropriate and ethical collection, representation and curation of cultural material on Country using digital formats. It investigates ways to create a ‘future identity’ through built form, as well as providing a ‘safe’ place for the preservation of their oral histories.

It highlights the many questions raised around the ethical and culturally sensitive aspects of the collection, curation and archiving of Indigenous cultural material, and documents the preliminary outcomes of these conversations in the context of current research on IDS best practices in the field. The non-Aboriginal authors acknowledge their supporting position in the realisation of effective IDS and the self-determination of their Aboriginal partners.

Details

Data Curation and Information Systems Design from Australasia: Implications for Cataloguing of Vernacular Knowledge in Galleries, Libraries, Archives, and Museums
Type: Book
ISBN: 978-1-80455-615-3

Open Access
Article
Publication date: 12 December 2022

Bianca Gualandi, Luca Pareschi and Silvio Peroni

Abstract

Purpose

This article describes the interviews the authors conducted in late 2021 with 19 researchers at the Department of Classical Philology and Italian Studies at the University of Bologna. The main purpose was to shed light on the definition of the word “data” in the humanities domain, as far as FAIR data management practices are concerned, and on what researchers think of the term.

Design/methodology/approach

The authors invited one researcher for each of the official disciplinary areas represented within the department and all 19 accepted to participate in the study. Participants were then divided into five main research areas: philology and literary criticism, language and linguistics, history of art, computer science and archival studies. The interviews were transcribed and analysed using a grounded theory approach.

Findings

A list of 13 research data types has been compiled thanks to the information collected from participants. The term “data” does not emerge as especially problematic, although a good deal of confusion remains. Looking at current research management practices, methodologies and teamwork appear more central than previously reported.

Originality/value

The findings confirm that “data” within the FAIR framework should include all types of inputs and outputs that humanities researchers work with, including publications. The participants of this study also appear ready for a discussion around making their research data FAIR: they do not find the terminology particularly problematic, and they rely on precise, recognised methodologies as well as on sharing and collaboration with colleagues.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 31 October 2023

Neema Florence Mosha and Patrick Ngulube

Abstract

Purpose

The study aims to investigate the utilisation of open research data repositories (RDRs) for storing and sharing research data in higher learning institutions (HLIs) in Tanzania.

Design/methodology/approach

A survey research design was employed to collect data from postgraduate students at the Nelson Mandela African Institution of Science and Technology (NM-AIST) in Arusha, Tanzania. The data were collected and analysed both quantitatively and qualitatively, and a census sampling technique was employed to select the sample for this study. The quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS), whilst the qualitative data were analysed thematically.

Findings

Less than half of the respondents were aware of and used open RDRs, including the Zenodo, DataVerse, Dryad, OMERO, GitHub and Mendeley data repositories. More than half of the respondents were unwilling to share research data, citing a loss of ownership after depositing their research data in most open RDRs, as well as data security concerns. HLIs need to conduct training on using trusted repositories and to motivate postgraduate students to utilise open repositories (ORs). The challenges underlying the underutilisation of open RDRs were a lack of policies governing the storage and sharing of research data, and grant constraints.

Originality/value

Research data storage and sharing are of great interest to researchers in HLIs, and these findings can inform the implementation of open RDRs to support them. Open RDRs increase visibility within HLIs, reduce research data loss and enable research works to be cited and used publicly. This paper also identifies the potential for additional studies focussed on this area.

Article
Publication date: 25 January 2024

Besiki Stvilia and Dong Joon Lee

Abstract

Purpose

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data quality assurance (DQA) activities. Its findings can help develop operational DQA models and best practice guides and identify opportunities for innovation in the DQA activities.

Design/methodology/approach

The study analyzed 122 data repositories' applications for the Core Trustworthy Data Repositories, interview transcripts of 32 curators and repository managers and data curation-related webpages of their repository websites. The combined dataset represented 146 unique RDRs. The study was guided by a theoretical framework comprising activity theory and an information quality evaluation framework.

Findings

The study provided a theory-based examination of the DQA practices of RDRs, summarized in a conceptual model. The authors identified three DQA activities (evaluation, intervention and communication) and their structures, including activity motivations, roles played, and mediating tools, rules and standards. When defining data quality, study participants went beyond the traditional definition of data quality and referenced seven facets of ethical and effective information systems in addition to data quality. Furthermore, the participants and RDRs referenced 13 dimensions in their DQA models. The study revealed that DQA activities were prioritized by data value, level of quality, available expertise, cost and funding incentives.

Practical implications

The study's findings can inform the design and construction of digital research data curation infrastructure components on university campuses that aim to provide access not just to big data but to trustworthy data. Communities of practice focused on repositories and archives could consider adding FAIR operationalizations, extensions and metrics focused on data quality. The availability of such metrics and associated measurements can help reusers determine whether they can trust and reuse a particular dataset. The findings of this study can help to develop such data quality assessment metrics and intervention strategies in a sound and systematic way.

Originality/value

To the best of the authors' knowledge, this paper is the first data quality theory guided examination of DQA practices in RDRs.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 13 October 2023

Judit Gárdos, Julia Egyed-Gergely, Anna Horváth, Balázs Pataki, Roza Vajda and András Micsik

Abstract

Purpose

The present study is about generating metadata to enhance thematic transparency and facilitate research on interview collections at the Research Documentation Centre, Centre for Social Sciences (TK KDK) in Budapest. It explores the use of artificial intelligence (AI) in producing, managing and processing social science data and its potential to generate useful metadata to describe the contents of such archives on a large scale.

Design/methodology/approach

The authors combined manual and automated/semi-automated methods of metadata development and curation. The authors developed a suitable domain-oriented taxonomy to classify a large text corpus of semi-structured interviews. To this end, the authors adapted the European Language Social Science Thesaurus (ELSST) to produce a concise, hierarchical structure of topics relevant in social sciences. The authors identified and tested the most promising natural language processing (NLP) tools supporting the Hungarian language. The results of manual and machine coding will be presented in a user interface.

Findings

The study describes how an international social science taxonomy can be adapted to a specific local setting and tailored for use by automated NLP tools. The authors show the potential and limitations of existing and new NLP methods for thematic assignment, and discuss the current possibilities of multi-label classification in social scientific metadata assignment, that is, the problem of automatically selecting the relevant labels from a large pool.
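The multi-label setting discussed here (each interview may receive several taxonomy labels at once) can be sketched independently of any particular NLP toolkit. This is a minimal illustrative sketch, not the study's method; the taxonomy terms and keyword lists below are hypothetical and not taken from ELSST:

```python
# Multi-label topic assignment as one binary decision per taxonomy label:
# a document is tagged with every label whose keyword set it matches.
TAXONOMY = {
    "migration": {"migration", "migrant", "emigration"},
    "housing": {"housing", "apartment", "rent"},
    "employment": {"employment", "job", "work"},
}

def assign_labels(text: str) -> set[str]:
    """Return every taxonomy label whose keywords occur in the text."""
    tokens = set(text.lower().split())
    return {label for label, keywords in TAXONOMY.items() if tokens & keywords}

print(assign_labels("an interview about rent and job loss after migration"))
# → {'migration', 'housing', 'employment'} (as a set; order may vary)
```

A trained multi-label classifier replaces the keyword test with one learned binary decision per label, but the output shape is the same: a set of labels per document rather than a single class.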

Originality/value

Interview materials have not yet been used for building manually annotated training datasets for automated indexing of scientifically relevant topics in a data repository. Comparing various automated-indexing methods, this study shows a possible implementation of a researcher tool supporting custom visualizations and the faceted search of interview collections.

Article
Publication date: 23 March 2023

Mohd Naz’ri Mahrin, Anusuyah Subbarao, Suriayati Chuprat and Nur Azaliah Abu Bakar

Abstract

Purpose

Cloud computing promises dependable services offered through next-generation data centres based on virtualization technologies for computation, networking and storage. The tremendous expansion of data has made big data applications viable through cloud computing technologies, and disaster management is one of the areas where such applications are rapidly being deployed. This study looks at how big data is being used in conjunction with cloud computing to enhance disaster risk reduction (DRR). The paper aims to explore and review existing frameworks for big data in disaster management and to provide an insightful view of how cloud-based big data platforms are applied toward DRR.

Design/methodology/approach

A systematic mapping study was conducted to answer four research questions, drawing on papers related to big data analytics, cloud computing and disaster management published between 2013 and 2019. A total of 26 papers were selected after the five steps of systematic mapping.

Findings

Findings are presented for each of the four research questions.

Research limitations/implications

Specific studies of big data platforms applied to disaster management are still limited in general. This lack of research leaves the field open for further study.

Practical implications

In terms of technology, DRR research that leverages existing big data platforms is still lacking. In terms of data, much disaster data is available, but scientists still struggle to learn from and listen to the data and to take more proactive disaster-preparedness action.

Originality/value

This study shows that the platform most commonly selected by researchers is the CPU-based Apache Hadoop. Apache Spark, which uses in-memory processing, requires a large memory capacity and is therefore less preferred in research.

Details

Journal of Science and Technology Policy Management, vol. 14 no. 6
Type: Research Article
ISSN: 2053-4620
