Search results

1 – 10 of over 9000
Article
Publication date: 19 July 2013

Martin Hubert Ofner, Kevin Straub, Boris Otto and Hubert Oesterle

Abstract

Purpose

The purpose of the paper is to propose a reference model describing a holistic view of the master data lifecycle, including strategic, tactical and operational aspects. The Master Data Lifecycle Management (MDLM) map provides a structured approach to analyze the master data lifecycle.

Design/methodology/approach

Embedded in a design-oriented research process, the paper applies the Component Business Model (CBM) method and suggests a reference model that identifies the business components required to manage the master data lifecycle. CBM is a patented IBM method for analyzing the key components of a business domain. The paper uses a participative case study to evaluate the suggested model.

Findings

Based on a participative case study, the paper shows how the reference model makes it possible to analyze the master data lifecycle on a strategic, a tactical and an operational level, and how it helps identify areas of improvement.

Research limitations/implications

The paper presents design work and a participative case study. The reference model is grounded in existing literature and represents a comprehensive framework forming the foundation for future analysis of the master data lifecycle. Furthermore, the model represents an abstraction of an organization's master data lifecycle. Hence, it forms a “theory for designing”. More research is needed in order to more thoroughly evaluate the presented model in a variety of real‐life settings.

Practical implications

The paper shows how the reference model enables practitioners to analyze the master data lifecycle and how it helps identify areas of improvement.

Originality/value

The paper reports on an attempt to establish a holistic view of the master data lifecycle, including strategic, tactical and operational aspects, in order to provide more comprehensive support for its analysis and improvement.

Details

Journal of Enterprise Information Management, vol. 26 no. 4
Type: Research Article
ISSN: 1741-0398

Article
Publication date: 19 April 2018

Andrew Martin Cox and Winnie Wan Ting Tam

Abstract

Purpose

Visualisations of research and research-related activities including research data management (RDM) as a lifecycle have proliferated in the last decade. The purpose of this paper is to offer a systematic analysis and critique of such models.

Design/methodology/approach

A framework for analysis, synthesised from the literature, is presented and applied to nine examples.

Findings

The strengths of the lifecycle representation are that it clarifies stages in research and captures key features of project-based research. Its weakness is that it typically masks various aspects of the complexity of research, constructing research as highly purposive, serial, uni-directional and occurring in a somewhat closed system. Other types of models, such as the spiral of knowledge creation or the data journey, reveal other stories about research. It is suggested that other metaphors and visualisations of research need to be developed.

Research limitations/implications

The paper explores the strengths and weaknesses of the popular lifecycle model for research and RDM, and also considers alternative ways of representing them.

Practical implications

Librarians use lifecycle models to explain service offerings to users, so the analysis will help them identify the best type of representation for particular cases. The critique offered by the paper also reveals that, because researchers do not necessarily identify with a lifecycle representation, alternative ways of representing research need to be developed.

Originality/value

The paper offers a systematic analysis of visualisations of research and RDM current in the Library and Information Studies literature revealing the strengths and weaknesses of the lifecycle metaphor.

Details

Aslib Journal of Information Management, vol. 70 no. 2
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 6 August 2018

Shivam Gupta and Claudia Müller-Birn

Abstract

Purpose

The traditional means of pursuing research by having all the parameters and processes under one roof has given way to collaborative mechanisms of performing the same task. Collaborative work increases the quality of research and is a major contributing factor to the growth of scientific knowledge. This process also helps train new, well-informed academics and scientists. e-Research (Electronic Research) has gained a significant amount of traction as technology serves as the backbone for undertaking collaborative research. The purpose of this paper is to provide a synoptic view of existing research surrounding e-Research and to suggest a data lifecycle model that can improve the outcome of collaborative research.

Design/methodology/approach

A systematic literature review methodology was employed to undertake this study. Using the outcome of the literature review and an analysis of existing data lifecycle models, an improved version of the data lifecycle model is suggested.

Findings

This study develops a conceptual model of the data lifecycle for collaborative research. The literature review in the domain of e-Research shows that the reviewed papers focus on the following stages of the data lifecycle model: concept and design, data collection, data processing, sharing and distribution of data, and data analysis.

Research limitations/implications

Only journal papers were considered for the literature review; conference proceedings were not included.

Originality/value

This paper suggests a conceptual model of the data lifecycle for collaborative research. The study can be useful for academic and research institutions in designing their own data lifecycle models.

Details

Benchmarking: An International Journal, vol. 25 no. 6
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 2 July 2020

Johann Van Wyk, Theo Bothma and Marlene Holmner

Abstract

Purpose

The purpose of this article is to give an overview of the development of a Virtual Research Environment (VRE) conceptual model for the management of research data at a South African university.

Design/methodology/approach

The research design of this article consists of empirical and non-empirical research. The non-empirical part consists of a critical literature review that synthesises the strengths, weaknesses (limitations) and omissions of VRE models identified in the literature in order to develop a conceptual VRE model. As part of the critical literature review, concepts were clarified and possible applications of VREs in research lifecycles and research data lifecycles were explored. The empirical part focused on the practical application of this model. This part of the article follows an interpretivist paradigm and a qualitative research approach, using case studies as the inquiry method. Case studies with a positivist perspective were selected through purposive sampling, and inferences were drawn from the sample to design and test a conceptual VRE model and to investigate the management of research data through a VRE. The investigation was conducted through a process of participatory action research (PAR) and included semi-structured interviews and participant observation as data collection techniques. Findings were evaluated through formative and summative evaluation.

Findings

The article presents a VRE conceptual model, with identified generic component layers and components that could potentially be applied and used in different research settings/disciplines. The article also reveals the role that VREs play in the successful management of research data throughout the research lifecycle. Guidelines for setting up a conceptual VRE model are offered.

Practical implications

This article helps clarify and validate the various components of a conceptual VRE model that could be used for research data management in different research settings and disciplines.

Originality/value

This article confirms/validates generic layers and components that would be needed in a VRE by synthesising these in a conceptual model in the context of a research lifecycle and presents guidelines for setting up a conceptual VRE model.

Details

Library Management, vol. 41 no. 6/7
Type: Research Article
ISSN: 0143-5124

Article
Publication date: 12 September 2016

Alex H. Poole

Abstract

Purpose

The purpose of this paper is to define and describe digital curation, an emerging field of theory and practice in the information professions that embraces digital preservation, data curation, and management of information assets over their lifecycle. It dissects key issues and debates in the area while arguing that digital curation is a vital strategy for dealing with the so-called data deluge.

Design/methodology/approach

This paper explores digital curation’s potential to provide an improved return on investment in data work.

Findings

A vital counterweight to the problem of data loss, digital curation also adds value to trusted data assets for current and future use. This paper unpacks data, the research enterprise, the roles and responsibilities of digital curation professionals, the data lifecycle, metadata, sharing and reuse, scholarly communication (cyberscholarship, publication and citation, and rights), infrastructure (archives, centers, libraries, and institutional repositories), and overarching issues (standards, governance and policy, planning and data management plans, risk management, evaluation, and metrics, sustainability, and outreach).

Originality/value

A critical discussion that focusses on North America and the UK, this paper synthesizes previous findings and conclusions in the area of digital curation. It has value for digital curation professionals and researchers as well as students in library and information science who may deal with data in the future. This paper helps potential stakeholders understand the intellectual and practical framework and the importance of digital curation in adding value to scholarly (science, social science, and humanities) and other types of data. This paper suggests the need for further empirical research, not only in exploring the actual sharing and reuse practices of various sectors, disciplines, and domains, but also in considering the data lifecycle, the potential role of archivists, funding and sustainability, outreach and awareness-raising, and metrics.

Details

Journal of Documentation, vol. 72 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 8 October 2018

Majed Alshammari and Andrew Simpson

Abstract

Purpose

Concerns over data-processing activities that may lead to privacy violations or harms have motivated the development of legal frameworks and standards. Further, software engineers are increasingly expected to develop and maintain privacy-aware systems that both comply with such frameworks and standards and meet reasonable expectations of privacy. This paper aims to facilitate reasoning about compliance with such legal frameworks and standards, with a view to providing the necessary technical assurances.

Design/methodology/approach

The authors show how the standard extension mechanisms of the UML meta-model might be used to specify and represent data-processing activities in a way that is amenable to privacy compliance checking and assurance.

Findings

The authors demonstrate the usefulness and applicability of the extension mechanisms in specifying key aspects of privacy principles as assumptions and requirements, as well as in providing criteria for the evaluation of these aspects to assess whether the model meets these requirements.

Originality/value

First, the authors show how key aspects of abstract privacy principles can be modelled using stereotypes and tagged values as privacy assumptions and requirements. Second, the authors show how compliance with these principles can be assured via constraints that establish rules for the evaluation of these requirements.
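
To make the mechanism concrete, here is a loose, hypothetical sketch in Python rather than UML: a dataclass stands in for a stereotyped data-processing activity, its fields play the role of tagged values, and a small function plays the role of a constraint that evaluates a purpose-limitation rule. The names and the rule are illustrative assumptions, not the authors' profile.

from dataclasses import dataclass

@dataclass
class PersonalDataProcessing:
    """Stand-in for a stereotyped data-processing activity ("PersonalDataProcessing")."""
    purpose: str          # tagged value: declared purpose of this processing step
    collected_for: str    # tagged value: purpose for which the data was originally collected
    retention_days: int   # tagged value: how long the data is kept

def satisfies_purpose_limitation(step: PersonalDataProcessing) -> bool:
    """Constraint-style rule: the processing purpose must match the collection purpose."""
    return step.purpose == step.collected_for

step = PersonalDataProcessing(purpose="analytics",
                              collected_for="service delivery",
                              retention_days=30)
print(satisfies_purpose_limitation(step))  # False, so the model would be flagged for review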

Details

Information & Computer Security, vol. 26 no. 4
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 24 January 2023

Li Si, Li Liu and Yi He

Abstract

Purpose

This paper aims to understand the current development situation of scientific data management policy in China, analyze the content structure of the policy and provide a theoretical basis for the improvement and optimization of the policy system.

Design/methodology/approach

China's scientific data management policies were obtained through various channels, such as searching government websites and policy and legal databases, and 209 policies were finally identified as the sample for analysis after screening and integration. A three-dimensional framework was constructed from the perspective of policy tools, combined with stakeholder and lifecycle theories, and the content of the policy texts was coded and quantitatively analyzed according to this framework.
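
As a rough illustration of the quantitative step (the codes below are invented, not taken from the study), coded policy passages can be cross-tabulated to expose imbalances across the framework's dimensions:

import pandas as pd

# Each row is one coded policy passage; the categories are hypothetical examples.
coded_segments = pd.DataFrame({
    "policy_tool":     ["supply-side", "supply-side", "environmental-side",
                        "demand-side", "environmental-side"],
    "lifecycle_stage": ["collection", "sharing", "preservation",
                        "sharing", "collection"],
})

# Count coded passages per (policy tool, lifecycle stage) cell.
print(pd.crosstab(coded_segments["policy_tool"], coded_segments["lifecycle_stage"]))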

Findings

China's scientific data management policies can be divided into four stages in chronological order: infancy, preliminary exploration, comprehensive promotion and key implementation. The policies use a combination of three types of policy tools: supply-side, environmental-side and demand-side, involving multiple stakeholders and covering all stages of the lifecycle. However, the policy tools and their application to stakeholders and lifecycle stages are imbalanced. Future scientific data management policy should strengthen the balance of policy tools, promote the participation of multiple subjects and focus on supervision of the whole lifecycle.

Originality/value

This paper constructs a three-dimensional analytical framework and uses content analysis to quantitatively analyze scientific data management policy texts, extending the research perspective and research content in the field of scientific data management. The study identifies policy focuses and proposes several strategies that will help optimize the scientific data management policy.

Details

Aslib Journal of Information Management, vol. 76 no. 2
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 10 February 2012

Jake Carlson

Abstract

Purpose

As libraries become more involved in curating research data, reference librarians will need to be trained in conducting data interviews with researchers to better understand their data and associated needs. This article seeks to identify and provide definitions for the basic terms and concepts of data curation for librarians to properly frame and carry out a data interview using the Data Curation Profiles (DCP) Toolkit.

Design/methodology/approach

The DCP Toolkit is a semi‐structured interview designed to assist librarians in identifying the data curation needs of researchers. The components of the DCP Toolkit were analyzed to determine the base level of knowledge needed for librarians to conduct effective data interviews. Specific concepts, definitions, and examples were sought through a review of articles, case studies, practitioner resources and from the experiences of the Purdue University Libraries.

Findings

Data curation concepts and terminology are not yet well‐defined and often vary across, or even within fields of study. This research informed the development of a workshop to train librarians in using the DCP Toolkit. The definitions and concepts addressed in the workshop include: data, data set, data lifecycle, data curation, data sharing, and roles for reference librarians.

Practical implications

Conducting a data interview can be a daunting task given the complexity of data curation and the lack of shared definitions. Practical tools and training are needed to help librarians develop capacity in data curation.

Originality/value

This article provides practical information for public service librarians to help them conceptualize and conduct a data interview with researchers.

Details

Reference Services Review, vol. 40 no. 1
Type: Research Article
ISSN: 0090-7324

Article
Publication date: 15 November 2018

Hsia-Ching Chang, Chen-Ya Wang and Suliman Hawamdeh

Abstract

Purpose

This paper aims to investigate emerging trends in the data analytics and knowledge management (KM) job market by using the knowledge, skills and abilities (KSA) framework. The findings from the study provide insights into curriculum development and academic program design.

Design/methodology/approach

This study traced and retrieved job ads on LinkedIn to understand how data analytics and KM interplay in terms of job functions, knowledge, skills and abilities required for jobs, as well as career progression. Conducting content analysis using text analytics and multiple correspondence analysis, this paper extends the framework of KSA proposed by Cegielski and Jones‐Farmer to the field of data analytics and KM.

Findings

Using content analysis, the study analyzes the requisite KSA that connect analytics to KM from the job demand perspective. While Kruskal–Wallis tests assist in examining the relationships between different types of KSA and company characteristics, multiple correspondence analysis (MCA) aids in reducing dimensions and representing the KSA data points in two-dimensional space to identify potential associations between levels of categorical variables. The results from the Kruskal–Wallis tests indicate a significant relationship between job experience levels and KSA. The MCA diagrams illustrate key distinctions between hard and soft data skills across different experience levels.
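
A minimal sketch of the kind of test described here, with invented column names and toy data rather than the paper's dataset: skill counts coded from job ads are grouped by experience level and compared with a Kruskal–Wallis test.

import pandas as pd
from scipy.stats import kruskal

# Toy job-ad data: one row per ad, a coded count of hard-skill mentions,
# and the advertised experience level (all values are illustrative).
ads = pd.DataFrame({
    "experience_level": ["entry", "entry", "mid", "mid", "senior", "senior"],
    "hard_skill_count": [2, 3, 5, 4, 7, 6],
})

# Group the skill counts by experience level and run the omnibus test.
groups = [g["hard_skill_count"].to_numpy() for _, g in ads.groupby("experience_level")]
stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3f}")

# A dimensionality-reduction step such as multiple correspondence analysis
# (for example via a third-party library like prince) could then map the
# categorical KSA variables into two dimensions for visual comparison.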

Practical implications

The practical implications of the study are two-fold. First, the extended KSA framework can guide KM professionals with their career planning toward data analytics. Second, the findings can inform academic institutions with regard to broadening and refining their data analytics or KM curricula.

Originality/value

This paper is one of the first studies to investigate the connection between data analytics and KM from the job demand perspective. It contributes to the ongoing discussion and provides insights into curriculum development and academic program design.

Details

Journal of Knowledge Management, vol. 23 no. 4
Type: Research Article
ISSN: 1367-3270

Article
Publication date: 18 January 2023

Siti Wahida Amanullah and A. Abrizah

Abstract

Purpose

The debate about academic librarians’ roles in research data management (RDM) services is currently relevant, especially in the context of making research data findable, accessible, interoperable and reusable. This study aims to explore the RDM services offered by Malaysian academic libraries and their implementation progress, based on librarians’ practices and roles.

Design/methodology/approach

This descriptive study involves three sequential forms of data collection: a website analysis of 20 academic libraries relating to RDM services, training and policy; an online survey of the academic libraries’ RDM implementation progress; and semi-structured interviews with three academic librarians to gauge their practices and roles in RDM services.

Findings

Malaysian academic libraries provide RDM services built on related or basic skills, such as bibliographic management tools, institutional repositories and open research data, rather than higher-impact services that support RDM, such as data analysis, data citation, data mining or data visualisation. Although the librarians were aware of RDM and of their roles in research data services, the implementation of RDM services has not yet progressed far enough to support the main RDM elements.

Practical implications

This study maps the current landscape of RDM service areas and the types of services that the libraries are delivering well. The list of services can be used as best practices or strategies to be applied within Malaysian academic libraries.

Originality/value

This study highlights the gaps in RDM services in Malaysian academic libraries. To the best of the authors’ knowledge, this is the first study in Malaysia to articulate the case of RDM services in academic libraries, and it paves the way for further research.
