Search results

1 – 10 of 294
Article
Publication date: 28 June 2023

Javaid Ahmad Wani, Taseef Ayub Sofi, Ishrat Ayub Sofi and Shabir Ahmad Ganaie


Abstract

Purpose

Open-access repositories (OARs) are essential for openly disseminating intellectual knowledge on the internet and providing free access to it. The current study aims to evaluate the growth and development of OARs in the field of technology by investigating several characteristics such as coverage, OA policies, software type, content type, yearly growth, repository type and geographic contribution.

Design/methodology/approach

The directory of OARs acts as the source for data harvesting, which provides a quality-assured list of OARs across the globe.

Findings

The study found that 125 nations contributed a total of 4,045 repositories in the field of technology, with the USA leading the list with the most repositories. Most repositories were operated by institutions with multidisciplinary approaches. DSpace and EPrints were the preferred repository software. The content most frequently uploaded by contributors was “research articles” and “electronic theses and dissertations”.

Research limitations/implications

The study is limited to the subject area technology as listed in OpenDOAR; therefore, the results may differ in other subject areas.

Practical implications

The work can benefit researchers across disciplines, and interested researchers can take this study as a base for evaluating online repositories. Moreover, policymakers and repository managers could also benefit from this study.

Originality/value

The study is the first of its kind, to the best of the authors’ knowledge, to investigate open-access repositories in the subject area of technology.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247


Article
Publication date: 30 January 2024

Li Si and Xianrui Liu


Abstract

Purpose

This research aims to explore the research data ethics governance framework and collaborative network in order to optimize research data ethics governance practices, to balance the relationship between data development and utilization, open sharing and data security, and to reduce the ethical risks that may arise from data sharing and utilization.

Design/methodology/approach

This study explores the framework and collaborative network of research data ethics policies, using the UK as an example. A total of 78 policies from the UK government, universities, research institutions, funding agencies, publishers, databases, libraries and third-party organizations were obtained. Adopting grounded theory (GT) and social network analysis (SNA), NVivo 12 was used to analyze these samples and summarize the research data ethics governance framework, while UCINET and NetDraw were used to reveal the collaborative networks in the policies.
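As a minimal, hypothetical sketch of the SNA step (the actor names and edges below are invented for illustration and are not drawn from the 78 UK policies), degree centrality can be computed over a co-occurrence network of governance subjects, where two actors are linked when they appear in the same policy document:

```python
from collections import defaultdict

# Illustrative co-occurrence edges between governance subjects
# (hypothetical data, not the study's actual network).
edges = [
    ("research institution", "funding agency"),
    ("research institution", "university"),
    ("research institution", "publisher"),
    ("research institution", "library"),
    ("funding agency", "publisher"),
    ("university", "library"),
]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Normalized degree centrality: degree / (n - 1), the measure
# commonly reported by tools such as UCINET.
n = len(adjacency)
centrality = {actor: len(neigh) / (n - 1) for actor, neigh in adjacency.items()}

for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor}: {score:.2f}")
```

In this toy network the research institution touches every other actor, which mirrors the paper's finding that research institutions occupy the central position in the collaborative network.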

Findings

Results indicate that the framework covers governance context, subject and measure. The governance context contains a context description and an analysis of data ethics issues. Governance subject consists of defining subjects and facilitating their collaboration. Governance measure includes governance guidance and ethics governance initiatives across the data lifecycle. The collaborative network indicates that research institutions play a central role in ethics governance. The core of the governance content comprises ethics governance initiatives, governance guidance and the governance context description.

Research limitations/implications

This research provides new insights for policy analysis by combining GT and SNA methods. Research data ethics and its governance are conceptualized to complement data governance and research ethics theory.

Practical implications

A research data ethics governance framework and collaborative network are revealed, and actionable guidance for addressing essential aspects of research data ethics and multiple subjects to confer their functions in collaborative governance is provided.

Originality/value

This study analyzes policy text using qualitative and quantitative methods, ensuring fine-grained content profiling and improving policy research. A typical research data ethics governance framework is revealed. Various stakeholders' roles and priorities in collaborative governance are explored. These contribute to improving governance policies and governance levels in both theory and practice.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 19 April 2023

Aasif Mohammad Khan, Fayaz Ahmad Loan, Umer Yousuf Parray and Sozia Rashid


Abstract

Purpose

Data sharing is increasingly being recognized as an essential component of scholarly research and publishing. Sharing data improves results and propels research and discovery forward. Given the importance of data sharing, the purpose of the study is to unveil the present scenario of research data repositories (RDR) and to shed light on strategies and tactics followed by different countries for the efficient organization and optimal use of scientific literature.

Design/methodology/approach

The data for the study were collected from the re3data registry (re3data.org), which covers RDR from different academic disciplines and provides the filtration options “Search” and “Browse” to access the repositories. Using these options, the researchers collected repository metadata, i.e. country-wise contribution, content type, repository language interface, software usage, metadata standards and data access type. The data were then exported to Google Sheets for analysis and visualization.
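As an illustrative sketch of this kind of metadata harvesting (the XML fragment below is made up and does not follow the actual re3data schema; the element names are assumptions), harvested repository records can be tallied by country and software with the standard library alone:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical export of repository metadata records
# (illustrative structure only, not the real re3data schema).
sample = """
<repositories>
  <repository><country>USA</country><software>DataVerse</software></repository>
  <repository><country>Germany</country><software>DSpace</software></repository>
  <repository><country>USA</country><software>DataVerse</software></repository>
  <repository><country>India</country><software>MySQL</software></repository>
</repositories>
"""

root = ET.fromstring(sample)
countries = Counter(r.findtext("country") for r in root.iter("repository"))
software = Counter(r.findtext("software") for r in root.iter("repository"))

print(countries.most_common(1))  # [('USA', 2)]
print(software.most_common(1))   # [('DataVerse', 2)]
```

In practice the counts would be exported to a spreadsheet for visualization, as the authors did with Google Sheets.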

Findings

The re3data registry holds a rich and diverse collection of data repositories from the majority of countries all over the world. It is revealed that English is the dominant language, and the most widely used software for creating data repositories is “DataVerse”, followed by “DSpace” and “MySQL”. The most frequently used metadata standards are “Dublin Core” and the “DataCite Metadata Schema”. The majority of repositories are open, more than half are “disciplinary” in nature, and the most significant data sources include “scientific and statistical data” followed by “standard office documents”.

Research limitations/implications

The main limitation of the study is that the findings are based on the data collected through a single registry of repositories, and only a few characteristic features were investigated.

Originality/value

The study will benefit countries with few or no data repositories by highlighting the tools and techniques used by the top repositories to ensure long-term storage of, and accessibility to, research data. In addition, the study provides a global overview of RDR and their characteristic features.

Details

Information Discovery and Delivery, vol. 52 no. 1
Type: Research Article
ISSN: 2398-6247


Article
Publication date: 25 January 2024

Besiki Stvilia and Dong Joon Lee


Abstract

Purpose

This study addresses the need for a theory-guided, rich, descriptive account of research data repositories' (RDRs) understanding of data quality and the structures of their data quality assurance (DQA) activities. Its findings can help develop operational DQA models and best practice guides and identify opportunities for innovation in the DQA activities.

Design/methodology/approach

The study analyzed 122 data repositories' applications for Core Trustworthy Data Repositories certification, interview transcripts of 32 curators and repository managers, and the data curation-related webpages of their repository websites. The combined dataset represented 146 unique RDRs. The study was guided by a theoretical framework comprising activity theory and an information quality evaluation framework.

Findings

The study provided a theory-based examination of the DQA practices of RDRs summarized as a conceptual model. The authors identified three DQA activities: evaluation, intervention and communication, along with their structures, including activity motivations, roles played, and mediating tools, rules and standards. When defining data quality, study participants went beyond the traditional definition of data quality and referenced seven facets of ethical and effective information systems in addition to data quality. Furthermore, the participants and RDRs referenced 13 dimensions in their DQA models. The study revealed that DQA activities were prioritized by data value, level of quality, available expertise, cost and funding incentives.

Practical implications

The study's findings can inform the design and construction of digital research data curation infrastructure components on university campuses that aim to provide access not just to big data but also to trustworthy data. Communities of practice focused on repositories and archives could consider adding FAIR operationalizations, extensions and metrics focused on data quality. The availability of such metrics and associated measurements can help reusers determine whether they can trust and reuse a particular dataset. The findings of this study can help to develop such data quality assessment metrics and intervention strategies in a sound and systematic way.

Originality/value

To the best of the authors' knowledge, this paper is the first data quality theory guided examination of DQA practices in RDRs.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418


Article
Publication date: 20 November 2023

Laksmi Laksmi, Muhammad Fadly Suhendra, Shamila Mohamed Shuhidan and Umanto Umanto


Abstract

Purpose

This study aims to identify the readiness of institutional repositories in Indonesia to implement digital humanities (DH) data curation. Data curation is a method of managing research data that maintains the data’s accuracy and makes it available for reuse. It requires controlled data management.

Design/methodology/approach

The study uses a qualitative approach. Data collection was carried out through a focus group discussion in September–October 2022, interviews and document analysis. The informants came from four institutions in Indonesia.

Findings

The findings reveal that the national research repository has implemented data curation, albeit not optimally. Within the case study, one of the university repositories diligently curates its humanities data and has established networks extending to various ASEAN countries. Both the national archive repository and the other university repository have implemented rudimentary data curation practices but have not prioritized them. In conclusion, the readiness of the national research repository and the first university repository stands at the high-capacity stage, while the national archive repository and the other university repository are at the established and early stages of data curation, respectively.

Research limitations/implications

This study examined only four repositories due to time constraints. Nonetheless, the four institutions were able to provide a comprehensive picture of their readiness for DH data curation management.

Practical implications

This study provides insight into strategies for developing DH data curation activities in institutional repositories. It also highlights the need for professional development for curators so they can devise and implement stronger ownership and data privacy policies to support a data-driven research agenda.

Originality/value

This study describes the preparations that must be considered by institutional repositories in the development of DH data curation activities.

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam


Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 20 November 2023

Nkeiru A. Emezie, Scholastica A.J. Chukwu, Ngozi M. Nwaohiri, Nancy Emerole and Ijeoma I. Bernard


Abstract

Purpose

University intellectual outputs such as theses and dissertations are valuable resources containing rigorous research results. Library staff, who are key players in promoting intellectual output through institutional repositories, require skills to promote content visibility, create wider outreach and facilitate easy access and use of these resources. This study aims to determine the skills of library staff to enhance the visibility of intellectual output in federal university libraries in southeast Nigeria.

Design/methodology/approach

A survey research design was adopted for the study. A questionnaire was used to obtain responses from library staff on the extent of their computer skills and their abilities in digital conversion, metadata creation and preservation of digital content.

Findings

Library staff at the university libraries had high skills in basic computer operations. They had moderate skills in digital conversion, preservation and storage. However, they had low skills in metadata creation.

Practical implications

The study has implications for addressing the digital skills and professional expertise of library staff, especially as concerns metadata creation, digital conversion, preservation and storage. It also has implications for university management to prioritize the training of library staff in order to increase the visibility of indigenous resources and university Web ranking.

Originality/value

This study serves as a lens to identify library staff skill gaps in many critical areas that require expertise and stimulate conscious effort toward developing adequate skills for effective digital information provision. It sheds light on the challenges that many Nigerian university libraries face in their pursuit of global visibility and university Web ranking.

Details

Digital Library Perspectives, vol. 40 no. 1
Type: Research Article
ISSN: 2059-5816


Article
Publication date: 29 January 2024

Klaudia Jaskula, Dimosthenis Kifokeris, Eleni Papadonikolaki and Dimitrios Rovas


Abstract

Purpose

Information management workflow in building information modelling (BIM)-based collaboration is based on using a common data environment (CDE). The basic premise of a CDE is exposing all relevant data as a single source of truth and facilitating continuous collaboration between stakeholders. A multitude of tools can be used as a CDE; however, it is not clear how the tools are used or whether they fulfil the users’ needs. Therefore, this paper aims to investigate current practices of using CDEs for information management during the whole built asset’s life cycle, through a state-of-the-art literature review and an empirical study.

Design/methodology/approach

Literature data were collected according to the PRISMA 2020 guideline for reporting systematic reviews. The review includes 46 documents, on which a bibliometric and thematic analysis was conducted to identify the main challenges of digital information management. To understand current practice and the views of stakeholders using CDEs in their work, the authors adopted an empirical approach that included semi-structured interviews with 15 BIM experts.

Findings

The results indicate that major challenges of CDE adoption are project complexity and the simultaneous use of multiple CDEs, which leads to data accountability, transparency and reliability issues. To tackle those challenges, the use of novel technologies such as blockchain in CDE development could be further investigated.

Originality/value

The research explores the major challenges in the practical implementation of CDEs for information management. To the best of the authors’ knowledge, this is the first study on this topic combining a systematic literature review and fieldwork.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175


Article
Publication date: 15 March 2024

Florian Rupp, Benjamin Schnabel and Kai Eckert


Abstract

Purpose

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.

Design/methodology/approach

In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) providing provenance information, (2) maintaining backwards compatibility for existing models, and (3) reducing the complexity of a data model. The authors present two scenarios in which they use the meta-level to extend a data model with meta-information.

Findings

The authors present three abstract patterns for actively using the meta-level in data modeling. The authors showcase the implementation of the meta-level through two scenarios from their research project: (1) the authors introduce a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, such as for reporting errors or adding supplementary information; (2) the authors demonstrate how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
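As a minimal, tool-agnostic sketch of the statements-about-statements idea (the `ex:` names are invented for illustration, and this is not tied to any particular RDF library), a quoted triple can be modeled as a tuple that itself serves as the subject of an annotation, then serialized in Turtle-star's `<< s p o >>` syntax:

```python
def term(t):
    """Render an RDF term; nested tuples become Turtle-star quoted triples."""
    if isinstance(t, tuple):
        s, p, o = t
        return f"<< {term(s)} {term(p)} {term(o)} >>"
    return str(t)

def turtle_star(triple):
    """Serialize a triple (possibly about another triple) as Turtle-star."""
    s, p, o = triple
    return f"{term(s)} {term(p)} {term(o)} ."

# A base statement, and a statement *about* that statement,
# e.g. the error-reporting annotation workflow described above.
base = ("ex:record42", "ex:createdBy", "ex:aliceSmith")
note = (base, "ex:comment", '"possibly misattributed"')

print(turtle_star(note))
# << ex:record42 ex:createdBy ex:aliceSmith >> ex:comment "possibly misattributed" .
```

The annotation never asserts the base triple itself; it only talks about it, which is what distinguishes RDF-star quoted triples from plain reification.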

Practical implications

Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.

Originality/value

With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 11 May 2023

Helen Crompton, Mildred V. Jones, Yaser Sendi, Maram Aizaz, Katherina Nako, Ricardo Randall and Eric Weisel



Abstract

Purpose

The purpose of this study is to determine what technological strategies were used within each of the phases of the ADDIE framework when developing content for professional training. The study also examined the affordances of those technologies in training.

Design/methodology/approach

A PRISMA systematic review methodology (Moher et al., 2015) was utilized to answer the four questions guiding this study. Specifically, the PRISMA extension Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Protocols (PRISMA-P; Moher et al., 2015) was used to direct each stage of the research, from the literature review to the conclusion. In addition, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) principles (Liberati et al., 2009) were used to guide the article selection process.

Findings

The findings reveal that the majority of the studies were in healthcare (36%) and education (24%) and used an online format (65%). ADDIE was used with technology across a wide geographic distribution. The coding of the benefits of technology use in developing the training solutions revealed four trends: 1) usability, 2) learning approaches, 3) learner experience and 4) financial.

Research limitations/implications

This systematic review examined only articles published in English, which may bias the findings toward a Western understanding of how technology is used within the ADDIE framework. Furthermore, the study examined only peer-reviewed academic articles from scholarly journals and conferences. While this provided a high level of assurance about the quality of the studies, it does not include reports directly from training providers and other organizations.

Practical implications

These findings can be used as a springboard for training providers, scholars, funders and practitioners, providing rigorous insight into how technology has been used within the ADDIE framework, the types of technology, and the benefits of using technology. This insight can be used when designing future training solutions with a better understanding of how technology can support learning.

Social implications

This study provides insight into the uses of technology in training. Many of these findings and uses of technology within ADDIE can also transfer to other aspects of society.

Originality/value

This study is unique in that it provides the scholarly community with the first systematic review to examine what technological strategies were used within each of the phases of the ADDIE structure and how these technologies provided benefits to developing a training solution.

Details

European Journal of Training and Development, vol. 48 no. 3/4
Type: Research Article
ISSN: 2046-9012
