Search results
1 – 10 of 76
Fabrice Nzepang, Siméon Serge Atangana and Saturnin Bertrand Nguenda Anya
Abstract
Purpose
This work aims to assess the effects of information and communication technology (ICT) on inequalities in access to professional training (PT) in Cameroon.
Design/methodology/approach
This study used data from the fourth Cameroonian Household Survey (ECAM 4), the concentration index (CI) calculations and the Wagstaff et al. (2003) decomposition.
Findings
The preliminary results of the CI calculations by groups of individuals reveal significant inequalities in favour of the poor. This is the case for all groups of individuals who use ICT tools, namely radio, internet, telephone and television. The Wagstaff et al. (2003) decomposition further reveals that an equitable distribution of income between those who use and those who do not use the telephone, radio and internet reduces inequalities in access to PT in favour of the poor.
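For illustration, a concentration index of the kind used above can be computed with the convenient fractional-rank formula C = (2/(nμ)) Σ h_i r_i − 1, where individuals are ranked from poorest to richest. This is a minimal sketch, not the authors' code, and the data are hypothetical:

```python
def concentration_index(h, income):
    """Concentration index via the fractional-rank formula:
    C = (2 / (n * mean(h))) * sum(h_i * r_i) - 1,
    where r_i = (i + 0.5) / n is the fractional income rank
    (individuals sorted by income, poorest first).
    C = 0 means equality; C > 0 means the outcome is
    concentrated among the rich, C < 0 among the poor."""
    n = len(h)
    # Sort the outcome variable by income, poorest first
    h_sorted = [x for _, x in sorted(zip(income, h))]
    mu = sum(h_sorted) / n
    ranks = [(i + 0.5) / n for i in range(n)]
    return (2 / (n * mu)) * sum(x * r for x, r in zip(h_sorted, ranks)) - 1

# Access to training concentrated among the richer half -> positive index
print(concentration_index([0, 0, 1, 1], [10, 20, 30, 40]))  # → 0.5
```

The Wagstaff et al. (2003) decomposition then splits such an index into the contributions of its covariates; the sketch above only covers the index itself.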
Originality/value
Despite the wealth of literature devoted to inequalities in access to education, PT has received only marginal attention. In Cameroon, the literature on inequalities in access to PT is almost non-existent, probably because of low interest within the scientific community. Yet PT is a tool for combating unemployment, particularly in economies with a large informal sector, insofar as the proportion of unemployed and inactive people is very low among those who have completed a PT course. Moreover, studies on the effects of ICT on inequalities in access to PT remain rare in the literature.
Aasif Mohammad Khan, Fayaz Ahmad Loan, Umer Yousuf Parray and Sozia Rashid
Abstract
Purpose
Data sharing is increasingly recognized as an essential component of scholarly research and publishing. Sharing data improves results and propels research and discovery forward. Given the importance of data sharing, the purpose of the study is to unveil the present scenario of research data repositories (RDR) and to shed light on the strategies and tactics followed by different countries for the efficient organization and optimal use of scientific literature.
Design/methodology/approach
The data for the study were collected from the Registry of Research Data Repositories (re3data.org), which covers RDR from different academic disciplines and provides the filtration options "Search" and "Browse" to access the repositories. Using these options, the researchers collected repository metadata, i.e. country-wise contribution, content type, repository language interface, software usage, metadata standards and data access type. The data were then exported to Google Sheets for analysis and visualization.
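The tallying step described above (country-wise contribution, software usage, etc.) amounts to frequency counts over harvested metadata records. A minimal sketch with hypothetical records, not the authors' workflow:

```python
from collections import Counter

# Hypothetical repository metadata records, mirroring fields
# harvested from re3data (country, software, access type)
records = [
    {"country": "USA", "software": "DataVerse", "access": "open"},
    {"country": "Germany", "software": "DSpace", "access": "open"},
    {"country": "USA", "software": "MySQL", "access": "restricted"},
]

# Country-wise contribution and software usage, as tallied in the study
by_country = Counter(r["country"] for r in records)
by_software = Counter(r["software"] for r in records)
print(by_country.most_common(1))  # → [('USA', 2)]
```

The resulting counts can be exported as rows (e.g. to Google Sheets, as in the study) for visualization.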
Findings
The re3data registry holds a rich and diverse collection of data repositories from the majority of countries worldwide. English is the dominant language, and the most widely used software for creating data repositories is "DataVerse", followed by "DSpace" and "MySQL". The most frequently used metadata standards are "Dublin Core" and the "DataCite metadata schema". The majority of repositories are open, more than half are "disciplinary" in nature, and the most significant data sources are "scientific and statistical data" followed by "standard office documents".
Research limitations/implications
The main limitation of the study is that the findings are based on the data collected through a single registry of repositories, and only a few characteristic features were investigated.
Originality/value
The study will benefit all countries with a small number of data repositories or no repositories at all, with tools and techniques used by the top repositories to ensure long-term storage and accessibility to research data. In addition to this, the study provides a global overview of RDR and its characteristic features.
Rafiq Ahmad and Muhammad Rafiq
Abstract
Purpose
The purpose of this study is to present some critical digital preservation strategies that are important for the preservation of digital information.
Design/methodology/approach
Drawing on a review of related studies, this paper presents critical digital preservation techniques that are vital for small libraries to ensure the accessibility of digital collections of enduring value.
Findings
This paper synthesises the major digital preservation strategies available to small libraries, through which they can overcome financial, technological, expertise and policy constraints to implement a digital preservation programme.
Originality/value
This paper covers the major strategies collated during the literature review and instrumentation process of the first author's PhD study on this topic.
Heini Utunen, Ranil Appuhamy, Melissa Attias, Ngouille Ndiaye, Richelle George, Elham Arabi and Anna Tokar
Abstract
Purpose
OpenWHO is the World Health Organization's online learning platform that was launched in 2017. The COVID-19 pandemic led to massive growth in the number of courses, enrolments and reach of the platform. The platform is built on a stable and scalable basis that can host a large volume of learners. The authors aim to identify key factors that led to this growth.
Design/methodology/approach
In this research paper, the authors examined OpenWHO metadata, end-of-course surveys and internal processes using a quantitative approach.
Findings
OpenWHO metadata showed that the platform has hosted over 190 health courses in 65 languages and over seven million course enrolments. Since the onset of the pandemic, more women, older people and people from middle-income countries have accessed courses than before. Analysis of the platform metadata and the course production process identified several key factors behind the platform's growth. First, OpenWHO has a standardised course production pathway that ensures efficiency, consistency and quality. Second, providing courses in different languages increased its reach to a variety of populations throughout the world; this multi-language translation is achieved through a network of translators and an automated system that ensures the efficient translation of learning products. Lastly, access for learners with disabilities was promoted by optimising accessibility in course production. Analysis of learner feedback surveys for selected courses showed that the courses were well received: learners found it useful that courses were self-paced and flexible, and preferred learning methods included videos, downloadable documents, slides, quizzes and learning exercises.
Originality/value
Lessons learnt from the WHO's learning response will help prepare researchers for the next health emergency to ensure timely, equitable access to quality health knowledge for everyone. Findings of this study will provide valuable insights for educators, policymakers and researchers in the field who intend to use online learning to optimise knowledge acquisition and performance.
Archana S.N. and Padmakumar P.K.
Abstract
Purpose
The purpose of this study was to understand the landscape of Indian research data repositories (RDRs) indexed in the re3data.org. The study analysed the metadata elements of Indian RDRs to identify their disciplinary orientations, typology, standards adopted, foreign collaborations, etc. The study ascertained the current status of the Indian RDRs by visiting their respective websites and tried to identify and map the exact disciplinary orientation of each RDR.
Design/methodology/approach
The study used “content analysis” of the metadata elements extracted from re3data.org along with the information analysis of the respective websites of the registered RDRs.
Findings
The study identified that only 80% of the Indian RDRs listed in re3data.org are currently active. Most Indian RDRs are hosted by the central and state governments and are almost equally distributed among the Life Sciences, Natural Sciences and Social Sciences domains. The data provided by re3data.org for the Indian RDRs are neither complete nor up to date.
Practical implications
The findings indicate the presence of a considerable number of inactive RDRs in re3data.org. The study suggests using a revised version of the DFG subject classification scheme, or adopting a standard classification scheme, for subject indexing.
Originality/value
To the best of the authors’ knowledge, this study is the first of its kind that critically analysed the metadata values extracted and moved further to identify the current status of Indian RDRs.
Judit Gárdos, Julia Egyed-Gergely, Anna Horváth, Balázs Pataki, Roza Vajda and András Micsik
Abstract
Purpose
The present study is about generating metadata to enhance thematic transparency and facilitate research on interview collections at the Research Documentation Centre, Centre for Social Sciences (TK KDK) in Budapest. It explores the use of artificial intelligence (AI) in producing, managing and processing social science data and its potential to generate useful metadata to describe the contents of such archives on a large scale.
Design/methodology/approach
The authors combined manual and automated/semi-automated methods of metadata development and curation. The authors developed a suitable domain-oriented taxonomy to classify a large text corpus of semi-structured interviews. To this end, the authors adapted the European Language Social Science Thesaurus (ELSST) to produce a concise, hierarchical structure of topics relevant in social sciences. The authors identified and tested the most promising natural language processing (NLP) tools supporting the Hungarian language. The results of manual and machine coding will be presented in a user interface.
Findings
The study describes how an international social scientific taxonomy can be adapted to a specific local setting and tailored to be used by automated NLP tools. The authors show the potential and limitations of existing and new NLP methods for thematic assignment. The current possibilities of multi-label classification in social scientific metadata assignment are discussed, i.e. the problem of automated selection of relevant labels from a large pool.
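One common strategy for the multi-label selection problem described above is to keep only labels whose classifier score clears a threshold, capped at a top-k to avoid over-assignment from a large pool. A minimal sketch, with hypothetical scores and topic labels, not the authors' implementation:

```python
def select_labels(scores, threshold=0.5, max_labels=3):
    """Multi-label selection: keep labels scoring at or above a
    threshold, capped at the top-k highest-scoring labels, so an
    interview is not flooded with marginally relevant topics."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    chosen = [label for label, score in ranked if score >= threshold]
    return chosen[:max_labels]

# Hypothetical classifier scores for one interview transcript
scores = {"migration": 0.91, "family": 0.72, "labour": 0.48, "religion": 0.55}
print(select_labels(scores))  # → ['migration', 'family', 'religion']
```

Threshold and cap are tuning choices; in practice they would be validated against the manually annotated interviews mentioned above.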
Originality/value
Interview materials have not yet been used for building manually annotated training datasets for automated indexing of scientifically relevant topics in a data repository. Comparing various automated-indexing methods, this study shows a possible implementation of a researcher tool supporting custom visualizations and the faceted search of interview collections.
Abstract
Purpose
This study aims to collect distributed knowledge organization systems (KOSs) from various domains, enrich each with meta information and link them to the multilingual KOS registry, facilitating integrated search alongside KOSs from various languages and regions.
Design/methodology/approach
This research involved collecting and organizing KOS information in three primary steps. The initial phase involved finding KOSs from Web search results, supplemented by the Korea ON-line E-Procurement System (KONEPS) and the National R&D Integrated Notification Service. The KOSs were then enriched by structuring contextual meta information using Basic Register of Thesauri, Ontologies and Classifications (BARTOC) metadata elements, and a dedicated MediaWiki page was established for each. Finally, the KOSs were linked to the multilingual KOS registry, BARTOC, ensuring seamless integration with KOSs from various languages and regions and creating connections between each registry entry and its associated KOS wiki page.
Findings
The research findings revealed several insights: (1) the importance of a stable source for collecting KOSs: no national body currently oversees KOS registration, underscoring the need for a systematic approach to collecting dispersed KOSs; for Korean KOSs (K-KOSs), KONEPS and the National R&D Integrated Notification Service are effective data sources. (2) The importance of enhanced metadata: merely collecting KOSs was not enough; enhanced metadata bridges access gaps, and dedicated wiki pages aid user identification and understanding. (3) Observations from multilingual registry uploads: when adding KOSs to a multilingual registry, similarities were observed across languages and regions; recognizing this, the K-KOSs were linked with their international counterparts, fostering potential global collaboration.
Research limitations/implications
Due to the absence of a dedicated KOS registry agency, the study might have missed KOSs from certain fields or potentially over-collected from others. Furthermore, this study primarily focused on K-KOSs and their integration into the BARTOC registry, which might influence the methods and perspectives on collecting and establishing links among analogous KOSs in the registry.
Originality/value
This research pursued a stable method to detect KOS development and revisions across various fields. To facilitate this, the integrated e-procurement and R&D notification systems were used, and meta information, including MediaWiki pages, was added to aid the identification and understanding of KOSs. Furthermore, link information was provided between the BARTOC registry and the Korean KOS websites and MediaWiki pages.
Gema Bueno de la Fuente, Carmen Agustín-Lacruz, Mariângela Spotti Lopes Fujita and Ana Lúcia Terra
Abstract
Purpose
The purpose of this study is to analyse the recommendations on knowledge organisation found in the guidelines, policies and procedure manuals of a sample of institutional repositories and networks within the Latin American area, and to observe the extent to which international guidelines are followed.
Design/methodology/approach
This is an exploratory and descriptive study of repositories' professional documents, comprising four steps: definition of a convenience sample; development of a data codebook; coding of the data; and analysis of the data and drawing of conclusions. The convenience sample includes representative sources at three levels: local institutional repositories, national aggregators, and international networks and aggregators. The codebook gathers information from the repository sample, such as openly available institutional rules and procedure manuals, or recommendations on the use of controlled vocabularies.
Findings
The results indicate that, at the local repository level, the use of controlled vocabularies is not regulated, leaving the choice of terms to the authors' discretion. This results in sets of unstructured keywords rather than standardised terms, mixing subject terms with other authorities for persons, institutions or places. National aggregators do not regulate these issues either, limiting themselves to pointing to international guidelines and policies, which simply recommend the use of controlled vocabularies and of URIs to facilitate interoperability.
Originality/value
The originality of this study lies in identifying how the principles of knowledge organisation are effectively applied by institutional repositories, at local, national and international levels.
Romildo Silva, Rui Pedro Marques and Helena Inácio
Abstract
Purpose
The purpose of this study is to identify the possible efficiency gains in using tokenization for the execution of public expenditure on governmental investments.
Design/methodology/approach
Using the design science research methodology, this exploratory research produced a tokenized prototype on the blockchain through the Ernst & Young OpsChain traceability solution, allowing automated processes in the stages of public expenditure. A focus group composed of public sector auditors evaluated the possibility of improving the quality of information available in the audited entities, where the tokens created represent and register the actions of public agents on the Polygon blockchain.
Findings
The consensus of the experts in the focus group indicated that tokenization could improve the quality of information, since recording the activities of public agents in the tokens' metadata at each stage of expenditure execution gives audited entities the advantages of information recorded on the blockchain, ranked as follows: first the immutability of audited data, followed by reliability, transparency, accessibility and efficiency of data structures.
Originality/value
This research makes an empirical contribution on the real use of tokenization in blockchain technology in the public sector, through a value chain in which tokens were created and moved between the wallets of public agents to represent, register and track operations regarding public expenditure execution.