Search results

1–10 of over 12,000
Open Access
Article
Publication date: 28 April 2022

Manuel Pedro Rodríguez Bolívar and Laura Alcaide Muñoz


Abstract

Purpose

This study aims to conduct performance and clustering analyses, with the help of the Digital Government Reference Library (DGRL) v16.6 database, examining the role of emerging technologies (ETs) in public services delivery.

Design/methodology/approach

VOSviewer and SciMAT were used to cluster and map the use of ETs in public services delivery. Collecting documents from the DGRL v16.6 database, the paper applies text mining analysis to identify key terms and trends in e-Government research regarding ETs and public services.
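As a rough illustration of the kind of term co-occurrence mapping that tools such as VOSviewer automate, consider the following minimal Python sketch; the sample documents and the link-strength threshold are illustrative assumptions, not data from the study.

```python
# Minimal sketch of term co-occurrence mapping, the kind of analysis
# VOSviewer/SciMAT automate; the sample corpus and the threshold are
# illustrative assumptions, not data from the study.
from itertools import combinations
from collections import Counter

docs = [
    {"blockchain", "public services", "trust"},
    {"artificial intelligence", "public services", "smart city"},
    {"smart city", "public services", "internet of things"},
]

# Count how often each pair of key terms appears in the same document.
cooccur = Counter()
for terms in docs:
    for a, b in combinations(sorted(terms), 2):
        cooccur[(a, b)] += 1

# Keep only pairs above a minimum link strength, as clustering tools do.
MIN_STRENGTH = 2
links = {pair: n for pair, n in cooccur.items() if n >= MIN_STRENGTH}
print(links)  # {('public services', 'smart city'): 2}
```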

Findings

The analysis indicates that all ETs are strongly linked to each other, except for blockchain technologies (owing to their disruptive nature), which indicates that ETs can therefore be seen as cumulative knowledge. In addition, taken as a whole, the findings identify four stages in the evolution of ETs and their application to public services: the “electronic administration” stage, the “technological baseline” stage, the “managerial” stage and the “disruptive technological” stage.

Practical implications

The output of the present research will help to orient policymakers in the implementation and use of ETs, evaluating the influence of these technologies on public services.

Social implications

The research helps researchers to track research trends and uncover new paths on ETs and their implementation in public services.

Originality/value

Recent research has focused on the need to implement ETs to improve public services, which could help cities improve citizens’ quality of life in urban areas. This paper contributes to expanding knowledge about ETs and their implementation in public services, identifying trends and networks in the research on these issues.

Details

Information Technology & People, vol. 37 no. 8
Type: Research Article
ISSN: 0959-3845


Article
Publication date: 6 December 2023

Qing Fan


Abstract

Purpose

The purpose of this article is to contribute to the digital development and utilization of China’s intangible cultural heritage resources. Research on the extraction and knowledge integration of intangible cultural heritage resources based on linked data is proposed to promote the standardized description of intangible cultural heritage knowledge and to realize the digital dissemination and development of intangible cultural heritage.

Design/methodology/approach

In this study, firstly, knowledge organization theory and semantic Web technology are used to describe intangible cultural heritage digital resource objects with metadata specifications. Secondly, ontology theory and technical methods are used to build a conceptual model of the intangible cultural heritage resources field and to determine the concept sets and hierarchical relationships in this field. Finally, semantic Web technology is used to establish semantic associations between items of intangible cultural heritage resource knowledge.
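A minimal sketch of this pipeline’s first and last steps, using Python’s rdflib to describe a heritage item with Dublin Core metadata and to add a semantic association; every term in the ich: namespace is a hypothetical stand-in for the paper’s ontology, not a term it defines.

```python
# Minimal rdflib sketch of the metadata description and semantic
# association steps; every term in the ich: namespace is a
# hypothetical stand-in for the paper's ontology.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import DC

ICH = Namespace("http://example.org/ich/")
g = Graph()
g.bind("dc", DC)
g.bind("ich", ICH)

item = ICH["kunqu-opera"]
g.add((item, RDF.type, ICH.PerformingArt))        # concept from the domain model
g.add((item, DC.title, Literal("Kunqu Opera")))   # Dublin Core metadata element
g.add((item, ICH.practicedIn, ICH["jiangsu-province"]))  # semantic association

print(g.serialize(format="turtle"))
```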

Findings

The study findings indicate that the knowledge organization of intangible cultural heritage resources constructed in this study provides a solution for the digital development of intangible cultural heritage in China. It also provides semantic retrieval with better knowledge granularity and helps to visualize the knowledge content of intangible cultural heritage.

Originality/value

This study provides significant theoretical and practical value for the digital development of intangible cultural heritage. Its resource description and knowledge fusion of intangible cultural heritage can help to discover the semantic relationships of intangible cultural heritage across multiple dimensions and levels.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 3 October 2023

Haklae Kim


Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
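To make the idea of a semantic query concrete, here is a minimal rdflib sketch in the spirit of the study’s approach; the sample data are invented, and the exact RiC-O class and property IRIs used (rico:Record, rico:title, rico:isAssociatedWithAgent) are assumptions that should be checked against the published ontology.

```python
# Sketch of a semantic query over a digital-archive knowledge graph;
# the data and the exact RiC-O terms are assumptions to verify
# against the published ontology.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix rico: <https://www.ica.org/standards/RiC/ontology#> .
@prefix ex:   <http://example.org/archive/> .

ex:rec1 a rico:Record ;
    rico:title "IMF bailout press release" ;
    rico:isAssociatedWithAgent ex:ministryOfFinance .
""", format="turtle")

# Find every record linked to a given agent: the kind of relationship
# query that keyword search over flat metadata cannot express.
q = """
PREFIX rico: <https://www.ica.org/standards/RiC/ontology#>
SELECT ?title WHERE {
    ?rec a rico:Record ;
         rico:title ?title ;
         rico:isAssociatedWithAgent ?agent .
}
"""
for row in g.query(q):
    print(row.title)
```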

Findings

This study introduces the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitates comprehensive descriptions of different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search.

Originality/value

Most digital archives in service in Korea make limited use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw archive data, preprocessing the data and enriching it, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with the semantic queries run over it, can supplement keyword search in conventional digital archives.

Details

The Electronic Library, vol. 42 no. 1
Type: Research Article
ISSN: 0264-0473


Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici


Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts model performance. Uncertainty inherently affects experiment measurements and is often missing from the available data sets because of its estimation cost. For similar reasons, experiments are scarce compared to other data sources. Discarding experiments because of missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
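The link prediction step can be pictured with a toy TransE-style scorer (one common family of knowledge graph embeddings; the abstract does not commit to a particular model). The tiny vocabulary and hand-made vectors below are assumptions for illustration; in practice the embeddings are trained on the experiments’ knowledge graph.

```python
# Toy TransE-style scoring for link prediction: a triple (h, r, t) is
# plausible when head + relation is close to tail in embedding space.
# Embeddings here are hand-made for illustration; in practice they
# are trained on the knowledge graph of the experiments database.
import numpy as np

entities = {"experiment_42": np.array([0.9, 0.1]),
            "low_uncertainty": np.array([1.0, 1.1]),
            "high_uncertainty": np.array([0.0, 0.2])}
relations = {"has_uncertainty": np.array([0.1, 1.0])}

def score(h, r, t):
    """Negative distance: higher means a more plausible link."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Rank candidate uncertainty values for an experiment missing one.
candidates = ["low_uncertainty", "high_uncertainty"]
best = max(candidates, key=lambda t: score("experiment_42", "has_uncertainty", t))
print(best)  # low_uncertainty
```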

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The employment of knowledge graph embedding to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Article
Publication date: 15 March 2024

Florian Rupp, Benjamin Schnabel and Kai Eckert


Abstract

Purpose

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.

Design/methodology/approach

In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) provide provenance information, (2) maintain backwards compatibility for existing models, and (3) reduce the complexity of a data model. The authors present two scenarios in which they implement the use of the meta-level to extend a data model with meta-information.

Findings

The authors present three abstract patterns for actively using the meta-level in data modeling. The authors showcase the implementation of the meta-level through two scenarios from our research project: (1) the authors introduce a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, such as for reporting errors or adding supplementary information. (2) The authors demonstrate how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
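The triple-annotation workflow in scenario (1) rests on RDF-star’s quoted triples. The snippet below shows the pattern as Turtle-star data embedded in a Python string; the identifiers are illustrative, and because RDF-star parsing support still varies across libraries, the data is printed rather than parsed.

```python
# The triple-annotation pattern, written as Turtle-star:
# << s p o >> quotes a statement so users can attach comments to it.
# Identifiers are illustrative; shown as data, not parsed, since
# RDF-star support still varies across RDF libraries.
turtle_star = """
@prefix ex: <http://example.org/> .

ex:mona-lisa ex:creator ex:da-vinci .

# A statement about the statement itself: an error report on one triple.
<< ex:mona-lisa ex:creator ex:da-vinci >>
    ex:commentedBy ex:user42 ;
    ex:comment "Please double-check this attribution." .
"""
print(turtle_star)
```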

Practical implications

Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.

Originality/value

With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 8 January 2024

Morteza Mohammadi Ostani, Jafar Ebadollah Amoughin and Mohadeseh Jalili Manaf


Abstract

Purpose

This study aims to adjust the Thesis-type properties on Schema.org using metadata models and standards (MS), namely Bibframe, electronic theses and dissertations (ETD)-MS, the Common European Research Information Format (CERIF) and Dublin Core (DC), to enrich the Thesis-type properties for better description and processing on the Web.

Design/methodology/approach

This study is applied and descriptive-analytical in nature and is based on content analysis as its method. The research population consisted of the elements and attributes of the metadata models and standards (Bibframe, ETD-MS, CERIF and DC) and the Thesis-type properties on Schema.org. The data collection tool was a researcher-made checklist, and the data collection method was structured observation.

Findings

The results show that 65 Thesis-type properties, together with the two parent levels Thing and CreativeWork on Schema.org, correspond to the elements and attributes of the related models and standards. In addition, 12 properties specific to the Thesis type are proposed for more comprehensive description and processing, and 27 properties are added to the CreativeWork type.
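For orientation, this is what a minimal Schema.org description of a thesis looks like as JSON-LD, using the existing Thesis type with properties inherited from Thing and CreativeWork; the values are invented, and the new properties the study proposes are not shown.

```python
# A minimal Schema.org description of a thesis as JSON-LD, using the
# existing Thesis type plus inherited Thing/CreativeWork properties;
# the values are illustrative only.
import json

thesis = {
    "@context": "https://schema.org",
    "@type": "Thesis",
    "name": "Ontology-based Description of Electronic Theses",  # Thing property
    "author": {"@type": "Person", "name": "A. Researcher"},     # CreativeWork property
    "datePublished": "2023-06-01",                              # CreativeWork property
    "inSupportOf": "PhD in Information Science",                # Thesis-specific property
}
print(json.dumps(thesis, indent=2))
```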

Practical implications

The enrichment and expansion of the Thesis-type properties on Schema.org is one of the practical applications of the present study, enabling more comprehensive description and processing and increasing access points and visibility for ETDs in the Web environment and in digital libraries.

Originality/value

This study offers some new Thesis-type properties and CreativeWork-level properties on Schema.org. To the best of the authors’ knowledge, this is the first time this issue has been investigated.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816


Article
Publication date: 4 July 2023

Kevin John Burnard


Abstract

Purpose

Case study research has been applied across numerous fields and provides an established methodology for exploring and understanding various research contexts. This paper aims to aid in developing methodological rigor by investigating approaches to establishing validity and reliability.

Design/methodology/approach

Based on a systematic review of relevant literature, this paper catalogs the use of validity and reliability measures within academic publications between 2008 and 2018. The review analyzes case study research across 15 peer-reviewed journals (total of 1,372 articles) and highlights the application of validity and reliability measures.

Findings

The evidence of the systematic literature review suggests that validity measures appear well established and widely reported within case study–based research articles. However, measures and test procedures related to research reliability appear underrepresented within analyzed articles.

Originality/value

As shown by the presented results, there is a need for more significant reporting of the procedures used related to research reliability. Toward this, the features of a robust case study protocol are defined and discussed.

Details

Management Research Review, vol. 47 no. 2
Type: Research Article
ISSN: 2040-8269


Article
Publication date: 3 February 2023

Huyen Nguyen, Haihua Chen, Jiangping Chen, Kate Kargozari and Junhua Ding


Abstract

Purpose

This study aims to evaluate a method of building a biomedical knowledge graph (KG).

Design/methodology/approach

This research first constructs a COVID-19 KG based on the COVID-19 Open Research Dataset, covering information across six categories (i.e. disease, drug, gene, species, therapy and symptom). The construction used open-source tools to extract entities, relations and triples. Then, the COVID-19 KG is evaluated on three data-quality dimensions (correctness, relatedness and comprehensiveness) using a semiautomatic approach. Finally, this study assesses the application of the KG by building a question answering (Q&A) system. Five queries regarding COVID-19 genomes, symptoms, transmissions and therapeutics were submitted to the system and the results were analyzed.
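A toy version of the pipeline’s last step can clarify the idea: once (subject, relation, object) triples are extracted, a question is answered by matching a template against the KG. The triples and query below are illustrative, not drawn from the paper’s data set.

```python
# Toy Q&A over an extracted knowledge graph: answer a question by
# matching a (relation, object) template against stored triples.
# Triples and the query are illustrative, not the paper's data.
triples = [
    ("remdesivir", "treats", "COVID-19"),        # therapy category
    ("fever", "is_symptom_of", "COVID-19"),      # symptom category
    ("ACE2", "is_receptor_for", "SARS-CoV-2"),   # gene/species categories
]

def answer(relation, obj):
    """Return all subjects linked to obj by relation."""
    return [s for s, r, o in triples if r == relation and o == obj]

# "What are the symptoms of COVID-19?"
print(answer("is_symptom_of", "COVID-19"))  # ['fever']
```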

Findings

With current extraction tools, the quality of the KG is moderate and difficult to improve unless more effort is put into the tools for entity extraction, relation extraction and other steps. This study finds that comprehensiveness and relatedness correlate positively with data size. Furthermore, the results indicate that Q&A systems built on larger-scale KGs perform better than those built on smaller ones for most queries, underscoring the importance of relatedness and comprehensiveness for ensuring the usefulness of a KG.

Originality/value

The KG construction process, data-quality-based and application-based evaluations discussed in this paper provide valuable references for KG researchers and practitioners to build high-quality domain-specific knowledge discovery systems.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247


Article
Publication date: 15 February 2024

Bokolo Anthony Jnr


Abstract

Purpose

Presently, existing electric car sharing platforms are based on a centralized architecture, which suffers from inadequate trust and pricing issues because these platforms require an intermediary to maintain users’ data and handle transactions between participants. Therefore, this article aims to develop a decentralized peer-to-peer electric car sharing prototype framework that offers trust and cost transparency.

Design/methodology/approach

This study employs a systematic review; data were collected from the literature and existing technical reports, after which content analysis was carried out to identify current problems and the state of the art in electric car sharing. A use case scenario was then presented to preliminarily validate the developed prototype framework and show how it addresses trustlessness in electric car sharing via distributed ledger technologies (DLTs).

Findings

Findings from this study present a use case scenario that depicts how businesses can design and implement distributed peer-to-peer electric car sharing platforms based on IOTA technology, smart contracts and the IOTA eWallet. The main findings unlock the considerable potential of DLT to foster sustainable road transportation. By employing a token-based approach, this study enables electric car sharing that promotes sustainable road transportation.
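The token-based flow can be pictured with a schematic Python simulation; it makes no real IOTA or smart contract calls, and all names, the fee model and the in-memory ledger are hypothetical stand-ins for the DLT components the framework describes.

```python
# Schematic simulation of a token-based rental flow (no real IOTA
# calls): the renter's payment is recorded on a shared ledger rather
# than passing through a central intermediary. All names and the fee
# model are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    entries: list = field(default_factory=list)  # stand-in for a DLT

    def record(self, sender, receiver, tokens, note):
        # On a real platform this would be a signed DLT transaction.
        self.entries.append((sender, receiver, tokens, note))

def rent_car(ledger, renter, owner, hours, rate_per_hour):
    cost = hours * rate_per_hour  # transparent, rule-based pricing
    ledger.record(renter, owner, cost, f"car rental for {hours}h")
    return cost

ledger = Ledger()
rent_car(ledger, "alice", "bob", hours=3, rate_per_hour=5)
print(ledger.entries)  # [('alice', 'bob', 15, 'car rental for 3h')]
```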

Practical implications

Practically, the developed decentralized prototype framework provides improved cost transparency and fairness guarantees, as it is not based on a centralized price management system. The DLT-based decentralized prototype framework helps orchestrate the incentive, monetization and rewarding mechanisms among participants who share their electric cars, enabling them to collaborate towards lessening CO2 emissions.

Social implications

The findings advocate that electric vehicle sharing has become an essential component of sustainable road transportation by increasing electric car utilization and decreasing the number of vehicles on the road.

Originality/value

The key novelty of the article is the introduction of a decentralized prototype framework for developing an electric car sharing solution without central control or governance, which improves cost transparency. Compared with prior centralized platforms, the prototype framework employs IOTA technology, smart contracts and the IOTA eWallet to improve mobility-related services.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099


Article
Publication date: 9 November 2023

Gustavo Candela, Nele Gabriëls, Sally Chambers, Milena Dobreva, Sarah Ames, Meghan Ferriter, Neil Fitzgerald, Victor Harbo, Katrine Hofmann, Olga Holownia, Alba Irollo, Mahendra Mahey, Eileen Manchester, Thuy-An Pham, Abigail Potter and Ellen Van Keer


Abstract

Purpose

The purpose of this study is to offer a checklist that can be used both for creating and for evaluating digital collections suitable for computational use, collections that are sometimes referred to as data sets as part of the collections as data movement.

Design/methodology/approach

The checklist was built by synthesising and analysing the results of relevant research literature, articles and studies, together with the issues and needs identified in an observational study. The checklist was then tested and applied both as a tool for assessing a selection of digital collections made available by galleries, libraries, archives and museums (GLAM) institutions, as a proof of concept, and as a supporting tool for creating collections as data.
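As a rough sketch of the assessment use, a checklist can be represented as named criteria checked against a collection’s documentation, yielding a coverage score; the criteria below are paraphrased examples, not items from the published checklist.

```python
# Sketch of using a checklist as an assessment tool: each criterion
# is checked against a collection's documentation and a coverage
# score is reported. Criteria are paraphrased examples.
checklist = {
    "open licence stated": True,
    "machine-readable formats provided": True,
    "documentation of provenance": False,
    "suggested citation given": True,
}

def assess(results):
    met = sum(results.values())
    return f"{met}/{len(results)} criteria met"

print(assess(checklist))  # 3/4 criteria met
```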

Findings

Over the past few years, there has been growing interest in making digital collections published by GLAM organisations available for computational use. Based on previous work, the authors defined a methodology to build a checklist for the publication of collections as data. The authors’ evaluation identified several example applications that can encourage other institutions to publish their digital collections for computational use.

Originality/value

While some work exists on making digital collections available for computational use, with particular attention to data quality, planning and experimentation, to the best of the authors’ knowledge none of the work to date provides an easy-to-follow and robust checklist for publishing collection data sets in GLAM institutions. This checklist is intended to encourage small and medium-sized institutions to adopt collections as data principles in daily workflows, following best practices and guidelines.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

