Search results
1 – 10 of 118
Marina Salse, Javier Guallar-Delgado, Núria Jornet-Benito, Maria Pilar Mateo Bretos and Josep Oriol Silvestre-Canut
Abstract
Purpose
The purpose of this study is to determine which metadata schemas are used in the museums and university collections of the main universities in Spain and other European countries. Although libraries and archives are also university memory institutions (according to a Galleries, Libraries, Archives and Museums perspective), their collections are not included in this study because their metadata systems are highly standardized and their inclusion would, therefore, skew our understanding of the diverse realities that the study aims to capture.
Design/methodology/approach
The analysis has three components. The first is a bibliographic review based on Web of Science. The second is a direct survey of the individuals responsible for university collections to understand their internal work and documentation systems. Finally, the results obtained are complemented by an analysis of collective university heritage portals in Europe.
Findings
The results of this study confirmed the hypothesis that isolation and a lack of resources are still major issues in many cases. Increasing digitalization and the desire to participate in content aggregation systems are forcing change, although the responsibility for that change at universities is still vague.
Originality/value
Universities, particularly those with a long history, have an important heritage whose parts are often scattered or hidden. Although many contemporary academic publications have focused on the dissemination of university collections, this study focuses on the representation of information based on the conviction that good metadata are essential for dissemination.
Morteza Mohammadi Ostani, Jafar Ebadollah Amoughin and Mohadeseh Jalili Manaf
Abstract
Purpose
This study aims to adjust Thesis-type properties on Schema.org using metadata models and standards (MS) (Bibframe, electronic thesis and dissertations [ETD]-MS, Common European Research Information Format [CERIF] and Dublin Core [DC]) to enrich the Thesis-type properties for better description and processing on the Web.
Design/methodology/approach
This study is applied and descriptive-analytical in nature, using content analysis as its method. The research population consisted of the elements and attributes of the metadata models and standards (Bibframe, ETD-MS, CERIF and DC) and the Thesis-type properties on Schema.org. The data collection tool was a researcher-made checklist, and the data collection method was structured observation.
Findings
The results show that the 65 Thesis-type properties, together with the two parent levels Thing and CreativeWork on Schema.org, correspond to the elements and attributes of the related models and standards. In addition, 12 properties specific to the Thesis type enable more comprehensive description and processing, and 27 properties are added to the CreativeWork type.
Practical implications
The enrichment and expansion of Thesis-type properties on Schema.org is one practical application of the present study. It enables more comprehensive description and processing and increases access points and visibility for ETDs in the Web environment and in digital libraries.
Originality/value
This study offers some new Thesis-type and CreativeWork-level properties on Schema.org. To the best of the authors’ knowledge, this is the first time this issue has been investigated.
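As an illustrative sketch only (the title, author and values below are invented, not taken from the study), a thesis described with Schema.org can be serialized as JSON-LD; `inSupportOf` is a Thesis-specific property, while `author`, `datePublished` and `inLanguage` are inherited from the CreativeWork parent level:

```python
import json

# Illustrative JSON-LD description of a thesis using Schema.org terms.
# "inSupportOf" is specific to the Thesis type; the other properties
# are inherited from CreativeWork. All values are hypothetical.
thesis = {
    "@context": "https://schema.org",
    "@type": "Thesis",
    "name": "Metadata Enrichment for Electronic Theses",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "inSupportOf": "PhD in Information Science",
    "datePublished": "2023",
    "inLanguage": "en",
}

json_ld = json.dumps(thesis, indent=2)
print(json_ld)
```

Embedding such a block in a landing page is one common way ETD repositories expose structured data to Web search engines.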
Gustavo Candela, Nele Gabriëls, Sally Chambers, Milena Dobreva, Sarah Ames, Meghan Ferriter, Neil Fitzgerald, Victor Harbo, Katrine Hofmann, Olga Holownia, Alba Irollo, Mahendra Mahey, Eileen Manchester, Thuy-An Pham, Abigail Potter and Ellen Van Keer
Abstract
Purpose
The purpose of this study is to offer a checklist that can be used for both creating and evaluating digital collections suitable for computational use; such collections are also sometimes referred to as data sets as part of the collections as data movement.
Design/methodology/approach
The checklist was built by synthesising and analysing relevant research literature, articles and studies, together with the issues and needs identified in an observational study. The checklist was then tested and applied both as a tool for assessing a selection of digital collections made available by galleries, libraries, archives and museums (GLAM) institutions, as a proof of concept, and as a supporting tool for creating collections as data.
Findings
Over the past few years, there has been growing interest in making digital collections published by GLAM organisations available for computational use. Based on previous work, the authors defined a methodology to build a checklist for the publication of collections as data. The authors’ evaluation showed several examples of applications that can be useful in encouraging other institutions to publish their digital collections for computational use.
Originality/value
While some work exists on making digital collections available for computational use, with particular attention to data quality, planning and experimentation, to the best of the authors’ knowledge none of the work to date provides an easy-to-follow and robust checklist for publishing collection data sets in GLAM institutions. This checklist intends to encourage small- and medium-sized institutions to adopt the collections as data principles in their daily workflows, following best practices and guidelines.
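A minimal sketch of how such a checklist could be applied programmatically; the items below are illustrative placeholders, not the authors' actual checklist:

```python
# Hypothetical checklist items for publishing a collection as data.
# These placeholders stand in for the study's actual checklist.
CHECKLIST = [
    "open licence stated",
    "documentation of provenance",
    "machine-readable format provided",
    "persistent identifier assigned",
]

def evaluate(collection: dict) -> float:
    """Return the fraction of checklist items a collection satisfies."""
    met = sum(1 for item in CHECKLIST if collection.get(item, False))
    return met / len(CHECKLIST)

# A sample collection meeting two of the four illustrative items.
sample = {
    "open licence stated": True,
    "machine-readable format provided": True,
}
score = evaluate(sample)
```

Scoring each collection against the same item list is what makes the checklist usable both for self-assessment before publication and for comparing collections across institutions.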
Xiaojuan Liu, Yinrong Pan and Yutong Han
Abstract
Purpose
There is a wealth of value hidden in regional cultural heritage, but its state of preservation is worrying. This study introduces a method that focuses on the inherent cultural value of regional cultural heritage, preserving it through value construction and release.
Design/methodology/approach
Based on the great value that regional cultural heritage derives from spatial adjacency and temporal continuity, this paper focuses on its inherent cultural value to explore a preservation path, choosing the Shichahai cultural heritage digital resources for a case study. The paper draws lessons from the narrative methods of ancient Chinese historiography, constructing a cultural space and telling cultural stories. A linked data organization model for digital resources is created to construct a conceptual cultural space, which is then materialized through the creation of a linked data set. The authors tell cultural stories discovered in the space, presenting them through various user interfaces built with visualization technologies.
Findings
A cultural space promotes the development of a fine-grained description of regional cultural heritage and aids in relationship discovery to enhance the value construction ability. Additionally, storytelling via interactive user interfaces is helpful in the utilization and dissemination of knowledge extracted from a cultural space and enhances the value release of regional cultural heritage. In this way, a path with the inherent cultural value of regional cultural heritage as the core is established, and preservation is achieved.
Originality/value
This study focuses on the inherent cultural value of regional cultural heritage and proposes a new path to preserve these resources. This approach will enrich research on the preservation of regional cultural heritage and contribute to the construction and release of its cultural value.
Florian Rupp, Benjamin Schnabel and Kai Eckert
Abstract
Purpose
The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.
Design/methodology/approach
In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) providing provenance information, (2) maintaining backwards compatibility for existing models and (3) reducing the complexity of a data model. The authors present two scenarios in which they use the meta-level to extend a data model with meta-information.
Findings
The authors present three abstract patterns for actively using the meta-level in data modeling and showcase its implementation through two scenarios from their research project: (1) a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, for example to report errors or add supplementary information; and (2) a demonstration of how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
Practical implications
Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.
Originality/value
With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.
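RDF-star's core idea is that a triple can itself be the subject of further triples. As a rough sketch of that meta-level in plain Python (all identifiers and values below are invented; this is not the authors' implementation), provenance and annotation statements can be keyed by the quoted triple:

```python
# Model RDF-star's "statements about statements" in plain Python:
# a triple itself becomes the subject of further (meta) assertions.
triple = ("ex:painting42", "dc:creator", "ex:rembrandt")

# Meta-level annotations about the assertion itself (use case 1:
# provenance), keyed by the quoted triple. Values are hypothetical.
meta = {
    triple: {
        "prov:wasDerivedFrom": "ex:catalogue1907",
        "ex:confidence": 0.8,
    }
}

def annotate(meta, triple, prop, value):
    """Attach a meta-statement (e.g. a user comment reporting an
    error) to an existing triple, as in a triple-annotation workflow."""
    meta.setdefault(triple, {})[prop] = value

annotate(meta, triple, "ex:comment", "attribution disputed")
```

Because the base triple is untouched, existing consumers of the data model keep working, which is the backwards-compatibility point made above.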
Abstract
Purpose
The purpose of this article is to contribute to the digital development and utilization of China’s intangible cultural heritage resources. Research on the resource description and knowledge integration of intangible cultural heritage based on linked data is proposed to promote the standardized description of intangible cultural heritage knowledge and realize its digital dissemination and development.
Design/methodology/approach
In this study, knowledge organization theory and semantic Web technologies are first used to describe intangible cultural heritage digital resource objects according to metadata specifications. Second, ontology theory and technical methods are used to build a conceptual model of the intangible cultural heritage domain and to determine the concept sets and hierarchical relationships in this field. Finally, semantic Web technologies are used to establish semantic associations between items of intangible cultural heritage knowledge.
Findings
The findings indicate that the knowledge organization scheme for intangible cultural heritage resources constructed in this study provides a solution for the digital development of intangible cultural heritage in China. It also supports semantic retrieval at a finer knowledge granularity and helps to visualize the knowledge content of intangible cultural heritage.
Originality/value
This study provides significant theoretical and practical value for the digital development of intangible cultural heritage. Its resource description and knowledge fusion can help to discover the semantic relationships of intangible cultural heritage across multiple dimensions and levels.
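The three steps described above (metadata description, a domain concept hierarchy and semantic associations) can be sketched as follows; the concepts, keys and triples are hypothetical examples, not the study's actual model:

```python
# Step 1: metadata description of an ICH resource (Dublin Core-style keys).
resource = {"dc:title": "Paper-cutting", "dc:subject": "traditional craft"}

# Step 2: a toy concept hierarchy (child -> parent) for the ICH domain.
HIERARCHY = {
    "paper-cutting": "traditional craft",
    "shadow puppetry": "traditional performance",
    "traditional craft": "intangible cultural heritage",
    "traditional performance": "intangible cultural heritage",
}

def broader(concept: str) -> list:
    """Walk up the hierarchy, returning all broader concepts in order."""
    chain = []
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        chain.append(concept)
    return chain

# Step 3: a semantic association expressed as a subject-predicate-object triple.
triples = [("paper-cutting", "practisedIn", "Shaanxi")]
```

In a full linked data implementation the hierarchy would live in an ontology (e.g. expressed with SKOS broader/narrower relations) rather than a Python dict, but the traversal logic is the same.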
Shaodan Sun, Jun Deng and Xugong Qin
Abstract
Purpose
This paper aims to amplify the retrieval and utilization of historical newspapers through the application of semantic organization, all from the vantage point of a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while simultaneously furnishing invaluable guidance within methodological paradigms for research in the humanities domain.
Design/methodology/approach
According to the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study addresses entity recognition and relationship extraction; these techniques identify entities within the historical newspaper contents and capture the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.
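The extract-and-associate stages can be illustrated with a deliberately simple stand-in: the study uses deep learning models (BERT, Bi-LSTM), but a dictionary lookup shows the same flow on one sentence (the gazetteer entries and relation label below are invented for illustration):

```python
# Toy stand-in for the extraction stage: a gazetteer lookup instead of
# the study's deep learning models, illustrating extract -> associate.
GAZETTEER = {"Changchun": "PLACE", "Shengjing Times": "NEWSPAPER"}

def extract_entities(text: str) -> list:
    """Return (entity, type, offset) for known names found in the text."""
    found = []
    for name, etype in GAZETTEER.items():
        pos = text.find(name)
        if pos != -1:
            found.append((name, etype, pos))
    return sorted(found, key=lambda e: e[2])

sentence = "The Shengjing Times reported on events in Changchun."
entities = extract_entities(sentence)

# Associate: link co-occurring entities with a generic relation.
relations = [
    (a[0], "co-occurs_with", b[0])
    for i, a in enumerate(entities) for b in entities[i + 1:]
]
```

A neural extractor replaces the gazetteer with learned predictions, but the downstream association and platform stages consume the same (entity, type, offset) shape.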
Findings
This article utilized the Shengjing Times·Changchun Compilation as the data set for describing, extracting, associating and applying newspaper contents. Regarding knowledge element extraction, BERT + BS consistently outperforms Bi-LSTM, CRF++ and even BERT in terms of Recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which stands out with the highest scores across all metrics, notably achieving an exceptional F1 score in knowledge element relationship recognition.
Originality/value
Historical newspapers transcend their status as mere artifacts, serving as invaluable reservoirs of societal and historical memory. Semantic organization from a fine-grained knowledge element perspective can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.
Sudarsan Desul, Rabindra Kumar Mahapatra, Raj Kishore Patra, Mrutyunjay Sethy and Neha Pandey
Abstract
Purpose
The purpose of this study is to review the application of semantic technologies in cultural heritage (STCH) to achieve interoperability and enable advanced applications, such as 3D modeling and augmented reality, that enhance the understanding and appreciation of cultural heritage (CH). The study aims to identify trends and patterns in the use of STCH and provide insights for scholars and policymakers on future research directions.
Design/methodology/approach
This research paper uses a bibliometric study to analyze the articles published in Scopus and Web of Science (WoS)-indexed journals from 1999 to 2022 on STCH. A total of 580 articles were analyzed using the Biblioshiny package in RStudio.
Findings
The study reveals a substantial increase in STCH publications since 2008, with Italy leading in contributions. Key research areas such as ontologies, semantic Web, linked data and digital humanities are extensively explored, highlighting their significance and characteristics within the STCH research domain.
Research limitations/implications
This study only analyzed articles published in Scopus and WoS-indexed journals in the English language. Further research could include articles published in other languages and non-indexed journals.
Originality/value
This study extensively analyses the research published on STCH over the past 23 years, identifying the leading authors, institutions, countries and top research topics. The findings provide guidelines for future research directions and contribute to the literature on promoting, preserving and managing CH globally.
Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman
Abstract
Purpose
In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.
Design/methodology/approach
On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: a lexical algorithm, a support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted on a sample of 60 records to investigate the value and inter-rater agreement of automatically assigned classes.
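A minimal illustration of the ensemble idea: average the per-class confidence scores of several backends and pick the best class. Annif's simple ensemble backend works along these lines, though the class codes and figures here are invented:

```python
# Invented per-class confidence scores from three hypothetical backends
# (e.g. a lexical algorithm, an SVC and fastText), keyed by DDC class.
backend_scores = [
    {"025": 0.6, "020": 0.3},
    {"025": 0.5, "020": 0.4},
    {"020": 0.7, "025": 0.2},
]

def ensemble(score_dicts):
    """Average the scores per class across all backends."""
    classes = {c for d in score_dicts for c in d}
    return {c: sum(d.get(c, 0.0) for d in score_dicts) / len(score_dicts)
            for c in classes}

combined = ensemble(backend_scores)
best = max(combined, key=combined.get)
```

Averaging lets backends with complementary strengths (lexical matching vs. statistical association) correct one another, which is one reason the ensemble outperformed its individual components in the study.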
Findings
The best results were achieved using the ensemble approach, which reached 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.
Originality/value
The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.
Abstract
Purpose
Recent archiving and curatorial practices have taken advantage of advances in digital technologies, creating immersive and interactive experiences that emphasize the plurality of memory materials, encourage personalized sense-making and extract, manage and share the ever-growing surrounding knowledge. Audiovisual (AV) content, despite its growing importance and popularity, has been less explored in this respect than text and images. This paper examines the trend of datafication in AV archives and answers the critical question, “What to extract from AV materials and why?”
Design/methodology/approach
This study is rooted in a comprehensive state-of-the-art review of digital methods and curatorial practices in AV archives. The thinking model for mapping AV archive data to purposes is based on pre-existing models for understanding multimedia content and on metadata standards.
Findings
The thinking model connects AV content descriptors (the data perspective) and purposes (the curatorial perspective) and provides a theoretical map of how information extracted from AV archives should be fused and embedded for memory institutions. The model is constructed by looking into three broad dimensions of audiovisual content: archival; affective and aesthetic; and social and historical.
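A rough sketch of such a descriptor-to-purpose mapping; the dimension names follow the abstract, but the descriptor and purpose entries are invented placeholders, not the paper's model:

```python
# Hypothetical mapping from AV content descriptors (data perspective)
# to curatorial purposes, grouped by the three dimensions named above.
MODEL = {
    "archival": {
        "descriptors": ["title", "date", "format"],
        "purposes": ["cataloguing", "preservation"],
    },
    "affective and aesthetic": {
        "descriptors": ["mood", "colour palette"],
        "purposes": ["personalized sense-making"],
    },
    "social and historical": {
        "descriptors": ["persons", "events", "places"],
        "purposes": ["knowledge sharing", "contextualization"],
    },
}

def purposes_for(descriptor: str) -> list:
    """Return the curatorial purposes served by a given descriptor."""
    return [p for dim in MODEL.values()
            if descriptor in dim["descriptors"]
            for p in dim["purposes"]]
```

Making the mapping explicit lets an institution decide which extraction pipelines (e.g. face recognition for "persons") actually serve a stated curatorial goal before investing in them.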
Originality/value
This paper contributes uniquely to the intersection of computational archives, audiovisual content and public sense-making experiences. It provides updates and insights for working towards datafied AV archives and for meeting the increasing sense-making needs that draw on them.