Search results
1 – 10 of 32
Apostolos Vlachos, Maria Perifanou and Anastasios A. Economides
Abstract
Purpose
The purpose of this paper is to review ontologies and data models currently in use for augmented reality (AR) applications in the cultural heritage (CH) domain, specifically in an urban environment. The aim is to identify current trends in the ontologies and data models used and to investigate their applications in real-world scenarios. Some special cases of applications or ontologies that are interesting enough to merit special consideration are also discussed.
Design/methodology/approach
A search of Google Scholar, Scopus, ScienceDirect and IEEE Xplore was conducted to find articles that describe ontologies and data models in AR CH applications. The authors identified the articles that analyze the use of ontologies and/or data models, as well as articles deemed to be of special interest.
Findings
This review found that CIDOC-CRM is the most popular ontology, closely followed by the Historical Context Ontology (HiCO). A combination of current ontologies also appears to be the most complete way to fully describe a CH object or site. A layered ontology model is suggested, which can be expanded according to the specific project.
Originality/value
This study provides an overview of ontologies and data models for AR CH applications in urban environments. There are several ontologies currently in use in the CH domain, with none having been universally adopted, while new ontologies or extensions to existing ones are being created, in the attempt to fully describe a CH object or site. Also, this study suggests a combination of popular ontologies in a multi-layer model.
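The layered model the review suggests can be illustrated with a minimal sketch. The layer names and properties below are invented placeholders, not taken from the reviewed papers; the idea is only that a core CH layer is composed with project-specific extension layers.

```python
# Toy sketch of a layered ontology model: a base layer of core
# cultural-heritage classes is merged with optional project-specific
# layers. All class and property names here are illustrative.

BASE_LAYER = {  # CIDOC-CRM-inspired core (heavily simplified)
    "HeritageObject": {"identifier", "title", "creation_date"},
}

AR_LAYER = {  # hypothetical AR-specific extension layer
    "HeritageObject": {"geo_location", "ar_model_uri"},
}

def compose(*layers):
    """Merge layers left to right; later layers extend earlier ones."""
    composed = {}
    for layer in layers:
        for cls, props in layer.items():
            composed.setdefault(cls, set()).update(props)
    return composed

schema = compose(BASE_LAYER, AR_LAYER)
print(sorted(schema["HeritageObject"]))
```

Composing layers this way mirrors the paper's point that no single ontology fully describes a CH object: each project extends the shared base rather than replacing it.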
Morteza Mohammadi Ostani, Jafar Ebadollah Amoughin and Mohadeseh Jalili Manaf
Abstract
Purpose
This study aims to adjust Thesis-type properties on Schema.org using metadata models and standards (MS) (Bibframe, electronic thesis and dissertations [ETD]-MS, Common European Research Information Format [CERIF] and Dublin Core [DC]) to enrich the Thesis-type properties for better description and processing on the Web.
Design/methodology/approach
This study is applied and descriptive-analytical in nature, and its method is based on content analysis. The research population consisted of the elements and attributes of the metadata models and standards (Bibframe, ETD-MS, CERIF and DC) and the Thesis-type properties on Schema.org. The data collection tool was a researcher-made checklist, and the data collection method was structured observation.
Findings
The results show that the 65 Thesis-type properties, together with its two parent levels Thing and CreativeWork on Schema.org, correspond to the elements and attributes of the related models and standards. In addition, 12 properties specific to the Thesis type are proposed for a more comprehensive description and processing, and 27 properties are added to the CreativeWork type.
Practical implications
Enrichment and expansion of the Thesis-type properties on Schema.org is one of the practical applications of the present study, enabling more comprehensive description and processing and increasing access points and visibility for ETDs in the Web environment and digital libraries.
Originality/value
This study has offered some new properties for the Thesis type and the CreativeWork level on Schema.org. To the best of the authors’ knowledge, this is the first time this issue has been investigated.
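A Schema.org description of this kind is typically published as JSON-LD embedded in a page. The following is a minimal sketch of a Thesis description; `Thesis` and `inSupportOf` are existing Schema.org terms, while the title, author and values are invented placeholders.

```python
import json

# Hedged sketch: a Schema.org "Thesis" item serialized as JSON-LD.
# The property values below are invented placeholders.
thesis = {
    "@context": "https://schema.org",
    "@type": "Thesis",
    "name": "An Example Electronic Thesis",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "inSupportOf": "PhD",   # the qualification the thesis supports
    "datePublished": "2023",
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(thesis, indent=2))
```

Adding richer, thesis-specific properties to such a record is exactly the kind of enrichment the study argues improves description and discoverability of ETDs on the Web.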
Abstract
Purpose
This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.
Design/methodology/approach
A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.
Findings
The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.
Originality/value
The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.
Abstract
Purpose
Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.
Design/methodology/approach
This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
Findings
This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of the relationships within the archive’s records. This approach facilitated comprehensive record descriptions across different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search.
Originality/value
Most digital archives in service in Korea make limited proper use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw data on archives, preprocessing the data and enriching it, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built from the RiC-O, Wikidata and Schema.org vocabularies, together with semantic queries over it, can supplement keyword search in conventional digital archives.
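The core idea, records stored as subject-predicate-object triples and explored by pattern matching rather than keyword search, can be sketched in a few lines. This toy stands in for RiC-O/Wikidata/Schema.org vocabularies and SPARQL; every identifier below is an invented placeholder, not data from the 97imf.kr archive.

```python
# Minimal sketch of a knowledge graph as a list of triples, with a
# small wildcard pattern query standing in for SPARQL. All URIs and
# prefixed names here are illustrative placeholders.

triples = [
    ("record:001", "rico:hasProvenance", "agent:ministry"),
    ("record:001", "schema:about", "topic:1997-imf-crisis"),
    ("record:002", "schema:about", "topic:1997-imf-crisis"),
    ("agent:ministry", "owl:sameAs", "wd:Q12345"),  # link to external data
]

def query(pattern):
    """Match an (s, p, o) pattern against the graph; None is a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Which records are about the 1997 IMF crisis?
hits = query((None, "schema:about", "topic:1997-imf-crisis"))
print([s for s, _, _ in hits])
```

The `owl:sameAs` triple illustrates the paper's point about connecting archival records to external datasets such as Wikidata, which is what lets semantic queries reach beyond the archive's own metadata.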
Abstract
Purpose
The purpose of this article is to contribute to the digital development and utilization of China’s intangible cultural heritage resources. Research on the resource description of intangible cultural heritage and on knowledge integration based on linked data is proposed to promote the standardized description of intangible cultural heritage knowledge and to realize its digital dissemination and development.
Design/methodology/approach
In this study, firstly, the knowledge organization theory and semantic Web technology are used to describe the intangible cultural heritage digital resource objects in metadata specifications. Secondly, the ontology theory and technical methods are used to build a conceptual model of the intangible cultural resources field and determine the concept sets and hierarchical relationships in this field. Finally, the semantic Web technology is used to establish semantic associations between intangible cultural heritage resource knowledge.
Findings
The study findings indicate that the knowledge organization of intangible cultural heritage resources constructed in this study provides a solution for the digital development of intangible cultural heritage in China. It also provides semantic retrieval with better knowledge granularity and helps to visualize the knowledge content of intangible cultural heritage.
Originality/value
This study provides significant theoretical and practical value for the digital development of intangible cultural heritage. Its resource description and knowledge fusion of intangible cultural heritage can help to discover the semantic relationships of intangible cultural heritage across multiple dimensions and levels.
Ahmad Nadzri Mohamad, Allan Sylvester and Jennifer Campbell-Meier
Abstract
Purpose
This study aimed to develop a taxonomy of research areas in open government data (OGD) through a bibliometric mapping tool and a qualitative analysis software.
Design/methodology/approach
In this study, the authors extracted metadata of 442 documents from a bibliographic database. The authors used a bibliometric mapping tool for familiarization with the literature. After that, the authors used qualitative analysis software to develop taxonomy.
Findings
This paper developed a taxonomy of OGD with three research areas: implementation and management; architecture; and users and utilization. These research areas are further divided into seven topics and twenty-eight subtopics. The present study extends the taxonomy of Charalabidis et al. (2016) by adding two research topics, namely the adoption factors and barriers of OGD implementations, and OGD ecosystems. The authors also include artificial intelligence in the taxonomy as an emerging research interest in the literature. The authors suggest four directions for future research: indigenous knowledge in open data, open data at local governments, development of OGD-specific theories and user studies in certain research themes.
Practical implications
Early career researchers and doctoral students can use the taxonomy to familiarize themselves with the literature. Also, established researchers can use the proposed taxonomy to inform future research. Taxonomy-building procedures in this study are applicable to other fields.
Originality/value
This study developed a novel taxonomy of research areas in OGD. Taxonomy building is significant because the discipline lacks a sufficient taxonomy of its research areas. Conceptual knowledge gained through taxonomy creation is also a basis for theorizing and theory building in future studies.
Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi
Abstract
Purpose
In the last few years, the size of Linked Open Data (LOD) describing artworks, in general-purpose or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art) historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on icon aspects.
Design/methodology/approach
This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements drawn from art historians’ theories. The authors first select several KGs according to Semantic Web principles. They then evaluate (1) the suitability of the KGs’ structures for describing icon information, through quantitative and qualitative assessment, and (2) their content, qualitatively assessed in terms of correctness and completeness.
Findings
This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
Originality/value
The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study’s results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need to create and foster such information to provide a more thorough art-historical dimension to LOD.
Abstract
Purpose
The interpretation of any emerging form or period in art history has never been a trivial task. In the case of digital art, however, technology, as an integral part of the medium, has multiplied the complexity of describing, systematizing and evaluating it. This article investigates the most common metadata standards for the documentation of art as a broad category and suggests possible next steps toward an extended metadata standard for digital art.
Design/methodology/approach
Describing several techno-cultural phenomena of the last decade that manifest the extendibility of digital art (its ability to be easily extended across multiple modalities), the article first points to the long-overdue need to re-evaluate the standards around it. It then suggests a deeper analysis through a comparative study in which three artworks are examined following the structure of the VRA Core 4.0 standard: The Arnolfini Portrait (Jan van Eyck), an iconic example of the early Renaissance; The World's First Collaborative Sentence (Douglas Davis), a classic example of early Internet art; and Fake It Till You Make It (Maya Man), a prominent example of blockchain art.
Findings
The comparative study demonstrates that digital art is more multi-semantic than traditional physical art and requires new taxonomies as well as new approaches to data acquisition.
Originality/value
Acknowledging that digital art has simply not yet evolved to the stage of being systematically collected by cultural institutions for documentation, curation and preservation, yet has in the past few years been front and center of social, economic and technological trends, the article suggests looking to some of those trends for hints toward a future-proof extended metadata standard.
Fábio Matoseiro Dinis, Raquel Rodrigues and João Pedro da Silva Poças Martins
Abstract
Purpose
Despite the technological paradigm shift presented to the architecture, engineering, construction and operations (AECO) sector, full-fledged acceptance of the building information modelling (BIM) methodology has been slower than initially anticipated. Accordingly, this study acknowledges the need for more supportive technologies enabling the use of BIM, attending to the available human resources, their requirements and their tasks.
Design/methodology/approach
A complete case study is described, including a development process centred on a design science research methodology, followed by a usability assessment procedure validated by the operational facility management staff of construction projects.
Findings
Results show that participants could interact with BIM naturally using openBIM processes and file formats, as most participants reached an efficiency level close to that expected of users already familiar with the interface (i.e. high efficiency values). These results are consistent with the reported perceived satisfaction and with the analysis of participants’ discourse in 62 semi-structured interviews.
Originality/value
The contributions of the present study are twofold: a proposal for a virtual reality openBIM framework is presented, particularly for the semantic enrichment of BIM models, and a methodology for evaluating the usability of this type of system in the AECO sector.