Search results

1 – 10 of over 1000
Article
Publication date: 30 March 2012

José L. Navarro‐Galindo and José Samos

Nowadays, the use of WCMS (web content management systems) is widespread. The conversion of this infrastructure into its semantic equivalent (semantic WCMS) is a critical issue…

Abstract

Purpose

Nowadays, the use of WCMS (web content management systems) is widespread. The conversion of this infrastructure into its semantic equivalent (a semantic WCMS) is a critical issue, as this enables the benefits of the semantic web to be extended. The purpose of this paper is to present FLERSA (Flexible Range Semantic Annotation), a tool for flexible range semantic annotation.

Design/methodology/approach

FLERSA is presented as a user‐centred annotation tool for Web content expressed in natural language. The tool has been built to illustrate how a WCMS called Joomla! can be converted into its semantic equivalent.

Findings

The development of the tool shows that it is possible to build a semantic WCMS through a combination of semantic components and other resources such as ontologies and emerging technologies, including XML, RDF, RDFa and OWL.

Practical implications

The paper provides a starting‐point for further research in which the principles and techniques of the FLERSA tool can be applied to any WCMS.

Originality/value

The tool allows both manual and automatic semantic annotations, as well as providing enhanced search capabilities. For manual annotation, a new flexible range markup technique is used, based on the RDFa standard, to support the evolution of annotated Web documents more effectively than XPointer. For automatic annotation, a hybrid approach based on machine learning techniques (Vector‐Space Model + n‐grams) is used to determine the concepts that the content of a Web document deals with, drawing on an ontology which provides a taxonomy and on previous annotations that serve as a training corpus.
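The automatic-annotation idea described above (a Vector‐Space Model over n‐grams, trained on previously annotated documents) can be sketched in a few lines of Python. Everything below — the toy training corpus, the concept names and the character-trigram feature choice — is invented for illustration and is not taken from FLERSA itself:

```python
from collections import Counter
import math

def ngrams(text, n=3):
    """Character n-grams of the lowercased text (one common VSM feature choice)."""
    t = text.lower()
    return [t[i:i + n] for i in range(len(t) - n + 1)]

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical training corpus: previous annotations mapping text to ontology concepts.
training = {
    "Semantic": ["ontology rdf owl semantic web annotation",
                 "rdfa markup linked data triples"],
    "Sports":   ["football match goal transfer player",
                 "league season striker club signing"],
}

# One centroid vector per concept, summed over its training documents.
centroids = {c: sum((Counter(ngrams(d)) for d in docs), Counter())
             for c, docs in training.items()}

def classify(text):
    """Pick the concept whose centroid is most similar to the new document."""
    vec = Counter(ngrams(text))
    return max(centroids, key=lambda c: cosine(vec, centroids[c]))
```

A new document is then assigned the concept whose training centroid it resembles most, e.g. `classify("annotating web documents with RDF ontologies")`.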

Article
Publication date: 14 January 2021

Xiaoguang Wang, Ningyuan Song, Xuemei Liu and Lei Xu

To meet the emerging demand for fine-grained annotation and semantic enrichment of cultural heritage images, this paper proposes a new approach that can transcend the boundary of…

Abstract

Purpose

To meet the emerging demand for fine-grained annotation and semantic enrichment of cultural heritage images, this paper proposes a new approach that can transcend the boundary of information organization theory and Panofsky's iconography theory.

Design/methodology/approach

After a systematic review of semantic data models for organizing cultural heritage images and a comparative analysis of the concept and characteristics of deep semantic annotation (DSA) and indexing, an integrated DSA framework for cultural heritage images as well as its principles and process was designed. Two experiments were conducted on two mural images from the Mogao Caves to evaluate the DSA framework's validity based on four criteria: depth, breadth, granularity and relation.

Findings

Results showed that the proposed DSA framework not only included image metadata but also represented the storyline contained in the images by integrating domain terminology, ontology, thesaurus, taxonomy and natural language description into a multilevel structure.
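The multilevel structure described here — metadata plus Panofsky-style layers down to a storyline — can be pictured as a nested record. The field names, the example identifier and the sample storyline below are all hypothetical, chosen only to illustrate the levels the abstract names:

```python
# A hypothetical multilevel annotation record for a mural image, sketching how
# metadata, depicted objects, storyline and themes could nest together.
annotation = {
    "metadata": {"identifier": "example-mural-001", "type": "mural"},
    "pre_iconographic": {               # what is visibly depicted
        "objects": ["figure", "deer", "river"],
    },
    "iconographic": {                   # conventional subject matter and narrative
        "concepts": ["jataka tale"],
        "storyline": [
            {"event": "the deer rescues a drowning man", "order": 1},
            {"event": "the man betrays the deer to the king", "order": 2},
        ],
    },
    "iconological": {                   # deeper cultural meaning
        "themes": ["gratitude", "betrayal"],
    },
}

def events_in_order(record):
    """Return the annotated storyline events sorted by narrative order."""
    events = record["iconographic"]["storyline"]
    return [e["event"] for e in sorted(events, key=lambda e: e["order"])]
```

Such a structure keeps coarse metadata and fine-grained narrative annotation in one retrievable object.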

Originality/value

DSA can reveal the aboutness, ofness and isness information contained within images, which can thus meet the demand for semantic enrichment and retrieval of cultural heritage images at a fine-grained level. This method can also help contribute to building a novel infrastructure for the increasing scholarship of digital humanities.

Details

Journal of Documentation, vol. 77 no. 4
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 25 October 2021

Jinju Chen and Shiyan Ou

The purpose of this paper is to semantically annotate the content of digital images with the use of Semantic Web technologies and thus facilitate retrieval, integration and…

Abstract

Purpose

The purpose of this paper is to semantically annotate the content of digital images with the use of Semantic Web technologies and thus facilitate retrieval, integration and knowledge discovery.

Design/Methodology/Approach

After a review and comparison of the existing semantic annotation models for images and a deep analysis of the characteristics of image content, a multi-dimensional and hierarchical general semantic annotation framework for digital images was proposed. On this basis, taking historical images, advertising images and biomedical images as examples, the general framework was customized, by integrating the characteristics of images in these specific domains with related domain knowledge, into a domain annotation ontology for the images in each specific domain. Applications of semantic annotation of digital images, such as semantic retrieval, visual analysis and semantic reuse, were also explored.

Findings

The results showed that the semantic annotation framework for digital images constructed in this paper provides a solution for the semantic organization of image content. On this basis, deep knowledge services such as semantic retrieval and visual analysis can be provided.

Originality/Value

The semantic annotation framework for digital images can reveal the fine-grained semantics in a multi-dimensional and hierarchical way, which can thus meet the demand for enrichment and retrieval of digital images.

Details

The Electronic Library, vol. 39 no. 6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 28 September 2012

Dimitris Kanellopoulos

This paper aims to propose a system for the semantic annotation of audio‐visual media objects, which are provided in the documentary domain. It presents the system's architecture…

Abstract

Purpose

This paper aims to propose a system for the semantic annotation of audio‐visual media objects, which are provided in the documentary domain. It presents the system's architecture, a manual annotation tool, an authoring tool and a search engine for the documentary experts. The paper discusses the merits of a proposed approach of evolving semantic network as the basis for the audio‐visual content description.

Design/methodology/approach

The author demonstrates how documentary media can be semantically annotated, and how this information can be used for the retrieval of the documentary media objects. Furthermore, the paper outlines the underlying XML schema‐based content description structures of the proposed system.

Findings

A flexible organization of documentary media content descriptions and the related media data is currently required; such an organization calls for an adaptable construction in the form of a semantic network. The proposed approach provides semantic structures with the capability to change and grow, allowing an ongoing task‐specific process of inspection and interpretation of source material. The approach also provides technical memory structures (i.e. information nodes), which represent the size, duration and technical format of the physical audio‐visual material of any media type, such as audio, video and 3D animation.
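The idea of information nodes that carry technical memory and are joined by typed relations can be illustrated with a minimal sketch. The class, attribute and relation names here are hypothetical, not taken from the paper's system:

```python
# A sketch of an evolving semantic network: information nodes carry technical
# memory (media type, duration, format) and are linked by typed relations.
class InfoNode:
    def __init__(self, name, media_type, duration_s=None, fmt=None):
        self.name = name
        self.media_type = media_type      # e.g. "video", "audio", "text"
        self.duration_s = duration_s      # duration in seconds, if temporal
        self.fmt = fmt                    # technical format of the material
        self.relations = []               # list of (relation_type, target_node)

    def link(self, relation_type, target):
        """Add a typed relation, letting the network grow over time."""
        self.relations.append((relation_type, target))

    def related(self, relation_type):
        """All nodes reachable via one relation type."""
        return [t for r, t in self.relations if r == relation_type]

# Building a tiny network for a documentary clip:
interview = InfoNode("interview_raw", "video", duration_s=620, fmt="MPEG-2")
transcript = InfoNode("interview_transcript", "text")
interview.link("hasTranscript", transcript)
interview.link("depicts", InfoNode("harbour_scene", "video", duration_s=45))
```

Because relations are plain typed edges, new node and relation types can be added without changing the schema — the "change and grow" property the abstract emphasises.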

Originality/value

The proposed approach (architecture) is generic and facilitates the dynamic use of audio‐visual material using links, enabling the connection from multi‐layered information nodes to data on a temporal, spatial and spatial‐temporal level. It enables the semantic connection between information nodes using typed relations, thus structuring the information space on a semantic as well as a syntactic level. Since the description of media content holds constant for the associated time interval, the proposed system can handle multiple content descriptions for the same media unit and also handle gaps. The results of this research will be valuable not only for documentary experts but for anyone who needs to manage audiovisual content dynamically in an intelligent way.

Article
Publication date: 8 July 2010

Andreas Vlachidis, Ceri Binding, Douglas Tudhope and Keith May

This paper sets out to discuss the use of information extraction (IE), a natural language‐processing (NLP) technique to assist “rich” semantic indexing of diverse archaeological…

Abstract

Purpose

This paper sets out to discuss the use of information extraction (IE), a natural language‐processing (NLP) technique to assist “rich” semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic‐aware “rich” indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project.

Design/methodology/approach

The paper proposes use of the English Heritage extension (CRM‐EH) of CIDOC CRM, the standard core ontology in cultural heritage, and the exploitation of domain thesauri for driving and enhancing an ontology‐oriented information extraction process. The process of semantic indexing is based on a rule‐based information extraction technique, which is facilitated by the General Architecture for Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules.
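The project's actual rules are written in JAPE over GATE annotations; as a loose Python analogue, a gazetteer-plus-rule pass over archaeological text might look like the sketch below. The mini-gazetteers stand in for the domain thesauri and are invented, and the type labels only echo the CRM flavour of the annotations:

```python
import re

# Hypothetical mini-gazetteers, standing in for the domain thesauri the paper uses.
TIME_TERMS = ["roman", "medieval", "bronze age", "iron age"]
OBJECT_TERMS = ["coin", "pottery", "brooch", "hearth"]

def annotate(text):
    """Tag thesaurus terms with CRM-style types, returning (start, end, type) spans."""
    spans = []
    for term in TIME_TERMS:
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            spans.append((m.start(), m.end(), "E49.Time_Appellation"))
    for term in OBJECT_TERMS:
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            spans.append((m.start(), m.end(), "Physical_Object"))
    return sorted(spans)

annotate("A Roman coin was found near the medieval hearth.")
```

Real JAPE rules add context conditions around such lookups (e.g. only tagging a period term when it modifies a find), which is where the "rich" part of the indexing comes from.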

Findings

Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic‐aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms.

Originality/value

The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as “Grey Literature”, from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and E19.Physical Object.

Details

Aslib Proceedings, vol. 62 no. 4/5
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 30 August 2013

Vanessa El‐Khoury, Martin Jergler, Getnet Abebe Bayou, David Coquil and Harald Kosch

Fine‐grained video content indexing, retrieval and adaptation require accurate metadata describing the video structure and semantics to the lowest granularity, i.e. to the…

Abstract

Purpose

Fine‐grained video content indexing, retrieval and adaptation require accurate metadata describing the video structure and semantics to the lowest granularity, i.e. to the object level. The authors address these requirements by proposing the semantic video content annotation tool (SVCAT) for structural and high‐level semantic video annotation. SVCAT is a semi‐automatic, MPEG‐7 standard-compliant annotation tool, which produces metadata according to a new object‐based video content model introduced in this work. Videos are temporally segmented into shots, and shot-level concepts are detected automatically using ImageNet as background knowledge. These concepts are used as a guide to easily locate and select objects of interest, which are then tracked automatically to generate object-level metadata. The integration of shot-based concept detection with object localization and tracking drastically alleviates the task of an annotator. The paper aims to discuss these issues.

Design/methodology/approach

A systematic keyframes classification into ImageNet categories is used as the basis for automatic concept detection in temporal units. This is then followed by an object tracking algorithm to get exact spatial information about objects.

Findings

Experimental results showed that SVCAT is able to provide accurate object level video metadata.

Originality/value

The new contribution in this paper introduces an approach of using ImageNet to get shot level annotations automatically. This approach assists video annotators significantly by minimizing the effort required to locate salient objects in the video.

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 June 2015

Quang-Minh Nguyen and Tuan-Dung Cao

The purpose of this paper is to propose an automatic method to generate semantic annotations of football transfers in the news. The current automatic news integration systems on…

Abstract

Purpose

The purpose of this paper is to propose an automatic method to generate semantic annotations of football transfers in the news. The current automatic news integration systems on the Web are constantly faced with the challenge of the diversity and heterogeneity of sources. Syntax-based approaches to information representation and storage have certain limitations in searching, sorting, organizing and linking news appropriately. Models of semantic representation are a promising key to solving these problems.

Design/methodology/approach

The authors' approach leverages Semantic Web technologies to improve the detection of hidden annotations in the news. The paper proposes an automatic method to generate semantic annotations based on named entity recognition and rule-based information extraction. The authors have built a domain ontology and knowledge base, integrated with the knowledge and information management (KIM) platform, to implement the former task (named entity recognition). The semantic extraction rules are constructed based on defined language models and the developed ontology.
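The combination of named entity recognition against a knowledge base with rule-based extraction can be sketched as follows. The entity lists, the single transfer rule and the predicate names are all invented for illustration; the real system uses KIM-backed recognition and a richer rule set:

```python
import re

# Toy knowledge base of named entities, standing in for the KIM-backed ontology.
PLAYERS = {"Alexis Sanchez", "David Silva"}
CLUBS = {"Arsenal", "Barcelona"}

# A hypothetical extraction rule: "<Player> moves|transfers from <Club> to <Club>".
RULE = re.compile(r"(?P<player>[A-Z]\w+(?: [A-Z]\w+)*) (?:moves|transfers) "
                  r"from (?P<src>[A-Z]\w+) to (?P<dst>[A-Z]\w+)")

def extract_triples(sentence):
    """Return (subject, predicate, object) triples when all entities are known."""
    triples = []
    m = RULE.search(sentence)
    if m and m.group("player") in PLAYERS and m.group("src") in CLUBS \
         and m.group("dst") in CLUBS:
        triples.append((m.group("player"), "transfersFrom", m.group("src")))
        triples.append((m.group("player"), "transfersTo", m.group("dst")))
    return triples

extract_triples("Alexis Sanchez moves from Barcelona to Arsenal")
```

Requiring that each matched string also resolves against the knowledge base is what lifts the output from plain pattern hits to typed semantic triples.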

Findings

The proposed method is implemented as part of BKAnnotation, a prototype that generates semantic annotations for sport news. This component is part of BKSport, a news integration system based on Semantic Web technologies. The semantic annotations generated are used to improve news searching, sorting and association. Experiments on news data from the SkySport (2014) channel showed positive results: the precision achieved in both cases, with and without integration of the pronoun recognition method, is over 80 per cent; in particular, the latter helps increase recall by around 10 per cent.

Originality/value

This is one of the first proposals for the automatic creation of semantic data about news, football news in particular and sport news in general. The combination of ontology, knowledge base and language-model patterns allows detection not only of entities with corresponding types but also of semantic triples. At the same time, the authors propose a pronoun recognition method using extraction rules to improve the relation recognition process.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 14 June 2013

Bojan Božić and Werner Winiwarter

The purpose of this paper is to present a showcase of semantic time series processing which demonstrates how this technology can improve time series processing and community…

Abstract

Purpose

The purpose of this paper is to present a showcase of semantic time series processing which demonstrates how this technology can improve time series processing and community building by the use of a dedicated language.

Design/methodology/approach

The authors have developed a new semantic time series processing language and prepared showcases to demonstrate its functionality. The assumption is an environmental setting with data measurements from different sensors to be distributed to different groups of interest. The data are represented as time series for water and air quality, while the user groups are, among others, the environmental agency, companies from the industrial sector and legal authorities.

Findings

A language for time series processing and several tools to enrich time series with meta‐data and to support community building have been implemented in Python and Java. A GUI for demonstration purposes has also been developed in PyQt4. In addition, an ontology for validation has been designed, and a knowledge base for data storage and inference has been set up. Some important features are: dynamic integration of ontologies, time series annotation and semantic filtering.
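One way to picture the annotation and semantic-filtering features, given the environmental setting the abstract describes, is a collection of tagged series that different user groups query by tag. The class, series names and tags below are invented, not TSSL's actual syntax:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedSeries:
    """A time series enriched with semantic tags, as TSSL-style tooling might store it."""
    name: str
    values: list
    tags: set = field(default_factory=set)

# Hypothetical water- and air-quality measurements with semantic annotations.
series = [
    AnnotatedSeries("river_ph", [7.1, 7.3, 6.9], {"water", "quality", "public"}),
    AnnotatedSeries("stack_no2", [41.0, 39.5], {"air", "quality", "industrial"}),
]

def semantic_filter(collection, required_tags):
    """Return only the series whose annotations cover all required tags —
    e.g. the subset an environmental agency's view might select."""
    return [s for s in collection if required_tags <= s.tags]
```

Different groups of interest (agency, industry, legal authorities) then simply filter on different tag sets rather than on the raw series names.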

Research limitations/implications

This paper focuses on the showcases of time series semantic language (TSSL), but also covers technical aspects and user interface issues. The authors are planning to develop TSSL further and evaluate it within further research projects and validation scenarios.

Practical implications

The research has a high practical impact on time series processing and provides new data sources for semantic web applications. It can also be used in social web platforms (especially for researchers) to provide a time series centric tagging and processing framework.

Originality/value

The paper presents an extended version of the paper presented at iiWAS2012.

Details

International Journal of Web Information Systems, vol. 9 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 2 November 2023

Julaine Clunis

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of…

Abstract

Purpose

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.

Design/methodology/approach

A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.
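The mapping step can be approximated outside KNIME with simple lexical similarity, as a sense of what "minimal human intervention" means in practice. The vocabulary terms, target codes and threshold below are all hypothetical, not from the study's gold standard:

```python
from difflib import SequenceMatcher

# Hypothetical source vocabulary (COVID-era terms) and target KOS labels.
source_terms = ["SARS-CoV-2 infection", "loss of smell"]
target_kos = {
    "T001": "COVID-19",
    "T002": "SARS-CoV-2 infection",
    "T003": "Anosmia (loss of smell)",
}

def best_match(term, kos, threshold=0.6):
    """Map a term to the most lexically similar KOS label, or None below threshold."""
    code, label = max(kos.items(),
                      key=lambda kv: SequenceMatcher(None, term.lower(),
                                                     kv[1].lower()).ratio())
    score = SequenceMatcher(None, term.lower(), label.lower()).ratio()
    return (code, label, round(score, 2)) if score >= threshold else None
```

A real workflow adds synonym expansion and evaluation against a gold standard, but the core loop — score every candidate, keep the best above a cut-off, leave the rest for human review — is the same shape.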

Findings

The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.

Originality/value

The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.

Details

The Electronic Library , vol. 41 no. 6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 8 August 2008

A.R.D. Prasad and Nabonita Guha

The purpose of this paper is to show that concept naming alone in document annotation is not sufficient to convey the thought content of the information resource. The paper…

Abstract

Purpose

The purpose of this paper is to show that concept naming alone in document annotation is not sufficient to convey the thought content of the information resource. The paper presents an outline of semantic document annotation which combines two major processes: facet analysis and concept categorisation. This is also an effort to show how an RDF schema can be designed and implemented so that the properties of the schema are able to express the basic structure of the subject matter of the resource.

Design/methodology/approach

This paper presents a methodology for representing the subject matter of a document in terms of RDF. For the purposes of faceted subject annotation, an extended RDF schema for the simple knowledge organisation system (SKOS) has been developed. The facets and relationships of the postulate‐based permuted subject indexing (POPSI) system, a faceted subject indexing language, have been transformed into RDFS classes. The elementary categories of POPSI form the property classes in the POPSI/RDF schema. These property classes have been used to formulate the subject description of a document.
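POPSI's elementary categories (Discipline, Entity, Property, Action) suggest how a faceted subject string could be typed as RDF-style triples. The plain-tuple representation, the document URI and the example facets below are illustrative only, not the paper's actual schema:

```python
# A sketch (plain tuples rather than a real RDF library) of how POPSI's
# elementary categories could type the parts of a faceted subject description.
POPSI_CATEGORIES = {"Discipline", "Entity", "Property", "Action"}

def facet_triples(doc_uri, facets):
    """Emit one (subject, predicate, object) triple per recognised facet."""
    triples = []
    for category, value in facets.items():
        if category not in POPSI_CATEGORIES:
            raise ValueError(f"unknown POPSI category: {category}")
        triples.append((doc_uri, f"popsi:{category}", value))
    return triples

# A faceted subject description for a hypothetical document:
triples = facet_triples("ex:doc42", {
    "Discipline": "Medicine",
    "Entity": "Heart",
    "Action": "Transplantation",
})
```

Typing each component this way is what lets the subject description express structure, rather than a flat bag of concept names.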

Findings

The subject annotation of a document using this schema expresses all the components of the thought content of an information resource.

Practical implications

The examples given in this paper show the applicability of this schema in describing resources in web directories and annotating scholarly documents in digital libraries. In a broader perspective, this provides a methodology for formulating the subject metadata of web resources. This schema helps in formulating the subject string(s) for a resource outlining the skeleton structure of its thought content.

Originality/value

SKOS has been developed as an RDF schema representation of traditional knowledge organisation systems, but the schema has limited room to accommodate subject indexing languages. The present schema extends SKOS to accommodate the representation of faceted subject indexing languages. The faceted subject annotation system has been adopted because it offers advantages over enumerated classification systems, controlled vocabulary lists, etc. The potential to describe the specific subject of a document with greater accuracy and to represent context gives faceted subject indexing languages the strength to make the subject description explicit and machine-processable.

Details

Online Information Review, vol. 32 no. 4
Type: Research Article
ISSN: 1468-4527
