Search results

1 – 10 of over 2000
Article
Publication date: 6 November 2017

Yanti Idaya Aspura M.K. and Shahrul Azman Mohd Noah

Abstract

Purpose

The purpose of this study is to reduce the semantic distance by proposing a model for integrating indexes of textual and visual features via a multi-modality ontology and the use of DBpedia to improve the comprehensiveness of the ontology to enhance semantic retrieval.

Design/methodology/approach

A multi-modality ontology-based approach was developed to integrate high-level concepts and low-level features, as well as integrate the ontology base with DBpedia to enrich the knowledge resource. A complete ontology model was also developed to represent the domain of sport news, with image caption keywords and image features. Precision and recall were used as metrics to evaluate the effectiveness of the multi-modality approach, and the outputs were compared with those obtained using a single-modality approach (i.e. textual ontology and visual ontology).
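
As a minimal sketch of the evaluation metrics used above (the standard definitions, not the authors' code), precision and recall for a single query can be computed from sets of retrieved and relevant image IDs:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query.

    retrieved: IDs returned by the system; relevant: ground-truth IDs.
    """
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Toy query: 4 images retrieved, 3 relevant overall, 2 of them retrieved.
p, r = precision_recall(["img1", "img2", "img3", "img4"], ["img1", "img2", "img5"])
# p = 2/4 = 0.5, r = 2/3
```

Averaging these values over the ten test queries gives the per-system scores reported under Findings.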

Findings

The results based on ten queries show the superior performance of the multi-modality ontology-based IMR system integrated with DBpedia in retrieving correct images in accordance with user queries. The system achieved 100 per cent precision for six of the queries and greater than 80 per cent precision for the other four. The text-based system achieved 100 per cent precision for only one query; all other queries yielded precision below 0.500.

Research limitations/implications

This study only focused on BBC Sport News collection in the year 2009.

Practical implications

The paper includes implications for the development of ontology-based retrieval on image collection.

Originality/value

This study demonstrates the strength of using a multi-modality ontology integrated with DBpedia for image retrieval to overcome the deficiencies of text-based and ontology-based systems. The results validate the combination of a semantic text-based approach with a multi-modality ontology and DBpedia as a useful model for reducing the semantic distance.

Details

The Electronic Library, vol. 35 no. 6
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 31 July 2007

Peter G.B. Enser, Christine J. Sandom, Jonathon S. Hare and Paul H. Lewis

Abstract

Purpose

To provide a better‐informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval techniques.

Design/methodology/approach

Within an ongoing project, a broad spectrum of operational image retrieval activity has been surveyed, and, from a number of collaborating institutions, a test collection assembled which comprises user requests, the images selected in response to those requests, and their associated metadata. This has provided the evidence base upon which to make informed observations on the efficacy of cutting‐edge automatic annotation techniques which seek to integrate the text‐based and content‐based image retrieval paradigms.

Findings

Evidence from the real‐world practice of image retrieval highlights the existence of a generic‐specific continuum of object identification, and the incidence of temporal, spatial, significance and abstract concept facets, manifest in textual indexing and real‐query scenarios but often having no directly visible presence in an image. These factors combine to limit the functionality of current semantic image retrieval techniques, which interpret only visible features at the generic extremity of the generic‐specific continuum.

Research limitations/implications

The project is concerned with the traditional image retrieval environment in which retrieval transactions are conducted on still images which form part of managed collections. The possibilities offered by ontological support for adding functionality to automatic annotation techniques are considered.

Originality/value

The paper offers fresh insights into the challenge of migrating content‐based image retrieval from the laboratory to the operational environment, informed by newly‐assembled, comprehensive, live data.

Details

Journal of Documentation, vol. 63 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 25 October 2021

Jinju Chen and Shiyan Ou

Abstract

Purpose

The purpose of this paper is to semantically annotate the content of digital images with the use of Semantic Web technologies and thus facilitate retrieval, integration and knowledge discovery.

Design/methodology/approach

After a review and comparison of the existing semantic annotation models for images and a deep analysis of the characteristics of image content, a multi-dimensional and hierarchical general semantic annotation framework for digital images was proposed. On this basis, taking historical, advertising and biomedical images as examples, the general framework was customized, by integrating the characteristics of images in these domains with related domain knowledge, into a domain annotation ontology for the images in each specific domain. Applications of semantic annotation of digital images, such as semantic retrieval, visual analysis and semantic reuse, were also explored.

Findings

The results showed that the semantic annotation framework for digital images constructed in this paper provides a solution for the semantic organization of image content. On this basis, deep knowledge services such as semantic retrieval and visual analysis can be provided.

Originality/value

The semantic annotation framework for digital images can reveal the fine-grained semantics in a multi-dimensional and hierarchical way, which can thus meet the demand for enrichment and retrieval of digital images.

Details

The Electronic Library, vol. 39 no. 6
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 26 June 2019

Xiufeng Cheng, Jinqing Yang, Ling Jiang and Anlei Hu

Abstract

Purpose

The purpose of this paper is to introduce an interpreting schema and semantic description framework for a collection of images of Xilankapu, a traditional Chinese form of embroidered fabric and brocade artwork.

Design/methodology/approach

First, the authors interpret the artwork of Xilankapu through Gillian Rose’s “four site” theory by presenting how the brocades were made, how the patterns of Xilankapu are classified and the geometrical abstraction of visual images. To further describe the images of this type of brocade, this paper presents semantic descriptions that include objective–non-objective relations and a multi-layered semantic framework. Furthermore, the authors developed corresponding methods for scanning, storage and indexing images for retrieval.

Findings

As exploratory research on describing, preserving and indexing images of Xilankapu in the context of the preservation of cultural heritage, the authors collected 1,000+ images of traditional Xilankapu, classifying and storing some of the images in a database. They developed an index schema that combines concept- and content-based approaches according to the proposed semantic description framework. They found that the framework can describe, store and preserve semantic and non-semantic information of the same image. They relate the findings of this paper to future research directions for the digital preservation of traditional cultural heritages.

Research limitations/implications

The framework was designed specifically for brocade; it needs to be extended to other types of cultural images.

Originality/value

The semantic description framework can describe connotative semantic information on Xilankapu. It can also assist the later information retrieval work in organizing implicit information about culturally related visual materials.

Details

The Electronic Library, vol. 37 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 27 November 2009

A. Vadivel, Shamik Sural and A.K. Majumdar

Abstract

Purpose

The main obstacle in realising semantic‐based image retrieval from the web is that it is difficult to capture semantic description of an image in low‐level features. Text‐based keywords can be generated from web documents to capture semantic information for narrowing down the search space. The combination of keywords and various low‐level features effectively increases the retrieval precision. The purpose of this paper is to propose a dynamic approach for integrating keywords and low‐level features to take advantage of their complementary strengths.

Design/methodology/approach

Image semantics are described using both low‐level features and keywords. The keywords are constructed from the text located in the vicinity of images embedded in HTML documents. Various low‐level features such as colour histograms, texture and composite colour‐texture features are extracted for supplementing keywords.
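
A colour histogram, one of the low-level features mentioned above, can be sketched as follows; this is a hypothetical minimal NumPy implementation for illustration, not the system described in the paper:

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Flattened per-channel histogram of an H x W x 3 uint8 image,
    normalised so the feature is independent of image size."""
    hist = np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

# A synthetic 4x4 pure-red image: all red mass lands in the top red bin,
# all green/blue mass in the bottom bins of those channels.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 255
feat = colour_histogram(img)   # 24-dimensional feature vector
```

A feature like this can then be stored alongside the keywords extracted from the surrounding HTML text and combined with them at query time.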

Findings

The retrieval performance is better than that of various recently proposed techniques. The experimental results show that the integrated approach has better retrieval performance than both the text‐based and the content‐based techniques.

Research limitations/implications

The features of images used for capturing the semantics may not always describe the content.

Practical implications

Indexing dynamically growing features is a challenge in the practical implementation of the system.

Originality/value

A survey of image retrieval systems for searching images on the internet found that no internet search engine can handle both low-level features and keywords as queries for retrieving images from the web; this approach is therefore the first of its kind.

Details

Online Information Review, vol. 33 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 17 April 2007

Chih‐Fong Tsai

Abstract

Purpose

The aim of this paper is to examine related studies to identify which retrieval methods are supported by current digital cultural heritage libraries. In this way it is hoped to provide a direction for future cultural heritage applications to provide more complete and/or improved retrieval functionality.

Design/methodology/approach

The methodology of this paper is based on introducing the general concept of image‐based retrieval systems as well as their retrieval methods. Then, users' needs are discussed to illustrate the demands of semantic‐based retrieval. After the retrieval methods have been presented, current digital cultural heritage libraries are examined in terms of their supported retrieval methods that allow users to query images.

Findings

Current digital cultural heritage libraries mostly provide only general retrieval methods based on image‐based low‐level features, i.e. query by image contents. Very few consider other retrieval methods such as browsing and semantic‐based retrieval. In addition, none of the current systems provide all possible retrieval methods for users.

Originality/value

This study is the first one to examine image‐based retrieval methods in digital cultural heritage libraries. This study supports the improvement of retrieval functionality for digital cultural heritage libraries in the future.

Details

Online Information Review, vol. 31 no. 2
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 16 February 2022

Ziming Zeng, Shouqiang Sun, Jingjing Sun, Jie Yin and Yueyan Shen

Abstract

Purpose

Dunhuang murals are rich in cultural and artistic value. The purpose of this paper is to construct a novel mobile visual search (MVS) framework for Dunhuang murals, enabling users to efficiently search for similar, relevant and diversified images.

Design/methodology/approach

The convolutional neural network (CNN) model is fine-tuned in the data set of Dunhuang murals. Image features are extracted through the fine-tuned CNN model, and the similarities between different candidate images and the query image are calculated by the dot product. Then, the candidate images are sorted by similarity, and semantic labels are extracted from the most similar image. Ontology semantic distance (OSD) is proposed to match relevant images using semantic labels. Furthermore, the improved DivScore is introduced to diversify search results.
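
The ranking step described above, one dot product between the query's CNN feature and each candidate's feature followed by a sort by similarity, can be sketched in NumPy; the feature vectors here are toy values, not outputs of the fine-tuned model:

```python
import numpy as np

def rank_by_similarity(query_vec, candidate_vecs):
    """Score each candidate against the query with a dot product and
    return candidate indices sorted from most to least similar."""
    scores = candidate_vecs @ query_vec          # one dot product per candidate
    return list(np.argsort(-scores)), scores

# Toy 4-dim "CNN features" for three candidate images and one query.
query = np.array([1.0, 0.0, 1.0, 0.0])
candidates = np.array([
    [1.0, 0.0, 1.0, 0.0],   # identical to the query
    [0.0, 1.0, 0.0, 1.0],   # orthogonal to the query
    [0.5, 0.0, 0.5, 0.0],   # same direction, smaller magnitude
])
order, _ = rank_by_similarity(query, candidates)
# order → [0, 2, 1]
```

Semantic labels would then be read from the top-ranked image and passed to the OSD matching stage.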

Findings

The results illustrate that the fine-tuned ResNet152 is the best choice to search for similar images at the visual feature level, and OSD is the effective method to search for the relevant images at the semantic level. After re-ranking based on DivScore, the diversification of search results is improved.

Originality/value

This study collects and builds the Dunhuang mural data set and proposes an effective MVS framework for Dunhuang murals to protect and inherit Dunhuang cultural heritage. Similar, relevant and diversified Dunhuang murals are searched to meet different demands.

Details

The Electronic Library, vol. 40 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 14 January 2021

Xiaoguang Wang, Ningyuan Song, Xuemei Liu and Lei Xu

Abstract

Purpose

To meet the emerging demand for fine-grained annotation and semantic enrichment of cultural heritage images, this paper proposes a new approach that can transcend the boundary of information organization theory and Panofsky's iconography theory.

Design/methodology/approach

After a systematic review of semantic data models for organizing cultural heritage images and a comparative analysis of the concept and characteristics of deep semantic annotation (DSA) and indexing, an integrated DSA framework for cultural heritage images as well as its principles and process was designed. Two experiments were conducted on two mural images from the Mogao Caves to evaluate the DSA framework's validity based on four criteria: depth, breadth, granularity and relation.

Findings

Results showed that the proposed DSA framework not only included image metadata but also represented the storyline contained in the images by integrating domain terminology, ontology, thesaurus, taxonomy and natural language description into a multilevel structure.

Originality/value

DSA can reveal the aboutness, ofness and isness information contained within images, which can thus meet the demand for semantic enrichment and retrieval of cultural heritage images at a fine-grained level. This method can also help contribute to building a novel infrastructure for the increasing scholarship of digital humanities.

Details

Journal of Documentation, vol. 77 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 12 January 2015

Allen C Benson

Abstract

Purpose

The purpose of this paper is to survey the treatment of relationships, relationship expressions and the ways in which they manifest themselves in image descriptions.

Design/methodology/approach

The term “relationship” is construed in the broadest possible way to include spatial relationships (“to the right of”), temporal (“in 1936,” “at noon”), meronymic (“part of”) and attributive (“has color,” “has dimension”) relationships. The interaction of these vaguely delimited categories with image information, image creation and description in libraries and archives is complex and in need of explanation.

Findings

The review brings into question many generally held beliefs about the relationship problem such as the belief that the semantics of relationships are somehow embedded in the relationship term itself and that image search and retrieval solutions can be found through refinement of word-matching systems.

Originality/value

This review has no hope of systematically examining all evidence in all disciplines pertaining to this topic. It instead focusses on a general description of a theoretical treatment in Library and Information Science.

Details

Journal of Documentation, vol. 71 no. 1
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 5 May 2023

Nguyen Thi Dinh, Nguyen Thi Uyen Nhi, Thanh Manh Le and Thanh The Van

Abstract

Purpose

The problem of image retrieval and image description exists in various fields. In this paper, a model of content-based image retrieval and image content extraction based on the KD-Tree structure was proposed.

Design/methodology/approach

A Random Forest was built to classify the objects in each image on the basis of a balanced multibranch KD-Tree structure. For that purpose, a KD-Tree was generated by the Random Forest to retrieve a set of similar images for an input image. The KD-Tree is also applied to determine a relationship word at its leaves, extracting the relationships between objects in the input image. The content of the input image is then described from the class names and the relationships between objects.
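
As an illustration of KD-Tree-based similar-image retrieval, the sketch below builds a standard two-branch KD-tree over toy feature vectors and answers a nearest-neighbour query; the paper's balanced multibranch KD-Tree generated by a Random Forest is a more elaborate structure than this:

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Recursively build a KD-tree: split on the median along one axis per level."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, depth=0, best=None):
    """Return the indexed point closest to the query (Euclidean distance)."""
    if node is None:
        return best
    point = node["point"]
    if best is None or np.linalg.norm(point - query) < np.linalg.norm(best - query):
        best = point
    axis = depth % len(query)
    near, far = ((node["left"], node["right"]) if query[axis] < point[axis]
                 else (node["right"], node["left"]))
    best = nearest(near, query, depth + 1, best)
    # Descend the far branch only if the splitting plane is closer than the best hit.
    if abs(point[axis] - query[axis]) < np.linalg.norm(best - query):
        best = nearest(far, query, depth + 1, best)
    return best

# Toy feature vectors for an indexed image collection (one row per image).
features = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1], [5.0, 5.0]])
tree = build_kdtree(features)
hit = nearest(tree, np.array([1.0, 1.05]))
# hit → array([1.0, 1.0]), the most similar indexed "image"
```

Querying for the k most similar images, as in the paper, extends this by keeping a bounded priority queue of the k best candidates instead of a single best point.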

Findings

A model of image retrieval and image content extraction was proposed on this theoretical basis; experiments were conducted on the multi-object image datasets Microsoft COCO and Flickr, achieving average image retrieval precisions of 0.9028 and 0.9163, respectively. The experimental results were compared with those of other works on the same datasets to demonstrate the effectiveness of the proposed method.

Originality/value

A balanced multibranch KD-Tree structure was built to apply to relationship classification on the basis of the original KD-Tree structure. Then, KD-Tree Random Forest was built to improve the classifier performance and retrieve a set of similar images for an input image. Concurrently, the image content was described in the process of combining class names and relationships between objects.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords
