Search results

1 – 10 of 54
Article
Publication date: 16 August 2019

Neda Tadi Bani and Shervan Fekri-Ershad

Abstract

Purpose

Large amounts of data are stored in image format, and image retrieval from bulk databases has become a hot research topic. An alternative method for efficient image retrieval is proposed based on a combination of texture and colour information. The main purpose of this paper is to propose a new content-based image retrieval approach using a combination of colour and texture information in the spatial and transform domains jointly.

Design/methodology/approach

Various methods have been proposed for image retrieval that try to extract image content based on texture, colour and shape. The proposed image retrieval method extracts global and local texture and colour information in both the spatial and frequency domains. The image is first filtered by a Gaussian filter, then co-occurrence matrices are constructed in different directions and statistical features are extracted. The purpose of this phase is to extract noise-resistant local textures. A quantised histogram is then produced to extract global colour information in the spatial domain. Also, Gabor filter banks are used to extract local texture features in the frequency domain. After concatenating the extracted features, retrieval is performed using the normalised Euclidean distance.
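
As a rough illustration of this kind of pipeline (not the authors' exact implementation), the sketch below computes Gaussian-smoothed co-occurrence statistics in four directions, a quantised colour histogram and a small Gabor filter bank response, and compares descriptors with a normalised Euclidean distance; all filter parameters and bin counts are illustrative assumptions.

```python
# Illustrative sketch only: parameters, bin counts and feature choices are
# assumptions, not the settings reported in the paper.
import numpy as np
from skimage import img_as_ubyte
from skimage.color import rgb2gray
from skimage.filters import gaussian, gabor
from skimage.feature import graycomatrix, graycoprops

def describe(image_rgb):
    """Concatenated colour/texture descriptor for an RGB uint8 image."""
    # Noise-resistant base: Gaussian-filtered grey-level image.
    grey = img_as_ubyte(gaussian(rgb2gray(image_rgb), sigma=1.0))

    # Local texture: co-occurrence matrices in four directions (32 grey levels)
    # summarised by a few statistical properties.
    glcm = graycomatrix(grey // 8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=32, symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ("contrast", "homogeneity", "energy", "correlation")])

    # Global colour: quantised histogram in the spatial domain (4 bins per channel).
    colour, _ = np.histogramdd(image_rgb.reshape(-1, 3), bins=(4, 4, 4),
                               range=((0, 256),) * 3)
    colour = colour.ravel() / colour.sum()

    # Frequency-domain texture: mean magnitude of a small Gabor filter bank.
    gab = [np.abs(gabor(grey, frequency=f, theta=t)[0]).mean()
           for f in (0.1, 0.3) for t in (0, np.pi / 2)]

    return np.hstack([texture, colour, gab])

def normalised_euclidean(query, candidate, scale):
    """Euclidean distance after per-dimension normalisation (e.g. by feature std)."""
    return float(np.linalg.norm((query - candidate) / scale))
```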

Findings

The performance of the proposed method is evaluated using precision, recall and run-time measures on the Simplicity database, and it is compared with many efficient methods in this field. The comparison results showed that the proposed method provides higher precision than many existing methods.

Originality/value

The comparison results showed that the proposed method provides higher precision than many existing methods. Rotation invariance, scale invariance and low sensitivity to noise are some advantages of the proposed method. The run time of the proposed method is within the usual time frame of algorithms in this domain, which indicates that it can be used online.

Details

The Electronic Library, vol. 37 no. 4
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 13 May 2021

Chanattra Ammatmanee and Lu Gan

Abstract

Purpose

Due to the worldwide growth of digital image sharing and the maturity of the tourism industry, the vast and growing collections of digital images have become a challenge for those who use and/or manage these image data across tourism settings. To overcome the image indexing task at lower labour cost and improve the image retrieval task with fewer human errors, the content-based image retrieval (CBIR) technique has been investigated for the tourism domain in particular. This paper aims to review the relevant literature in the field to understand these previous works and identify research gaps for future directions.

Design/methodology/approach

A systematic and comprehensive review of CBIR studies in tourism from the year 2010 to 2019, focussing on journal articles and conference proceedings in reputable online databases, is conducted by taking a comparative approach to critically analyse and address the trends of each fundamental element in these research experiments.

Findings

Based on the review of the literature, the trend in CBIR studies in tourism is to improve image representation and retrieval by advancing existing feature extraction techniques, contributing novel techniques to the feature extraction process through fine-tuned fusion features, and improving the image query of CBIR systems. Co-authorship, the tourist attraction sector and fusion image features have been the focus. Nonetheless, other tourism sectors and additional available image databases could be further explored.

Originality/value

The absence of any existing academic review of CBIR studies in tourism makes this paper a novel contribution.

Details

The Electronic Library, vol. 39 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 8 July 2010

Chris Town and Karl Harrison

Abstract

Purpose

Content‐based image retrieval (CBIR) technologies offer many advantages over purely text‐based image search. However, one of the drawbacks associated with CBIR is the increased computational cost arising from tasks such as image processing, feature extraction, image classification, and object detection and recognition. Consequently CBIR systems have suffered from a lack of scalability, which has greatly hampered their adoption for real‐world public and commercial image search. At the same time, paradigms for large‐scale heterogeneous distributed computing such as grid computing, cloud computing, and utility‐based computing are gaining traction as a way of providing more scalable and efficient solutions to large‐scale computing tasks.

Design/methodology/approach

This paper presents an approach in which a large distributed processing grid has been used to apply a range of CBIR methods to a substantial number of images. By massively distributing the required computational task across thousands of grid nodes, very high throughput has been achieved at relatively low overheads.
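
The design point here is aggregate throughput rather than per-image latency. A minimal single-machine analogue of the same idea, with a process pool standing in for grid nodes and a placeholder analysis function rather than the Imense pipeline, might look like this:

```python
# Single-machine analogue of the grid set-up: a pool of worker processes, each
# running an independent per-image job.  The directory name and analyse() body
# are placeholders, not the Imense system.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def analyse(path: Path) -> tuple[str, int]:
    """Stand-in for per-image CBIR analysis (segmentation, classification, ...)."""
    data = path.read_bytes()           # real system: decode, extract features, index
    return path.name, len(data)

if __name__ == "__main__":
    images = sorted(Path("images").glob("*.jpg"))   # hypothetical local corpus
    # Every image is an independent job, so throughput scales with the number of
    # workers even though the time to process any single image is unchanged.
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name, size in pool.map(analyse, images, chunksize=16):
            print(name, size)
```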

Findings

This has allowed about 25 million high-resolution images to be analysed and indexed thus far, while using just two servers for storage and job submission. The CBIR system was developed by Imense Ltd and is based on automated analysis and recognition of image content using a semantic ontology. It features a range of image-processing and analysis modules, including image segmentation, region classification, scene analysis, object detection and face recognition methods.

Originality/value

In the case of content-based image analysis, the primary performance criterion is the overall throughput achieved by the system, in terms of the number of images that can be processed over a given time frame, irrespective of the time taken to process any given image. As such, grid processing has great potential for massively parallel content-based image retrieval and other tasks with similar performance requirements.

Details

Aslib Proceedings, vol. 62 no. 4/5
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin

Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches for dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; thus, obtaining an effective image feature expression is the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not performed systematic research. This paper aims to introduce the basic image low-level feature expression techniques for color, texture and shape features that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principles of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used image low-level expression algorithms are implemented, and their benefits and drawbacks are summarized. Third, the effectiveness of global and local features in image retrieval is analyzed, including some classical models and illustrations provided by part of the authors' experiments. Fourth, sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
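
As a small illustration of the global-versus-local distinction discussed in the review (not code from the survey itself), the sketch below computes the same quantised colour histogram over a whole image and over a 3 x 3 grid of blocks, with a Euclidean distance as the similarity measure; the bin counts and grid size are arbitrary choices.

```python
# Illustrative only: bin counts and grid size are arbitrary, and the histogram
# stands in for any low-level colour feature discussed in the survey.
import numpy as np

def colour_hist(block, bins=8):
    """Quantised colour histogram of an RGB uint8 block, L1-normalised."""
    h, _ = np.histogramdd(block.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / max(h.sum(), 1.0)

def global_descriptor(image):
    # One histogram over the whole image: compact, but discards spatial layout.
    return colour_hist(image)

def local_descriptor(image, grid=3):
    # One histogram per block of a grid x grid partition, concatenated:
    # a longer vector, but it keeps coarse spatial information.
    H, W, _ = image.shape
    blocks = [image[i * H // grid:(i + 1) * H // grid,
                    j * W // grid:(j + 1) * W // grid]
              for i in range(grid) for j in range(grid)]
    return np.hstack([colour_hist(b) for b in blocks])

def euclidean(a, b):
    """Simple distance-based similarity measure between two descriptors."""
    return float(np.linalg.norm(a - b))
```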

Findings

The core of this survey is to review the state of image low-level expression methods and to study the pros and cons of each method, the occasions to which it applies and certain implementation measures. This review notes that single-feature descriptions capture only particular image peculiarities and may lead to unsatisfactory retrieval capability, and that they still face considerable limitations and challenges in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for image low-level feature expression but also supplies a pertinent reference for those engaging in research regarding image feature extraction.

Article
Publication date: 24 January 2024

Chung-Ming Lo

Abstract

Purpose

An increasing number of images are generated daily, and images are gradually becoming a search target. Content-based image retrieval (CBIR) is helpful for users to express their requirements using an image query. Nevertheless, determining whether the retrieval system can provide convenient operation and relevant retrieval results is challenging. A CBIR system based on deep learning features was proposed in this study to effectively search and navigate images in digital articles.

Design/methodology/approach

Convolutional neural networks (CNNs) were used as the feature extractors in the author's experiments. Using pretrained parameters, the training time and retrieval time were reduced. Different CNN features were extracted from the constructed image databases consisting of images taken from the National Palace Museum Journals Archive and were compared in the CBIR system.
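
A minimal sketch of this kind of retrieval pipeline, assuming a pretrained DenseNet201 from torchvision as the feature extractor and cosine similarity for ranking (the author's exact layer choice, preprocessing and indexing are not reproduced here), could look as follows:

```python
# Sketch under stated assumptions: torchvision's pretrained DenseNet201 with
# the classifier removed, cosine similarity over unit-normalised embeddings.
import torch
from torchvision import models
from PIL import Image

weights = models.DenseNet201_Weights.IMAGENET1K_V1
backbone = models.densenet201(weights=weights)
backbone.classifier = torch.nn.Identity()        # keep the pooled 1920-d features
backbone.eval()
preprocess = weights.transforms()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()                          # unit norm -> dot product = cosine

def top_k(query_path: str, gallery, k: int = 10):
    """gallery: list of (path, embedding) pairs built offline over the database."""
    q = embed(query_path)
    scores = [(float(q @ emb), p) for p, emb in gallery]
    return sorted(scores, reverse=True)[:k]
```

Because the pretrained parameters are used as-is, building the gallery is a one-off offline pass and only the query image needs to be embedded at search time, which is what keeps the reported query time low.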

Findings

DenseNet201 achieved the best performance, with a top-10 mAP of 89% and a query time of 0.14 s.

Practical implications

The CBIR homepage displays image categories showing the content of the database and provides default query images. After retrieval, the results show the metadata of the retrieved images and link back to the original pages.

Originality/value

With the interface and retrieval demonstration, a novel image-based reading mode can be established via the CBIR and links to the original images and contextual descriptions.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 November 2005

Mohamed Hammami, Youssef Chahir and Liming Chen

Abstract

Along with the ever-growing Web is the proliferation of objectionable content, such as sex, violence, racism, etc. We need efficient tools for classifying and filtering undesirable web content. In this paper, we investigate this problem through WebGuard, our automatic machine-learning-based pornographic website classification and filtering system. As the Internet becomes more and more visual and multimedia-rich, as exemplified by pornographic websites, we focus our attention here on the use of skin-color-related visual content-based analysis, along with textual and structural content-based analysis, for improving pornographic website filtering. While most commercial filtering products on the marketplace are mainly based on textual content-based analysis, such as indicative keyword detection or manually collected black-list checking, the originality of our work resides in the addition of structural and visual content-based analysis to the classical textual content-based analysis, along with several major data-mining techniques for learning and classifying. Experimented on a testbed of 400 websites, including 200 adult sites and 200 non-pornographic ones, WebGuard, our web filtering engine, scored a 96.1% classification accuracy rate when only textual and structural content-based analysis is used, and a 97.4% classification accuracy rate when skin-color-related visual content-based analysis is applied in addition. Further experiments on a black list of 12 311 adult websites manually collected and classified by the French Ministry of Education showed that WebGuard scored an 87.82% classification accuracy rate when using only textual and structural content-based analysis, and a 95.62% classification accuracy rate when the visual content-based analysis is applied in addition. The basic framework of WebGuard can be applied to other website categorization problems which combine, as most websites do today, textual and visual content.
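
As an illustration of the visual component only (the chrominance thresholds below are a commonly quoted skin-detection heuristic, not WebGuard's own learned model), a skin-pixel ratio feature that could be combined with textual and structural features might be sketched as:

```python
# Illustrative skin-colour heuristic only: the Cb/Cr thresholds below are a
# widely used rule of thumb, not the model learned by WebGuard.
import numpy as np
from PIL import Image

def skin_ratio(path: str) -> float:
    """Fraction of pixels whose chrominance falls in a typical skin-tone range."""
    ycbcr = np.asarray(Image.open(path).convert("YCbCr"), dtype=np.uint8)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    skin = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    return float(skin.mean())

# This ratio is only one visual feature; in the paper's framework it would be
# combined with textual and structural features before classification.
```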

Details

International Journal of Web Information Systems, vol. 1 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 June 2012

Amir H. Meghdadi and James F. Peters

Abstract

Purpose

The purpose of this paper is to demonstrate the effectiveness and advantages of using perceptual tolerance neighbourhoods in tolerance space‐based image similarity measures and its application in content‐based image classification and retrieval.

Design/methodology/approach

The proposed method in this paper is based on a set-theoretic approach, where an image is viewed as a set of local visual elements. The method also includes a tolerance relation that detects the similarity between pairs of elements if the difference between the corresponding feature vectors is less than a threshold ε ∈ (0, 1).
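
A minimal sketch of such a tolerance relation, assuming feature vectors already normalised to [0, 1] and an illustrative threshold, is shown below; the neighbourhood of an element is the set of all elements whose feature vectors differ from it by less than ε.

```python
# Minimal sketch: feature vectors are assumed to be scaled to [0, 1] so that a
# threshold eps in (0, 1) is meaningful; the data here are random placeholders.
import numpy as np

def tolerance_neighbourhood(index: int, features: np.ndarray, eps: float = 0.1):
    """Indices of all elements whose feature vectors lie within eps of element `index`."""
    distances = np.linalg.norm(features - features[index], axis=1)
    return np.flatnonzero(distances < eps)

rng = np.random.default_rng(0)
F = rng.random((50, 4))                  # 50 local visual elements, 4 features each
print(tolerance_neighbourhood(0, F, eps=0.25))
```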

Findings

It is shown that tolerance space-based methods can be successfully used in a complete content-based image retrieval (CBIR) system. It is also shown that perceptual tolerance neighbourhoods can replace tolerance classes in CBIR, resulting in higher accuracy and fewer computations.

Originality/value

The main contribution of this paper is the introduction of perceptual tolerance neighbourhoods instead of tolerance classes in a new form of the Henry-Peters tolerance-based nearness measure (tNM) and a new neighbourhood-based tolerance-covering nearness measure (tcNM). Moreover, this paper presents a side-by-side comparison of the tolerance space-based methods with other published methods on a test dataset of images.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 5 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 11 March 2014

Elaine Menard and Margaret Smithglass

Abstract

Purpose

The purpose of this paper is to present the results of the first phase of a research project that aims to develop a bilingual interface for the retrieval of digital images. The main objective of this extensive exploration was to identify the characteristics and functionalities of existing search interfaces and similar tools available for image retrieval.

Design/methodology/approach

An examination of 159 resources that offer image retrieval was carried out. First, general search functionalities offered by content-based image retrieval systems and text-based systems are described. Second, image retrieval in a multilingual context is explored. Finally, the search functionalities provided by four types of organisations (libraries, museums, image search engines and stock photography databases) are investigated.

Findings

The analysis of functionalities offered by online image resources revealed a very high degree of consistency within the types of resources examined. The resources found to be the most navigable and interesting to use were those built with standardised vocabularies combined with a clear, compact and efficient user interface. The analysis also highlights that many search engines are equipped with multiple language support features. A translation device, however, is implemented in only a few search engines.

Originality/value

The examination of best practices for image retrieval and the analysis of real users' expectations, which will be obtained in the next phase of the research project, constitute the foundation upon which the search interface model that the authors propose to develop is based. It also provides valuable suggestions and guidelines for search engine researchers, designers and developers.

Article
Publication date: 26 January 2022

K. Venkataravana Nayak, J.S. Arunalatha, G.U. Vasanthakumar and K.R. Venugopal

Abstract

Purpose

The analysis of multimedia content is being applied in various real-time computer vision applications. Digital images constitute a significant part of multimedia content. The representation of digital images as interpreted by humans is subjective and complex in nature; hence, searching for relevant images in archives is difficult. Thus, electronic image analysis strategies have become effective tools in the process of image interpretation.

Design/methodology/approach

The traditional approach is text-based, i.e. searching for images using textual annotations. Manually annotating images is time-consuming, and it is difficult to reduce the dependency on textual annotations if the archive consists of a large number of samples. Therefore, content-based image retrieval (CBIR) is adopted, in which the high-level visual content of images is represented in terms of feature vectors containing numerical values. It is a commonly used approach to understanding the content of query images when retrieving relevant images. Still, the performance is less than optimal due to the semantic gap between the image content representation and the human visual understanding perspective, caused by photometric and geometric variations of the image content and by occlusions in search environments.

Findings

The authors propose an image retrieval framework that generates a semantic response through feature extraction with a convolutional network and optimization of the extracted features using the adaptive moment estimation (Adam) algorithm, towards enhancing the retrieval performance.
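
A rough sketch of this idea, assuming a frozen ImageNet-pretrained backbone (ResNet18 here purely as a stand-in for the paper's network) with a small embedding head trained by the Adam optimizer on labelled images, might look like this; the dimensions, dataset and hyperparameters are illustrative only.

```python
# Rough sketch under stated assumptions: frozen pretrained backbone (ResNet18 as
# a stand-in for the paper's network), small embedding head, Adam optimizer.
import torch
from torch import nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()              # expose the 512-d convolutional features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(512, 128)               # retrieval embedding
classifier = nn.Linear(128, 10)          # e.g. 10 categories, Corel-1k style
optimiser = torch.optim.Adam(list(head.parameters()) + list(classifier.parameters()),
                             lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        feats = backbone(images)         # CNN feature extraction
    logits = classifier(head(feats))
    loss = loss_fn(logits, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()                     # Adam update of the feature-projection head
    return float(loss)
```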

Originality/value

The proposed framework is tested on the Corel-1k and ImageNet datasets, achieving an accuracy of 98% and 96%, respectively, compared to the state-of-the-art approaches.

Details

International Journal of Intelligent Unmanned Systems, vol. 11 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 31 July 2007

Peter G.B. Enser, Christine J. Sandom, Jonathon S. Hare and Paul H. Lewis

Abstract

Purpose

To provide a better‐informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval techniques.

Design/methodology/approach

Within an ongoing project, a broad spectrum of operational image retrieval activity has been surveyed, and, from a number of collaborating institutions, a test collection assembled which comprises user requests, the images selected in response to those requests, and their associated metadata. This has provided the evidence base upon which to make informed observations on the efficacy of cutting‐edge automatic annotation techniques which seek to integrate the text‐based and content‐based image retrieval paradigms.

Findings

Evidence from the real‐world practice of image retrieval highlights the existence of a generic‐specific continuum of object identification, and the incidence of temporal, spatial, significance and abstract concept facets, manifest in textual indexing and real‐query scenarios but often having no directly visible presence in an image. These factors combine to limit the functionality of current semantic image retrieval techniques, which interpret only visible features at the generic extremity of the generic‐specific continuum.

Research limitations/implications

The project is concerned with the traditional image retrieval environment in which retrieval transactions are conducted on still images which form part of managed collections. The possibilities offered by ontological support for adding functionality to automatic annotation techniques are considered.

Originality/value

The paper offers fresh insights into the challenge of migrating content‐based image retrieval from the laboratory to the operational environment, informed by newly‐assembled, comprehensive, live data.

Details

Journal of Documentation, vol. 63 no. 4
Type: Research Article
ISSN: 0022-0418
