Search results

1 – 10 of over 9000
Article
Publication date: 29 March 2011

Xianqiang Zhu and Zhenfeng Shao

Abstract

Purpose

The purpose of this paper is to analyze the spectral influence of the Radon transform and the log-polar transform when rotation and scale effects are eliminated. The average retrieval performance of the wavelet and non-subsampled contourlet transform (NSCT) with different retrieval parameters is also studied.

Design/methodology/approach

The authors designed a multi-scale and multi-orientation texture transform spectrum, together with a rotation-invariant feature vector and its measurement criteria. A new two-level coarse-to-fine rotation- and scale-invariant texture retrieval algorithm based on non-parametric statistical features was then proposed. Experiments on the VisTex texture database show that the proposed algorithm is well suited to capturing main orientations and describing detail information.
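
The core trick the authors build on is a classical one: after taking the Fourier magnitude (which removes translation), a log-polar resampling turns rotation and scaling into shifts along the angle and log-radius axes, so shift-insensitive statistics become rotation- and scale-invariant features. A minimal sketch of that property, assuming scikit-image and NumPy, and making no claim to reproduce the paper's Radon-based pipeline:

```python
import numpy as np
from skimage.transform import warp_polar

def logpolar_spectrum_feature(image, radius=64):
    # Fourier magnitude: invariant to spatial translation of the texture.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    # Log-polar warp: rotation -> circular shift along the angle axis,
    # scaling -> (approximate) shift along the log-radius axis.
    lp = warp_polar(spectrum, radius=radius, scaling='log')
    # 1D FFT magnitudes of the axis profiles discard those shifts,
    # leaving an (approximately) rotation/scale-invariant descriptor.
    ang = np.abs(np.fft.fft(lp.mean(axis=1)))
    rad = np.abs(np.fft.fft(lp.mean(axis=0)))
    return np.concatenate([ang, rad])
```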

Findings

According to the experimental results, the combination of this two-level progressive retrieval strategy and the multi-scale analysis method effectively improves retrieval efficiency compared with traditional algorithms while maintaining high precision.

Originality/value

The paper presents a novel algorithm for rotation- and scale-invariant texture retrieval.

Details

Sensor Review, vol. 31 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 8 July 2010

Elaine Ménard

Abstract

Purpose

This paper seeks to examine image retrieval within two different contexts: a monolingual context, where the language of the query is the same as the indexing language, and a multilingual context, where the language of the query differs from the indexing language. The study also compares two approaches to indexing ordinary images representing common objects: traditional image indexing using a controlled vocabulary and free image indexing using uncontrolled vocabulary.

Design/methodology/approach

This research uses three data collection methods. First, an analysis of the indexing terms examined the multiplicity of term types assigned to images. Second, a simulation of the retrieval process involving a set of 30 images was performed with 60 participants; the retrieval performance of each indexing approach was quantified using usability measures, that is, effectiveness, efficiency and user satisfaction. Finally, a questionnaire gathered information on searcher satisfaction during and after the retrieval process.

Findings

The results of this research are twofold. The analysis of the indexing terms associated with the 3,950 images provides a comprehensive description of the characteristics of the four non-combined indexing forms used in the study. The retrieval simulation results offer information about the relative performance of the six indexing forms (combined and non-combined) in terms of effectiveness, efficiency (temporal and human) and image searcher satisfaction.

Originality/value

The findings of the study suggest that, in the near future, information systems could benefit from allowing greater coexistence of controlled and uncontrolled vocabularies (the latter resulting, for example, from collaborative image tagging) and from giving users the possibility to participate dynamically in the image-indexing process, in a more user-centred way.

Details

Aslib Proceedings, vol. 62 no. 4/5
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 11 March 2014

Elaine Menard and Margaret Smithglass

Abstract

Purpose

The purpose of this paper is to present the results of the first phase of a research project that aims to develop a bilingual interface for the retrieval of digital images. The main objective of this extensive exploration was to identify the characteristics and functionalities of existing search interfaces and similar tools available for image retrieval.

Design/methodology/approach

An examination of 159 resources that offer image retrieval was carried out. First, general search functionalities offered by content-based image retrieval systems and text-based systems are described. Second, image retrieval in a multilingual context is explored. Finally, the search functionalities provided by four types of organisations (libraries, museums, image search engines and stock photography databases) are investigated.

Findings

The analysis of functionalities offered by online image resources revealed a very high degree of consistency within the types of resources examined. The resources found to be the most navigable and interesting to use were those built with standardised vocabularies combined with a clear, compact and efficient user interface. The analysis also highlights that many search engines are equipped with multiple language support features. A translation device, however, is implemented in only a few search engines.

Originality/value

The examination of best practices for image retrieval, together with the analysis of real users' expectations to be obtained in the next phase of the research project, constitutes the foundation of the search interface model that the authors propose to develop. It also provides valuable suggestions and guidelines for search engine researchers, designers and developers.

Article
Publication date: 9 April 2019

Aabid Hussain, Sumeer Gul, Tariq Ahmad Shah and Sheikh Shueb

Abstract

Purpose

The purpose of this study is to explore the retrieval effectiveness of three image search engines (ISEs) – Google Images, Yahoo Image Search and Picsearch – in terms of their image retrieval capability. It is an effort to carry out a Cranfield experiment to determine how efficient the commercial giants of image search are and how efficient an image-specific search engine is.

Design/methodology/approach

The keyword search feature of the three ISEs – Google Images, Yahoo Image Search and Picsearch – was used to run searches with the keyword captions of photos as query terms. Ten selected images acted as a testbed for the study, and images were searched for in accordance with the features of the testbed. The features examined included size (1200 × 800), image format (JPEG/JPG) and the rank at which the original image was retrieved by the ISEs under study. To gauge overall retrieval effectiveness against these standards, only the first 50 result hits were checked. The retrieval efficiency of the selected ISEs was examined with respect to precision and relative recall.
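
As a concrete reading of the two measures used above, here is a minimal sketch of the standard precision and relative-recall bookkeeping (the paper's exact tallying procedure may differ); the cut-off of 50 hits follows the study's design:

```python
def precision(relevant_retrieved, hits_checked=50):
    # Share of checked hits that are relevant; the study checks
    # the first 50 results per query.
    return relevant_retrieved / hits_checked

def relative_recall(relevant_by_engine, relevant_by_all_engines):
    # Relative recall: this engine's relevant results as a share of
    # the pooled relevant results retrieved by all engines under study.
    return relevant_by_engine / relevant_by_all_engines

# Hypothetical example: 30 relevant hits in the first 50 results,
# out of 70 distinct relevant images pooled across the three ISEs.
print(precision(30))            # 0.6
print(relative_recall(30, 70))  # ~0.43
```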

Findings

Yahoo Image Search outscores Google Images and Picsearch in terms of both precision and relative recall. Regarding the other criteria – image size, image format and image rank in the search results – Google Images is ahead of the others.

Research limitations/implications

The study only takes into consideration the basic image search feature, i.e. text-based search.

Practical implications

The study implies that image search engines should focus on relevant image descriptions. By evaluating text-based image retrieval facilities, it offers users a basis for selecting the best among the available ISEs.

Originality/value

The study provides an insight into the retrieval effectiveness of the three ISEs and is one of the few studies to gauge the retrieval effectiveness of ISEs. It produced key findings that are important for ISE users, researchers and the Web image search industry, and that will also prove useful for search engine companies seeking to improve their services.

Details

The Electronic Library, vol. 37 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 26 August 2014

Xing Wang, Zhenfeng Shao, Xiran Zhou and Jun Liu

Abstract

Purpose

This paper aims to present a novel feature design that is able to precisely describe salient objects in images. With the development of space survey, sensor and information acquisition technologies, increasingly complex objects appear in high-resolution remote sensing images, and traditional visual features are no longer precise enough to describe such images.

Design/methodology/approach

A novel remote sensing image retrieval method based on visual salient point (VSP) features is proposed in this paper. A key point detector and descriptor are used to extract critical features and their descriptors from remote sensing images. A visual attention model is adopted to compute a saliency map of each image, separating the salient regions from the background. The key points falling within the salient regions are then extracted and defined as VSPs, from which the VSP features are constructed. The similarity between images is measured using these features.
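
A rough sketch of the keypoint-filtering step this describes, under explicit assumptions: ORB as the key point detector/descriptor and OpenCV's spectral-residual saliency (from opencv-contrib-python) as the visual attention model, since the abstract names neither; the paper's actual choices may differ:

```python
import cv2

def visual_salient_points(image_bgr, thresh=0.5):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Generic key point detector/descriptor; ORB is an assumption here.
    keypoints, descriptors = cv2.ORB_create().detectAndCompute(gray, None)
    # Spectral-residual saliency as the visual attention model -- also
    # an assumption (requires opencv-contrib-python).
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal_map = saliency.computeSaliency(gray)  # float map in [0, 1]
    # Keep only key points that fall inside salient regions: the VSPs.
    keep = [i for i, kp in enumerate(keypoints)
            if sal_map[int(kp.pt[1]), int(kp.pt[0])] > thresh]
    return [keypoints[i] for i in keep], descriptors[keep]
```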

Findings

According to the experimental results, VSP features are more precise and stable than traditional visual features in representing diverse remote sensing images, and the proposed method outperforms traditional methods in image retrieval precision.

Originality/value

This paper presents a novel remote sensing image retrieval method based on VSP features.

Details

Sensor Review, vol. 34 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 13 September 2018

Yaghoub Norouzi and Hoda Homavandi

Abstract

Purpose

The purpose of this paper is to investigate image search and retrieval problems in selected search engines in relation to Persian writing style challenges.

Design/methodology/approach

This is an applied study that uses an evaluative research method to answer its questions. Its aim is to explore the morphological and semantic problems of the Persian language in connection with image search and retrieval in three major and widespread search engines: Google, Yahoo and Bing. Data were collected with a researcher-designed checklist and analyzed using descriptive and inferential statistics.

Findings

The results indicate that the Google, Yahoo and Bing search engines do not pay enough attention to the morphological and semantic features of the Persian language in image search and retrieval. The research reveals six groups of Persian language features that pose the major problems in this area: derived words, derived/compound words, Persian and Arabic plural words, use of the dotted T, use of spoken language, and polysemy. In addition, the results suggest that Google is the best of the three search engines in terms of compatibility with Persian language features.
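
Purely as an illustration of the kind of writing-style variation at issue (not the study's method), two well-known Persian normalisation cases are the Arabic versus Persian code points for yeh and kaf, and the Arabic teh marbuta, plausibly the "dotted T" above, appearing where Persian uses heh; an engine that does not fold these treats equivalent spellings as different queries:

```python
# Hypothetical folding table; the code points are standard Unicode.
PERSIAN_FOLDING = str.maketrans({
    "\u064A": "\u06CC",  # ARABIC LETTER YEH -> ARABIC LETTER FARSI YEH
    "\u0643": "\u06A9",  # ARABIC LETTER KAF -> ARABIC LETTER KEHEH
    "\u0629": "\u0647",  # ARABIC LETTER TEH MARBUTA -> ARABIC LETTER HEH
})

def normalise(query: str) -> str:
    # Fold equivalent variants to one form so that differently typed
    # queries match the same indexed terms.
    return query.translate(PERSIAN_FOLDING)
```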

Originality/value

This study investigates some new aspects of the subject by combining the morphological and semantic aspects of the Persian language with image search and retrieval. It is therefore an interdisciplinary piece of research whose results can both suggest solutions and support similar research in this subject area. The study also fills a gap in the research conducted so far on the Farsi language, especially in image search and retrieval. Moreover, its findings can help bridge the gap between users' queries and what search engines (systems) retrieve, and its methodology provides a framework for further research on image search and retrieval in databases and search engines.

Details

Online Information Review, vol. 42 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 6 April 2012

Daniela Petrelli and Paul Clough

Abstract

Purpose

This paper aims to describe a study of the queries generated from a user experiment for cross‐language information retrieval (CLIR) from a historic image archive.

Design/methodology/approach

A controlled lab-based user study was carried out using a prototype Italian-English image retrieval system. Participants were asked to carry out searches for 16 images provided to them, a known-item search task, and the Italian-speaking users generated 618 queries in total. Users' interactions with the system were recorded, and the queries were analysed manually, both quantitatively and qualitatively. The results were used to formulate recommendations for the future development of cross-language retrieval systems for digital image libraries.

Findings

Results highlight the diversity of requests for similar visual content and the weaknesses of machine translation for query translation. Through the manual translation of queries, the authors show the benefits of using high-quality translation resources. The results also show the individual characteristics of users performing known-item searches and the overlap between query terms and structured image captions, highlighting users' tendency to search for objects in the foreground of an image.

Research limitations/implications

This research looks in depth at one case of interaction and one image repository. Despite this limitation, the results discussed are likely to be valid across other languages and image repositories.

Practical implications

Developing effective systems requires studying users' search behaviours, particularly in digital image libraries.

Originality/value

The growing quantity of digital visual material in digital libraries offers the potential to apply techniques from CLIR to provide cross‐language information access services. The value of this paper is in the provision of empirical evidence to support recommendations for effective cross‐language image retrieval system design.

Article
Publication date: 5 May 2023

Nguyen Thi Dinh, Nguyen Thi Uyen Nhi, Thanh Manh Le and Thanh The Van

Abstract

Purpose

The problem of image retrieval and image description arises in various fields. In this paper, a model of content-based image retrieval and image content extraction based on the KD-Tree structure is proposed.

Design/methodology/approach

A Random Forest structure was built to classify the objects in each image on the basis of a balanced multibranch KD-Tree structure. For this purpose, a KD-Tree structure was generated by the Random Forest to retrieve a set of images similar to an input image. The KD-Tree structure is also applied to determine relationship words at the leaves, extracting the relationships between the objects in an input image. The content of an input image is then described based on the class names and the relationships between objects.
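
The paper's balanced multibranch KD-Tree built by a Random Forest is a custom structure; as a baseline sketch of the retrieval step alone, here is plain KD-tree nearest-neighbour search over precomputed image feature vectors, assuming SciPy:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_index(features):
    # features: (n_images, d) array, one feature vector per image.
    return cKDTree(features)

def retrieve(index, query_vec, k=10):
    # Return the indices and distances of the k most similar images.
    dists, idx = index.query(query_vec, k=k)
    return idx, dists

# Hypothetical usage with random vectors standing in for real features.
index = build_index(np.random.rand(1000, 128))
idx, dists = retrieve(index, np.random.rand(128))
```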

Findings

A model of image retrieval and image content extraction was proposed on the basis of the theory above, and experiments were conducted on multi-object image datasets, Microsoft COCO and Flickr, yielding average image retrieval precisions of 0.9028 and 0.9163, respectively. The experimental results were compared with those of other works on the same datasets to demonstrate the effectiveness of the proposed method.

Originality/value

A balanced multibranch KD-Tree structure, extending the original KD-Tree structure, was built and applied to relationship classification. A KD-Tree Random Forest was then built to improve classifier performance and to retrieve a set of images similar to an input image. Concurrently, the image content was described by combining class names and the relationships between objects.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 22 November 2011

Bailing Zhang

Abstract

Purpose

Content-based image retrieval (CBIR) is an important research area concerned with automatically retrieving images of interest to the user from a large database. Owing to its many potential applications, facial image retrieval has received much attention in recent years. As in face recognition, finding an appropriate image representation is a vital step in a successful facial image retrieval system. Many efficient image feature descriptors have been proposed recently, and some have been applied to face recognition, so comparative studies of different feature descriptors in facial image retrieval are valuable. More importantly, how to fuse multiple features is a significant question that can have a substantial impact on the overall performance of a CBIR system. The purpose of this paper is to propose an efficient face image retrieval strategy.

Design/methodology/approach

In this paper, three feature description methods are investigated for facial image retrieval: the local binary pattern (LBP), the curvelet transform and the pyramid histogram of oriented gradients (PHOG). The large dimensionality of the extracted features is addressed by employing a manifold learning method called spectral regression. A decision-level fusion scheme, fuzzy aggregation, is applied to combine the distance metrics from the respective dimension-reduced feature spaces.
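
For flavour, here is a sketch of one of the three descriptors (a uniform LBP histogram via scikit-image) plus a toy decision-level fusion; an ordered weighted average stands in for the paper's fuzzy aggregation, which the abstract does not specify:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    # Uniform LBP yields P + 2 distinct codes; the normalised histogram
    # of codes is the image descriptor.
    lbp = local_binary_pattern(gray, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fused_distance(dists, weights=(0.5, 0.3, 0.2)):
    # Toy decision-level fusion of the three per-feature distances:
    # an ordered weighted average favouring the smallest distance.
    return float(np.dot(np.sort(dists), weights))
```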

Findings

Empirical evaluations on several face databases illustrate that dimension-reduced features are more efficient for facial retrieval and that the fuzzy aggregation fusion scheme offers much enhanced performance. A 98 per cent rank-1 retrieval accuracy was obtained for the AR faces and 91 per cent for the FERET faces, showing that the method is robust against variations such as pose and occlusion.

Originality/value

The proposed facial image retrieval method shows promising potential for real-world systems in many applications, particularly forensics and biometrics.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 4 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 6 November 2017

Yanti Idaya Aspura M.K. and Shahrul Azman Mohd Noah

Abstract

Purpose

The purpose of this study is to reduce the semantic distance by proposing a model that integrates indexes of textual and visual features via a multi-modality ontology and uses DBpedia to improve the comprehensiveness of the ontology and enhance semantic retrieval.

Design/methodology/approach

A multi-modality ontology-based approach was developed to integrate high-level concepts and low-level features, and the ontology base was integrated with DBpedia to enrich the knowledge resource. A complete ontology model was also developed to represent the domain of sport news, with image caption keywords and image features. Precision and recall were used as metrics to evaluate the effectiveness of the multi-modality approach, and the outputs were compared with those obtained using single-modality approaches (i.e. a textual ontology and a visual ontology).
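
A minimal sketch of the DBpedia-enrichment step, under assumptions: SPARQLWrapper against the public DBpedia endpoint, fetching a concept's abstract to enrich an ontology entry (the resource name is an arbitrary example; the paper does not describe its integration pipeline at this level):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def dbpedia_abstract(resource="Lionel_Messi", lang="en"):
    # resource: a DBpedia resource name (hypothetical example above).
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
          <http://dbpedia.org/resource/{resource}> dbo:abstract ?abstract .
          FILTER (lang(?abstract) = "{lang}")
        }}
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    return bindings[0]["abstract"]["value"] if bindings else None
```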

Findings

The results for ten queries show the superior performance of the multi-modality ontology-based IMR system integrated with DBpedia in retrieving images that match user queries. The system achieved 100 per cent precision for six of the queries and greater than 80 per cent precision for the other four. The text-based system achieved 100 per cent precision for only one query; all the other queries yielded precision below 50 per cent.

Research limitations/implications

This study focused only on the BBC Sport News collection from 2009.

Practical implications

The paper has implications for the development of ontology-based retrieval over image collections.

Originality/value

This study demonstrates the strength of using a multi-modality ontology integrated with DBpedia for image retrieval to overcome the deficiencies of text-based and single-modality ontology-based systems. The results validate the combination of semantic text-based retrieval, a multi-modality ontology and DBpedia as a useful model for reducing the semantic distance.

Details

The Electronic Library, vol. 35 no. 6
Type: Research Article
ISSN: 0264-0473
