Search results

1 – 10 of 531
Article
Publication date: 28 September 2021

Zahra Alvandi Poor, Mahdieh Mirzabeigi and Majid Nabavi

The purpose of this study is to identify the impact of verbal-visual cognitive styles on the level of satisfaction and behavior in the text-based and content-based search of Google Images

Abstract

Purpose

The purpose of this study is to identify the impact of verbal-visual cognitive styles on the level of satisfaction and behavior in the text-based and content-based search of Google Images.

Design/methodology/approach

Riding's cognitive style test and a satisfaction questionnaire were used as data collection tools. To collect data on image search behavior, the subjects’ transaction files were recorded using Camtasia software and then observed and reviewed. The research sample comprised 90 postgraduate students of Shiraz University.

Findings

The results showed that cognitive styles, in interaction with the text-based and content-based search modes of “Google Images”, affected users’ satisfaction. Text-based image retrieval, in which information needs were expressed through vocabulary, was more compatible with the verbal cognitive style and resulted in greater satisfaction. In contrast, in content-based image retrieval, where information needs could be expressed in the form of images, users with the visual cognitive style were more satisfied. Verbal users performed more positively in text-based search and visual users in content-based search.

Originality/value

Considering the research gap regarding how text-based and content-based image retrieval systems perform in terms of satisfaction and search behavior across cognitive styles, the present study can be considered a small effort to advance the field.

Details

Aslib Journal of Information Management, vol. 74 no. 1
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 1 November 2005

Mohamed Hammami, Youssef Chahir and Liming Chen

Along with the ever-growing Web is the proliferation of objectionable content, such as sex, violence, racism, etc. We need efficient tools for classifying and filtering undesirable…

Abstract

Along with the ever-growing Web comes the proliferation of objectionable content, such as sex, violence and racism, and efficient tools are needed for classifying and filtering undesirable web content. In this paper, we investigate this problem through WebGuard, our automatic machine-learning-based pornographic website classification and filtering system. As the Internet becomes increasingly visual and multimedia-rich, as exemplified by pornographic websites, we focus our attention on the use of skin-color-related visual content-based analysis, alongside textual and structural content-based analysis, to improve pornographic website filtering. While most commercial filtering products on the market rely mainly on textual content-based analysis, such as indicative keyword detection or checking against manually collected black lists, the originality of our work resides in adding structural and visual content-based analysis to the classical textual content-based analysis, together with several major data-mining techniques for learning and classification. Tested on a testbed of 400 websites, including 200 adult sites and 200 non-pornographic ones, WebGuard, our Web filtering engine, scored a 96.1% classification accuracy rate when only textual and structural content-based analysis was used, and a 97.4% classification accuracy rate when skin-color-related visual content-based analysis was added. Further experiments on a black list of 12,311 adult websites, manually collected and classified by the French Ministry of Education, showed that WebGuard scored an 87.82% classification accuracy rate when using only textual and structural content-based analysis, and a 95.62% classification accuracy rate when the visual content-based analysis was added. The basic framework of WebGuard can be applied to other website categorization problems which combine, as most websites do today, textual and visual content.
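
To make the skin-color cue concrete, here is a minimal sketch, not the authors' WebGuard code: the YCbCr thresholds, keyword list and function names are illustrative assumptions. It combines a skin-pixel ratio from each page image with simple keyword counts into one feature vector that a downstream classifier could consume (NumPy and Pillow assumed).

```python
# Hypothetical sketch of a skin-color + textual feature vector for
# adult-content classification; thresholds and names are illustrative,
# not taken from the WebGuard paper.
import numpy as np
from PIL import Image

SUSPECT_KEYWORDS = ["adult", "xxx", "porn"]  # toy indicative-keyword list

def skin_pixel_ratio(path: str) -> float:
    """Fraction of pixels falling inside a coarse YCbCr skin-color box."""
    img = Image.open(path).convert("YCbCr")
    y, cb, cr = [np.asarray(c, dtype=np.float32) for c in img.split()]
    skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)  # common heuristic range
    return float(skin.mean())

def page_features(image_paths: list[str], page_text: str) -> np.ndarray:
    """Combine visual (skin ratio) and textual (keyword count) features."""
    ratios = [skin_pixel_ratio(p) for p in image_paths] or [0.0]
    text = page_text.lower()
    keyword_counts = [text.count(k) for k in SUSPECT_KEYWORDS]
    return np.array([np.mean(ratios), np.max(ratios), *keyword_counts])
```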

Details

International Journal of Web Information Systems, vol. 1 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 14 August 2017

Sudeep Thepade, Rik Das and Saurav Ghosh

Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identification of desired information from the huge image…

Abstract

Purpose

Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identification of desired information from huge image databases has been facing increased complexity in the design of an efficient feature extraction process. Conventional approaches to image classification with text-based image annotation have faced assorted limitations due to erroneous interpretation of vocabulary and the huge time consumption involved in manual annotation. Content-based image recognition has emerged as an alternative to combat the aforesaid limitations. However, exploring the rich feature content in an image with a single technique has a lower probability of extracting meaningful signatures than multi-technique feature extraction. Therefore, the purpose of this paper is to explore the possibilities of enhanced content-based image recognition by fusion of classification decisions obtained using diverse feature extraction techniques.

Design/methodology/approach

Three novel techniques of feature extraction have been introduced in this paper and each has been tested with four different classifiers. The four classifiers used for performance testing were the K-nearest neighbor (KNN) classifier, the RIDOR classifier, an artificial neural network classifier and a support vector machine classifier. Thereafter, classification decisions obtained using the KNN classifier for the different feature extraction techniques have been integrated by Z-score normalization and feature scaling to create a fusion-based framework of image recognition. This has been followed by the introduction of a fusion-based retrieval model to validate the retrieval performance with classified queries. Earlier works on content-based image identification have adopted fusion-based approaches. However, to the best of the authors’ knowledge, fusion-based query classification has been addressed for the first time as a precursor of retrieval in this work.
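
As a rough illustration of the decision-fusion step described above, the sketch below Z-score-normalizes per-class scores from several classifiers and sums them; the function names and the simple summation rule are assumptions for illustration, not the authors' exact scheme.

```python
# Illustrative fusion of per-class scores from several classifiers by
# Z-score normalization followed by summation; not the authors' exact scheme.
import numpy as np

def zscore(scores: np.ndarray) -> np.ndarray:
    """Normalize one classifier's class scores to zero mean, unit variance."""
    std = scores.std()
    return (scores - scores.mean()) / std if std > 0 else scores - scores.mean()

def fuse_decisions(score_lists: list[np.ndarray]) -> int:
    """Each array holds one classifier's scores over the same class set;
    the fused prediction is the class with the highest summed Z-score."""
    fused = np.sum([zscore(s) for s in score_lists], axis=0)
    return int(np.argmax(fused))

# Example: three feature-extraction techniques scoring four candidate classes.
print(fuse_decisions([np.array([0.1, 0.7, 0.1, 0.1]),
                      np.array([0.2, 0.5, 0.2, 0.1]),
                      np.array([0.3, 0.3, 0.3, 0.1])]))  # -> 1
```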

Findings

The proposed fusion techniques have successfully outclassed the state-of-the-art techniques in classification and retrieval performance. Four public data sets, namely the Wang data set, the Oliva and Torralba (OT-scene) data set, the Corel data set and the Caltech data set, comprising 22,615 images in total, were used for the evaluation.

Originality/value

To the best of the authors’ knowledge, fusion-based query classification has been addressed for the first time as a precursor of retrieval in this work. The novel idea of exploring rich image features by fusion of multiple feature extraction techniques has also encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 16 August 2019

Neda Tadi Bani and Shervan Fekri-Ershad

Large amounts of data are stored in image format. Image retrieval from bulk databases has become a hot research topic. An alternative method for efficient image retrieval is…

Abstract

Purpose

Large amounts of data are stored in image format, and image retrieval from bulk databases has become a hot research topic. An alternative method for efficient image retrieval is proposed based on a combination of texture and colour information. The main purpose of this paper is to propose a new content-based image retrieval approach using a combination of colour and texture information in the spatial and transform domains jointly.

Design/methodology/approach

Various methods have been proposed for image retrieval that try to extract image content based on texture, colour and shape. The proposed image retrieval method extracts global and local texture and colour information in the spatial and frequency domains. First, the image is filtered by a Gaussian filter, then co-occurrence matrices are constructed in different directions and statistical features are extracted; the purpose of this phase is to extract noise-resistant local textures. A quantised histogram is then produced to extract global colour information in the spatial domain, and Gabor filter banks are used to extract local texture features in the frequency domain. After concatenating the extracted features, retrieval is performed using the normalised Euclidean distance.
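
A minimal sketch of a pipeline of this kind, using scikit-image, follows; the quantisation level, GLCM directions, Gabor frequencies and the particular normalisation are illustrative assumptions rather than the authors' settings.

```python
# Rough sketch of the described pipeline: Gaussian smoothing, grey-level
# co-occurrence statistics, a quantised colour histogram and Gabor responses,
# compared with a normalised Euclidean distance. Parameters are illustrative.
import numpy as np
from skimage import io, color, filters, feature

def extract_features(path: str) -> np.ndarray:
    rgb = io.imread(path)                                  # assumes an RGB image
    grey = color.rgb2gray(rgb)
    smoothed = filters.gaussian(grey, sigma=1.0)           # noise-resistant texture
    quantised = (smoothed * 31).astype(np.uint8)           # 32 grey levels
    glcm = feature.graycomatrix(quantised, distances=[1],
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=32, normed=True)
    texture = [feature.graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    colour_hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(4, 4, 4),
                                    range=((0, 256),) * 3)
    colour_hist = colour_hist.ravel() / colour_hist.sum()  # global colour info
    gabor = [np.mean(np.abs(filters.gabor(grey, frequency=f)[0]))
             for f in (0.1, 0.2, 0.4)]                     # frequency-domain texture
    return np.concatenate([texture, colour_hist, gabor])

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised Euclidean distance between two feature vectors."""
    scale = np.maximum(np.abs(a) + np.abs(b), 1e-12)
    return float(np.linalg.norm((a - b) / scale))
```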

Findings

The performance of the proposed method is evaluated based on precision, recall and run time measures on the Simplicity database. It is compared with many efficient methods in this field. The comparison results showed that the proposed method provides higher precision than many existing methods.

Originality/value

The comparison results showed that the proposed method provides higher precision than many existing methods. Rotation invariance, scale invariance and low sensitivity to noise are among the advantages of the proposed method. The run time of the proposed method is within the usual range for algorithms in this domain, which indicates that the proposed method can be used online.

Details

The Electronic Library, vol. 37 no. 4
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 8 July 2010

Chris Town and Karl Harrison

Content‐based image retrieval (CBIR) technologies offer many advantages over purely text‐based image search. However, one of the drawbacks associated with CBIR is the increased…

Abstract

Purpose

Content‐based image retrieval (CBIR) technologies offer many advantages over purely text‐based image search. However, one of the drawbacks associated with CBIR is the increased computational cost arising from tasks such as image processing, feature extraction, image classification, and object detection and recognition. Consequently, CBIR systems have suffered from a lack of scalability, which has greatly hampered their adoption for real‐world public and commercial image search. At the same time, paradigms for large‐scale heterogeneous distributed computing such as grid computing, cloud computing, and utility‐based computing are gaining traction as a way of providing more scalable and efficient solutions to large‐scale computing tasks.

Design/methodology/approach

This paper presents an approach in which a large distributed processing grid has been used to apply a range of CBIR methods to a substantial number of images. By massively distributing the required computational tasks across thousands of grid nodes, very high throughput has been achieved at relatively low overhead.
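
The paper's system ran on a grid of thousands of nodes; the toy sketch below only illustrates the embarrassingly parallel structure of per-image analysis on a single machine, using a process pool from the Python standard library and a placeholder analyse_image function (both are assumptions for illustration, not the grid middleware described in the paper).

```python
# Toy illustration of the embarrassingly parallel structure of per-image
# CBIR analysis; the paper's grid middleware is replaced by a local
# process pool, and analyse_image is a placeholder.
from concurrent.futures import ProcessPoolExecutor

def analyse_image(path: str) -> dict:
    """Placeholder for segmentation, region classification, face detection, etc."""
    return {"path": path, "labels": []}

def run_batch(paths: list[str], workers: int = 8) -> list[dict]:
    # Each image is an independent job, so throughput scales with worker count.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyse_image, paths, chunksize=64))
```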

Findings

This has allowed about 25 million high-resolution images to be analysed and indexed thus far, using just two servers for storage and job submission. The CBIR system was developed by Imense Ltd and is based on automated analysis and recognition of image content using a semantic ontology. It features a range of image-processing and analysis modules, including image segmentation, region classification, scene analysis, object detection and face recognition.

Originality/value

In the case of content-based image analysis, the primary performance criterion is the overall throughput achieved by the system, in terms of the number of images that can be processed over a given time frame, irrespective of the time taken to process any given image. As such, grid processing has great potential for massively parallel content-based image retrieval and other tasks with similar performance requirements.

Details

Aslib Proceedings, vol. 62 no. 4/5
Type: Research Article
ISSN: 0001-253X

Keywords

Article
Publication date: 13 May 2021

Chanattra Ammatmanee and Lu Gan

Due to the worldwide growth of digital image sharing and the maturity of the tourism industry, the vast and growing collections of digital images have become a challenge for those…

Abstract

Purpose

Due to the worldwide growth of digital image sharing and the maturity of the tourism industry, the vast and growing collections of digital images have become a challenge for those who use and/or manage these image data across tourism settings. To accomplish the image indexing task at lower labour cost and improve the image retrieval task with fewer human errors, the content-based image retrieval (CBIR) technique has been investigated for the tourism domain in particular. This paper aims to review the relevant literature in the field to understand these previous works and identify research gaps for future directions.

Design/methodology/approach

A systematic and comprehensive review of CBIR studies in tourism from the year 2010 to 2019, focussing on journal articles and conference proceedings in reputable online databases, is conducted by taking a comparative approach to critically analyse and address the trends of each fundamental element in these research experiments.

Findings

Based on the review of the literature, the trend of CBIR studies in tourism is to improve image representation and retrieval by advancing existing feature extraction techniques, contributing novel techniques to the feature extraction process through fine-tuning fusion features, and improving the image query of CBIR systems. Co-authorship, the tourist attraction sector and fusion image features have been the main focus. Nonetheless, studies in other tourism sectors and available image databases could be further explored.

Originality/value

The fact that no academic review of CBIR studies in tourism exists makes this paper a novel contribution.

Details

The Electronic Library, vol. 39 no. 2
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 15 March 2018

Fatemeh Alyari and Nima Jafari Navimipour

This paper aims to identify, evaluate and integrate the findings of all relevant and high-quality individual studies addressing one or more research questions about recommender…

Abstract

Purpose

This paper aims to identify, evaluate and integrate the findings of all relevant, high-quality individual studies addressing one or more research questions about recommender systems, and to perform a comprehensive study of empirical research on recommender systems, which have been divided into five main categories. To achieve this aim, the authors use a systematic literature review (SLR) as a powerful method to collect and critically analyze the research papers. The authors also discuss the selected recommender systems and their main techniques, as well as their benefits and drawbacks in general.

Design/methodology/approach

In this paper, the SLR method is utilized with the aim of identifying, evaluating and integrating the findings of all relevant, high-quality individual studies addressing one or more research questions about recommender systems, and of performing a comprehensive study of empirical research on recommender systems divided into five main categories. The authors also discuss recommender systems and their techniques in general, without restriction to a specific domain.

Findings

The major developments in the categories of recommender systems are reviewed, and new challenges are outlined. Furthermore, insights on the identification of open issues and guidelines for future research are provided. This paper also presents a systematic analysis of the recommender system literature since 2005. The authors identified 536 papers, which were reduced to 51 primary studies through the paper selection process.

Originality/value

This survey will directly support academics and practising professionals in their understanding of developments in recommender systems and their techniques.

Details

Kybernetes, vol. 47 no. 5
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 1 June 2012

Amir H. Meghdadi and James F. Peters

The purpose of this paper is to demonstrate the effectiveness and advantages of using perceptual tolerance neighbourhoods in tolerance space‐based image similarity measures and…

Abstract

Purpose

The purpose of this paper is to demonstrate the effectiveness and advantages of using perceptual tolerance neighbourhoods in tolerance space‐based image similarity measures and their application in content‐based image classification and retrieval.

Design/methodology/approach

The proposed method in this paper is based on a set‐theoretic approach, where an image is viewed as a set of local visual elements. The method also includes a tolerance relation that detects the similarity between pairs of elements if the difference between the corresponding feature vectors is less than a threshold ε ∈ (0, 1).
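
A minimal sketch of such a tolerance relation follows, assuming each visual element is represented by a normalised feature vector; the neighbourhood construction is a simplified reading of the set-theoretic formulation, not the authors' implementation.

```python
# Simplified sketch of a tolerance relation and a tolerance neighbourhood:
# two visual elements are tolerably similar when their feature vectors
# differ by less than a threshold eps in (0, 1). Not the authors' code.
import numpy as np

def tolerant(x: np.ndarray, y: np.ndarray, eps: float = 0.1) -> bool:
    """Tolerance relation: distance between normalised feature vectors < eps."""
    return float(np.linalg.norm(x - y)) < eps

def neighbourhood(x: np.ndarray, elements: list[np.ndarray], eps: float = 0.1):
    """All elements (from either image) tolerably similar to x."""
    return [y for y in elements if tolerant(x, y, eps)]
```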

Findings

It is shown that tolerance space‐based methods can be successfully used in a complete content‐based image retrieval (CBIR) system. It is also shown that perceptual tolerance neighbourhoods can replace tolerance classes in CBIR, resulting in more accuracy and less computation.

Originality/value

The main contribution of this paper is the introduction of perceptual tolerance neighbourhoods instead of tolerance classes in a new form of the Henry‐Peters tolerance‐based nearness measure (tNM) and a new neighbourhood‐based tolerance‐covering nearness measure (tcNM). Moreover, this paper presents a side-by-side comparison of the tolerance space‐based methods with other published methods on a test dataset of images.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 5 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin

In the past few decades, the content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term of…

Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches for dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; thus, obtaining an effective expression of image features is the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not been systematic. This paper aims to introduce the basic image low-level feature expression techniques for color, texture and shape features that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principles of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used low-level image expression algorithms are implemented, and their benefits and drawbacks are summarized. Third, the effectiveness of global and local features in image retrieval, including some classical models and illustrations drawn from part of the authors' experiments, is analyzed. Fourth, sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
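
As one concrete example of the similarity measurement methods such a survey covers, the sketch below computes histogram intersection between two normalized color histograms; it is illustrative only and not tied to the paper's experiments.

```python
# One of the classical similarity measures for colour histograms:
# histogram intersection between two normalized histograms. Illustrative only.
import numpy as np

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Returns a value in [0, 1]; 1 means identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

h1 = np.array([0.5, 0.3, 0.2])
h2 = np.array([0.4, 0.4, 0.2])
print(histogram_intersection(h1, h2))  # 0.9
```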

Findings

The core of this survey is to review the state of low-level image expression methods and study the pros and cons of each method, the occasions to which they are applicable and certain implementation measures. This review notes that single-feature descriptions of image peculiarities may lead to unsatisfactory image retrieval capabilities, as they exhibit significant singularity and face considerable limitations and challenges in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for low-level image feature expression but also supplies a pertinent reference for those engaged in research regarding image feature extraction.

Article
Publication date: 31 July 2007

Peter G.B. Enser, Christine J. Sandom, Jonathon S. Hare and Paul H. Lewis

To provide a better‐informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval…

Abstract

Purpose

To provide a better‐informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval techniques.

Design/methodology/approach

Within an ongoing project, a broad spectrum of operational image retrieval activity has been surveyed, and, from a number of collaborating institutions, a test collection assembled which comprises user requests, the images selected in response to those requests, and their associated metadata. This has provided the evidence base upon which to make informed observations on the efficacy of cutting‐edge automatic annotation techniques which seek to integrate the text‐based and content‐based image retrieval paradigms.

Findings

Evidence from the real‐world practice of image retrieval highlights the existence of a generic‐specific continuum of object identification, and the incidence of temporal, spatial, significance and abstract concept facets, manifest in textual indexing and real‐query scenarios but often having no directly visible presence in an image. These factors combine to limit the functionality of current semantic image retrieval techniques, which interpret only visible features at the generic extremity of the generic‐specific continuum.

Research limitations/implications

The project is concerned with the traditional image retrieval environment in which retrieval transactions are conducted on still images which form part of managed collections. The possibilities offered by ontological support for adding functionality to automatic annotation techniques are considered.

Originality/value

The paper offers fresh insights into the challenge of migrating content‐based image retrieval from the laboratory to the operational environment, informed by newly‐assembled, comprehensive, live data.

Details

Journal of Documentation, vol. 63 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

1 – 10 of 531