Search results

1 – 10 of over 1000
Article
Publication date: 16 January 2017

Shervan Fekri-Ershad and Farshad Tajeripour

Abstract

Purpose

The purpose of this paper is to propose a color-texture classification approach which uses color sensor information and texture features jointly. High accuracy, low noise sensitivity and low computational complexity are specified aims for this proposed approach.

Design/methodology/approach

Local binary patterns (LBP) is one of the most efficient texture analysis operators. The proposed approach includes two steps. First, a noise-resistant version of color LBP is proposed to decrease its sensitivity to noise; this step combines color sensor information using an AND operation. Second, a significant-point selection algorithm is proposed to select significant LBPs. This phase decreases the final computational complexity while increasing the accuracy rate.
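As a point of reference, the basic grayscale LBP operator that such approaches extend can be sketched as follows (a minimal NumPy version; the paper's noise-resistant color variant and AND-based channel combination are not reproduced here):

```python
import numpy as np

def lbp_codes(img):
    """Compute basic 3x3 LBP codes for the interior pixels of a
    grayscale image: each neighbor >= center contributes one bit."""
    c = img[1:-1, 1:-1]
    # clockwise neighbor offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

img = np.array([[9, 9, 9],
                [0, 5, 9],
                [0, 0, 0]], dtype=np.int32)
print(lbp_codes(img))  # prints [[15]]: the four top/right neighbors set bits 0-3
```

A histogram of these codes over an image region is the usual LBP texture feature; the color and noise-resistant extensions operate on top of this base operator.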

Findings

The proposed approach is evaluated using the Vistex, Outex and KTH-TIPS-2a data sets and compared with some state-of-the-art methods. It is experimentally demonstrated that the proposed approach achieves the highest accuracy. Two further experiments show the low noise sensitivity and low computational complexity of the proposed approach in comparison with previous versions of LBP. Rotation invariance, multi-resolution analysis and general usability are other advantages of the proposed approach.

Originality/value

In the present paper, a new version of LBP, called hybrid color local binary patterns (HCLBP), is originally proposed. HCLBP can be used in many image processing applications to extract color/texture features jointly. Also, a significant-point selection algorithm is proposed for the first time to select key points of images.

Article
Publication date: 26 June 2009

Yih‐Chih Chiou, Chern‐Sheng Lin and Guan‐Zi Chen

Abstract

Purpose

The purpose of this paper is to present an automatic inspection method for the color and texture classification of paper and cloth objects.

Design/methodology/approach

In this system, the color image is transformed from the RGB model to another suitable color model, with one of the components chosen as the gray-level image for extracting textures. The gray-level image is decomposed into four child images using the wavelet transform. The two child images capable of detecting variations along columns and rows are used to generate 0° and 90° co-occurrence matrices, respectively. Distinguishable texture features are derived from the two co-occurrence matrices. Finally, the test image is classified using neural networks. Nine color papers and eight color cloths are used to test the developed classification method.
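The co-occurrence-matrix step can be sketched as follows (plain NumPy, with the standard Haralick contrast statistic as one example feature; the wavelet decomposition and neural-network stages are not reproduced):

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Gray-level co-occurrence matrix for offset (dy, dx):
    count pixel pairs (img[y, x], img[y + dy, x + dx])."""
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to joint probabilities

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 1]])
p0 = glcm(img, 0, 1, levels=2)   # 0 degrees: horizontal neighbor
p90 = glcm(img, 1, 0, levels=2)  # 90 degrees: vertical neighbor
print(contrast(p0), contrast(p90))
```

Other Haralick statistics (energy, entropy, homogeneity) are computed from the same matrices and concatenated into the feature vector fed to the classifier.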

Findings

The results show that a recognition rate higher than 97.86 percent can be achieved when both color and texture features are used as inputs to the networks.

Originality/value

The paper presents a new approach for testing materials. This multipurpose measurement application, built with unsophisticated and economical equipment, can be applied to the online inspection of paper and cloth manufacturing.

Details

Sensor Review, vol. 29 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 4 April 2016

Babar Khan, Fang Han, Zhijie Wang and Rana J. Masood

Abstract

Purpose

This paper aims to propose a biologically inspired processing architecture to recognize and classify fabrics with respect to the weave pattern (fabric texture) and yarn color (fabric color).

Design/methodology/approach

Using a fabric weave pattern image identification system, this study analyzed fabric images based on the Hierarchical-MAX (HMAX) model of computer vision to extract feature values related to fabric texture. A Red Green Blue (RGB) color descriptor based on opponent color channels, simulating the single-opponent and double-opponent neuronal functions of the brain, is incorporated into the texture descriptor to extract yarn color feature values. Finally, a support vector machine classifier is used to train and test the algorithm.
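The opponent-color idea can be illustrated with the classical linear opponent transform (a generic sketch; the paper's single-opponent and double-opponent receptive-field model is more elaborate than this pixel-wise mapping):

```python
import numpy as np

def opponent_channels(rgb):
    """Map an RGB image (H, W, 3, floats in [0, 1]) onto the three
    classical opponent axes: red-green, yellow-blue and luminance."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)           # red-green opponency
    o2 = (r + g - 2 * b) / np.sqrt(6)   # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3)       # luminance
    return np.stack([o1, o2, o3], axis=-1)

px = np.array([[[1.0, 0.0, 0.0]]])  # a single pure-red pixel
print(opponent_channels(px)[0, 0])
```

Texture descriptors computed per opponent channel, rather than per raw RGB channel, decouple chromatic contrast from intensity, which is the motivation behind the double-opponent design.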

Findings

This two-stage processing architecture can be used to construct a computer vision system that recognizes fabric texture with increased reliability and accuracy. Using this method, stability and fault tolerance (invariance) were improved.

Originality/value

Traditionally, fabric texture recognition is performed manually by visual inspection. Recent studies have proposed automatic fabric texture identification based on computer vision. In the identification process, fabric weave patterns are recognized by the warp and weft floats. However, owing to the optical environment and the appearance differences of fabric and yarn, the stability and fault tolerance (invariance) of computer vision methods are yet to be improved. Using our method, stability and fault tolerance (invariance) were improved.

Details

Assembly Automation, vol. 36 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 14 August 2017

Padmavati Shrivastava, K.K. Bhoyar and A.S. Zadgaonkar

Abstract

Purpose

The purpose of this paper is to build a classification system which mimics the perceptual ability of human vision in accurately gathering, at a quick glance, knowledge about the structure, content and surrounding environment of a real-world natural scene. This paper proposes a set of novel features to determine the gist of a given scene based on dominant color, dominant direction, openness and roughness features.

Design/methodology/approach

The classification system is designed at two different levels. At the first level, a set of low level features are extracted for each semantic feature. At the second level the extracted features are subjected to the process of feature evaluation, based on inter-class and intra-class distances. The most discriminating features are retained and used for training the support vector machine (SVM) classifier for two different data sets.
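The feature-evaluation step based on inter-class and intra-class distances can be sketched with a Fisher-style discrimination ratio (a hypothetical scoring function illustrating the principle, not the paper's exact criterion):

```python
import numpy as np

def discrimination_ratio(feature, labels):
    """Score one feature by inter-class over intra-class spread:
    variance of class means divided by mean within-class variance."""
    classes = np.unique(labels)
    means = np.array([feature[labels == c].mean() for c in classes])
    within = np.mean([feature[labels == c].var() for c in classes])
    between = means.var()
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
# a feature whose class means are well separated...
good = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
# ...versus one that carries no class information
bad = rng.normal(0, 1, 100)
print(discrimination_ratio(good, labels) > discrimination_ratio(bad, labels))
```

Features scoring above a threshold under such a ratio would be the "most discriminating" ones retained for training the SVM.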

Findings

Accuracy of the proposed system has been evaluated on two data sets: the well-known Oliva-Torralba data set and a customized data set comprising high-resolution images of natural landscapes. Experimentation on these two data sets with the proposed novel feature set and SVM classifier provided 92.68 percent average classification accuracy using a ten-fold cross-validation approach. The set of proposed features efficiently represents visual information and is therefore capable of narrowing the semantic gap between low-level image representation and high-level human perception.

Originality/value

The method presented in this paper represents a new approach for extracting low-level features of reduced dimensionality that is able to model human perception for the task of scene classification. The methods of mapping primitive features to high-level features are intuitive to the user and are capable of reducing the semantic gap. The proposed feature evaluation technique is general and can be applied across any domain.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 June 2009

Francisco J. Veredas, Héctor Mesa and Laura Morente

Abstract

Purpose

Pressure ulcer is a clinical pathology of localized damage to the skin and underlying tissue caused by pressure, shear and friction. Diagnosis, treatment and care of pressure ulcers involve high costs for sanitary systems. Accurate wound evaluation is a critical task for optimizing the efficacy of treatments and health care. Clinicians evaluate pressure ulcers by visual inspection of the damaged tissues, which is an imprecise manner of assessing the wound state. Current computer vision approaches do not offer a global solution to this particular problem. The purpose of this paper is to use a hybrid learning approach based on neural and Bayesian networks to design a computational system for automatic tissue identification in wound images.

Design/methodology/approach

A mean shift procedure and a region‐growing strategy are implemented for effective region segmentation. Color and texture features are extracted from these segmented regions. A set of k multi‐layer perceptrons is trained with inputs consisting of color and texture patterns, and outputs consisting of categorical tissue classes determined by clinical experts. This training procedure is driven by a k‐fold cross‐validation method. Finally, a Bayesian committee machine is formed by training a Bayesian network to combine the classifications of the k neural networks (NNs).
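The committee idea, combining the k networks' outputs into one decision, can be sketched as a weighted average of member class probabilities (a simple hypothetical stand-in for the trained Bayesian combiner; the weights and probabilities below are illustrative):

```python
import numpy as np

def committee_predict(member_probs, member_weights):
    """Combine per-member class-probability vectors into one decision
    by a weighted average of the members' outputs."""
    p = np.asarray(member_probs, dtype=float)
    w = np.asarray(member_weights, dtype=float)
    combined = (w[:, None] * p).sum(axis=0) / w.sum()
    return combined.argmax(), combined

# three members score the same region; the combiner resolves disagreement
probs = [[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]
weights = [0.9, 0.8, 0.7]  # e.g. each member's cross-validation accuracy
label, posterior = committee_predict(probs, weights)
print(label)  # prints 1
```

A learned Bayesian network can outperform such a fixed averaging rule because it models which members to trust for which input patterns, which is the motivation for the paper's combiner.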

Findings

The authors' outcomes show high efficiency rates from a two-stage cascade approach to tissue identification. Given a non-homogeneous distribution of pattern classes, this hybrid approach has shown the additional advantage of increasing classification efficiency when classifying patterns with relatively low frequencies.

Practical implications

The methodology and results presented in this paper could have important implications for the field of clinical pressure ulcer evaluation and diagnosis.

Originality/value

The novelty associated with this work is the use of a hybrid approach consisting of NNs and Bayesian classifiers which are combined to increase the performance of a pattern recognition task applied to the real clinical problem of tissue detection under non‐controlled illumination conditions.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 16 August 2019

Neda Tadi Bani and Shervan Fekri-Ershad

Abstract

Purpose

Large amounts of data are stored in image format. Image retrieval from bulk databases has become a hot research topic. An alternative method for efficient image retrieval is proposed based on a combination of texture and colour information. The main purpose of this paper is to propose a new content-based image retrieval approach using colour and texture information in the spatial and transform domains jointly.

Design/methodology/approach

Various methods have been proposed for image retrieval that try to extract image contents based on texture, colour and shape. The proposed image retrieval method extracts global and local texture and colour information in the spatial and frequency domains. The image is filtered by a Gaussian filter, then co-occurrence matrices are built in different directions and statistical features are extracted; the purpose of this phase is to extract noise-resistant local textures. A quantised histogram is then produced to extract global colour information in the spatial domain. Also, Gabor filter banks are used to extract local texture features in the frequency domain. After concatenating the extracted features, retrieval is performed using the normalised Euclidean criterion.
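The Gabor-filter-bank step can be sketched as follows (a minimal NumPy version producing orientation-selective energy features; the filter size, wavelength and sigma are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    """Real (even) Gabor kernel: a cosine grating at orientation
    theta, windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def gabor_energy_features(img, thetas, wavelength=4.0, sigma=2.0, size=9):
    """Mean squared filter response per orientation: one texture
    feature per filter in the bank."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(size, theta, wavelength, sigma)
        # 'valid' correlation via sliding windows (avoids a SciPy dependency)
        windows = np.lib.stride_tricks.sliding_window_view(img, k.shape)
        resp = (windows * k).sum(axis=(-2, -1))
        feats.append(np.mean(resp**2))
    return np.array(feats)

# vertical stripes respond most strongly to the 0-degree filter
img = np.cos(2 * np.pi * np.arange(32) / 4.0)[None, :] * np.ones((32, 1))
f = gabor_energy_features(img, thetas=[0, np.pi / 2])
print(f[0] > f[1])  # prints True
```

Concatenating such energies across several orientations and scales yields the frequency-domain texture portion of the combined feature vector.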

Findings

The performance of the proposed method is evaluated based on the precision, recall and run time measures on the Simplicity database. It is compared with many efficient methods of this field. The comparison results showed that the proposed method provides higher precision than many existing methods.

Originality/value

Rotation invariance, scale invariance and low sensitivity to noise are among the advantages of the proposed method, and its precision is higher than that of many existing methods. The run time of the proposed method is within the usual time frame of algorithms in this domain, which indicates that the proposed method can be used online.

Details

The Electronic Library, vol. 37 no. 4
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 14 March 2016

Rui Zhang and Binjie Xin

Abstract

Purpose

The purpose of this paper is to introduce image processing technology for fabric analysis, which has the advantages of objectivity, digitization and quick response.

Design/methodology/approach

This paper briefly describes the key process and module of some typical automatic recognition systems for fabric analysis presented by previous researchers; the related methods and algorithms used for the texture and pattern identification are also introduced.

Findings

Compared with the traditional subjective method, the image processing approach has been proved to be rapid, accurate and reliable for quality control.

Originality/value

The future trends and limitations in the field of weave pattern recognition for woven fabrics have been summarized at the end of this paper.

Details

Research Journal of Textile and Apparel, vol. 20 no. 1
Type: Research Article
ISSN: 1560-6074

Article
Publication date: 17 July 2020

Pulla Rao Chennamsetty, Guruvareddy Avula and Ramarao Chunduri buchhi

Abstract

Purpose

The purpose of the research work is to detect camouflaged objects in autonomous systems of military applications and civilian applications such as detecting insects in paddy fields, identifying duplicate products in different texture environments.

Design/methodology/approach

Camouflaged object detection is performed by smoothing texture with nonlinear models and characterizing it with statistical methods to detect the objects.

Findings

A few challenges remain in existing camouflaged object detection owing to the complexities involved in the detection process. This work proposes a constructive approach with statistical texture characterization for camouflage detection. The proposed technique is found to be better than existing methods when its performance is assessed using precision and recall.

Research limitations/implications

Even though a lot of research work has been carried out, a few challenges remain for autonomous systems in camouflage detection owing to the complexities involved in the detection process, such as texture modeling, dynamic background problems and the environmental conditions faced by autonomous systems.

Practical implications

Camouflage detection finds potential applications in security systems, surveillance, military and autonomous systems. The proposed work is implemented in different environments for camouflage detection.

Social implications

Challenges arise from the image acquisition environment, including the time of day and settings such as desert, forest and paddy fields.

Originality/value

The proposed method detects camouflaged objects in autonomous systems where it is applied for images of different kinds. It is found to be effective on images recorded in battlefield and challenging environments.

Details

International Journal of Intelligent Unmanned Systems, vol. 9 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 27 November 2009

A. Vadivel, Shamik Sural and A.K. Majumdar

Abstract

Purpose

The main obstacle in realising semantic‐based image retrieval from the web is that it is difficult to capture semantic description of an image in low‐level features. Text‐based keywords can be generated from web documents to capture semantic information for narrowing down the search space. The combination of keywords and various low‐level features effectively increases the retrieval precision. The purpose of this paper is to propose a dynamic approach for integrating keywords and low‐level features to take advantage of their complementary strengths.

Design/methodology/approach

Image semantics are described using both low‐level features and keywords. The keywords are constructed from the text located in the vicinity of images embedded in HTML documents. Various low‐level features such as colour histograms, texture and composite colour‐texture features are extracted to supplement the keywords.
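The colour-histogram component of such low-level features can be sketched as a joint quantised RGB histogram (a generic illustration of the technique, not the paper's exact descriptor):

```python
import numpy as np

def color_histogram(rgb, bins=4):
    """Joint RGB histogram: quantise each channel of an image with
    float values in [0, 1] into `bins` levels, count pixels per
    (r, g, b) cell, and normalise the counts to sum to 1."""
    q = np.clip((rgb * bins).astype(int), 0, bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins**3).astype(float)
    return hist / hist.sum()

img = np.zeros((8, 8, 3))
img[..., 0] = 1.0                 # an all-red test image
h = color_histogram(img)
print(h.argmax(), h.max())        # prints 48 1.0
```

Distances between such normalised histograms (for example, the L1 or chi-squared distance) then supply the colour term that is combined with the keyword-based score at query time.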

Findings

The retrieval performance is better than that of various recently proposed techniques. The experimental results show that the integrated approach has better retrieval performance than both the text‐based and the content‐based techniques.

Research limitations/implications

The features of images used for capturing the semantics may not always describe the content.

Practical implications

The indexing mechanism for dynamically growing features is challenging when practically implementing the system.

Originality/value

A survey of image retrieval systems for searching images available on the internet found that no internet search engine can handle both low‐level features and keywords as queries for retrieving images from the web, so this system is the first of its kind.

Details

Online Information Review, vol. 33 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 21 November 2008

Chun‐Nan Lin, Chih‐Fong Tsai and Jinsheng Roan

Abstract

Purpose

Because of the popularity of digital cameras, the number of personal photographs is increasing rapidly. In general, people manage their photos by date, subject, participants, etc. for future browsing and searching. However, it is difficult and/or takes time to retrieve desired photos from a large number of photographs based on the general personal photo management strategy. In this paper the authors aim to propose a systematic solution to effectively organising and browsing personal photos.

Design/methodology/approach

In their system the authors apply the concept of content‐based image retrieval (CBIR) to automatically extract visual image features of personal photos. Then three well‐known clustering techniques – k‐means, self‐organising maps and fuzzy c‐means – are used to group personal photos. Finally, the clustering results are evaluated by human subjects in terms of retrieval effectiveness and efficiency.
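Of the three clustering techniques, k-means is the simplest to sketch (a minimal NumPy version with deterministic initialisation; the feature vectors and parameters below are illustrative, and the paper presumably used standard implementations):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    # simple deterministic init: k evenly spaced points from the data
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated blobs standing in for photo feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels, _ = kmeans(X, k=2)
print(labels)
```

Each resulting cluster would correspond to one group of visually similar photos presented to the user for browsing; self-organising maps and fuzzy c-means replace the hard nearest-centroid assignment with a topological map and soft memberships, respectively.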

Findings

Experimental results based on a dataset of 1,000 personal photos show that the k‐means clustering method outperforms self‐organising maps and fuzzy c‐means: 12 subjects out of 30 preferred the clustering results of k‐means. In particular, most subjects agreed that larger numbers of clusters (e.g. 15 to 20) enabled more effective browsing of personal photos. For the efficiency evaluation, the clustering results using k‐means allowed subjects to search for relevant images in the least amount of time.

Originality/value

CBIR is applied in many areas, but very few related works focus on personal photo browsing and retrieval. This paper examines the applicability of using CBIR and clustering techniques for browsing personal photos. In addition, the evaluation based on the effectiveness and efficiency strategies ensures the reliability of our findings.

Details

Online Information Review, vol. 32 no. 6
Type: Research Article
ISSN: 1468-4527
