Search results

1 – 10 of 22
Article
Publication date: 22 November 2011

Bailing Zhang

Abstract

Purpose

Content-based image retrieval (CBIR) is an important research area concerned with automatically retrieving images of user interest from a large database. Owing to its many potential applications, facial image retrieval has received much attention in recent years. As in face recognition, finding an appropriate image representation is a vital step towards a successful facial image retrieval system. Recently, many efficient image feature descriptors have been proposed, and some have been applied to face recognition, so comparative studies of different feature descriptors in facial image retrieval are valuable. More importantly, how to fuse multiple features is a significant question that can have a substantial impact on the overall performance of the CBIR system. The purpose of this paper is to propose an efficient face image retrieval strategy.

Design/methodology/approach

In this paper, three feature description methods are investigated for facial image retrieval: local binary patterns, the curvelet transform and the pyramid histogram of oriented gradients. The large dimensionality of the extracted features is addressed by a manifold learning method called spectral regression. A decision-level fusion scheme, fuzzy aggregation, is then applied to combine the distance metrics from the respective dimension-reduced feature spaces.
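As an illustration of the first descriptor mentioned, here is a minimal pure-Python sketch of a basic 3x3 local binary pattern (LBP) histogram; the paper's exact LBP settings (radius, neighbour count, uniform patterns) are not given in the abstract, so the simplest variant is assumed:

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code for pixel (r, c): each neighbour whose grey
    level is >= the centre contributes one bit, clockwise from top-left."""
    centre = img[r][c]
    neighbours = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
                  img[r][c + 1], img[r + 1][c + 1], img[r + 1][c],
                  img[r + 1][c - 1], img[r][c - 1]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels --
    the per-image texture descriptor."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Hypothetical 3x3 grey-level image: one interior pixel (value 50).
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
h = lbp_histogram(img)
```

In a retrieval pipeline, this histogram would be the raw feature vector fed to the dimension-reduction stage.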

Findings

Empirical evaluations on several face databases illustrate that dimension-reduced features are more efficient for facial retrieval and that the fuzzy aggregation fusion scheme offers much enhanced performance. A 98 per cent rank-1 retrieval accuracy was obtained for the AR faces and 91 per cent for the FERET faces, showing that the method is robust against variations such as pose and occlusion.

Originality/value

The proposed method for facial image retrieval has promising potential for designing real-world systems for many applications, particularly in forensics and biometrics.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 4 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 24 August 2021

Rajakumar Krishnan, Arunkumar Thangavelu, P. Prabhavathy, Devulapalli Sudheer, Deepak Putrevu and Arundhati Misra

Abstract

Purpose

Extracting suitable features to represent an image based on its content is a tedious task, especially in remote sensing, where high-resolution images contain a wide variety of objects on the Earth's surface. The Mahalanobis distance metric is used to measure the similarity between query and database images; the image with the lowest distance is ranked at the top as the most relevant to the query.
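The distance-based ranking described above can be sketched as follows. For simplicity this assumes a diagonal-covariance approximation of the Mahalanobis distance (the full metric uses the inverse covariance matrix of the features); the feature vectors and variances are hypothetical:

```python
def mahalanobis_diag(x, y, var):
    """Mahalanobis distance under a diagonal-covariance approximation:
    sqrt(sum_i (x_i - y_i)^2 / var_i)."""
    return sum((a - b) ** 2 / v for a, b, v in zip(x, y, var)) ** 0.5

def rank_database(query, database, var):
    """Return database indices sorted by ascending distance to the query;
    the lowest-distance image is ranked first as the most relevant."""
    dists = [(mahalanobis_diag(query, feat, var), i)
             for i, feat in enumerate(database)]
    return [i for _, i in sorted(dists)]

# Hypothetical database of three 2-D feature vectors.
db = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
var = [1.0, 4.0]                    # per-dimension feature variances
order = rank_database([0.9, 1.1], db, var)
```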

Design/methodology/approach

This paper aims to develop an automatic feature extraction system for remote sensing image data. Haralick texture features based on the Contourlet transform are fused with statistical features extracted from QuadTree (QT) decomposition to form the feature set representing the input data. The extracted features retrieve similar images from large image datasets using an image-based query through a web-based user interface.

Findings

The retrieval system's performance has been analyzed using precision, recall and F1 score. The proposed feature vector gives better performance, with 0.69 precision for the top 50 retrieved results, than other existing multiscale-based feature extraction methods.

Originality/value

The main contribution of this paper is a texture feature vector in a multiscale domain that combines Haralick texture properties in the Contourlet domain with statistical features from QT decomposition. Only 207 features are required to represent an image, a much lower dimensionality than other texture methods, and the performance is superior to other state-of-the-art methods.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 12 July 2011

M.A. Latif, J.C. Chedjou and K. Kyamakya

Abstract

Purpose

Image contrast enhancement is one of the most important low-level image pre-processing tasks required by vision-based advanced driver assistance systems (ADAS). This paper seeks to address this issue while keeping real-time constraints in focus, which is especially vital for ADAS.

Design/methodology/approach

The approach is based on a paradigm of nonlinear coupled oscillators in image processing. Each layer of a colored image is treated as an independent grayscale image and is processed separately by the paradigm. The pixels with the lowest and the highest gray levels are chosen and their difference is enhanced so that the gray levels in an image span the entire range, i.e. [0, 1]. This operation enhances the contrast in each layer, and the enhanced layers are finally combined to produce a color image of much improved quality.
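The per-layer span normalization described above amounts to a min-max contrast stretch to [0, 1]. This sketch shows only that underlying operation, not the paper's coupled-oscillator dynamics that actually perform the enhancement:

```python
def stretch_channel(channel):
    """Min-max contrast stretch: map the lowest grey level to 0 and the
    highest to 1 so the channel spans the full [0, 1] range."""
    lo, hi = min(channel), max(channel)
    if hi == lo:                       # flat channel: nothing to stretch
        return [0.0 for _ in channel]
    return [(p - lo) / (hi - lo) for p in channel]

def stretch_rgb(r, g, b):
    """Treat each colour layer as an independent greyscale image, as in the
    paper, then recombine the enhanced layers."""
    return stretch_channel(r), stretch_channel(g), stretch_channel(b)

# Hypothetical tiny channels.
r, g, b = stretch_rgb([0.2, 0.4, 0.6], [0.1, 0.1, 0.1], [0.3, 0.5, 0.9])
```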

Findings

The approach performs robust contrast enhancement compared to other approaches in the relevant literature. Other approaches generally need a new parameter setting for every new image to perform contrast enhancement, which makes them unsuitable for real-time applications such as ADAS. In contrast, the proposed approach performs contrast enhancement for different images under the same parameter setting, giving rise to robustness in the system. This unique parameter setting is derived through a bifurcation analysis explained in the paper.

Originality/value

The proposed approach is novel in several aspects. First, the proposed paradigm comprises coupled differential equations and therefore offers a continuous model, as opposed to other approaches in the relevant literature. This continuity is an inherent feature of the proposed approach and could be useful in realizing real-time image processing with an analog circuit implementation. Furthermore, a novel framework combining the coupled oscillatory paradigm with a cellular neural network is also possible, to achieve ultra-fast image contrast enhancement.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 30 no. 4
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 11 June 2018

Deepika Kishor Nagthane and Archana M. Rajurkar

Abstract

Purpose

One of the main reasons for the increase in mortality rate in women is breast cancer, and accurate early detection seems to be the only reliable route to diagnosis. In the field of breast cancer research, many new computer-aided diagnosis systems have been developed to reduce diagnostic false positives caused by the subtle appearance of breast cancer tissues. The purpose of this study is to develop a diagnosis technique for breast cancer using the LCFS and TreeHiCARe classifier model.

Design/methodology/approach

The proposed diagnosis methodology starts with pre-processing, after which feature extraction is performed: image features that preserve the characteristics of the breast tissues are extracted. Feature selection is then performed by the proposed least-mean-square (LMS)-Cuckoo search feature selection (LCFS) algorithm, which selects from the vast range of extracted features using the optimal cut point it provides. Next, an image transaction database table is built from the keywords of the training images and the feature vectors. Each transaction resembles an itemset, and association rules with high conviction ratio and lift are generated from the transaction representation using the a priori algorithm. After association rule generation, the proposed TreeHiCARe classifier model completes the diagnosis methodology: a new feature index selects a central feature for the decision tree, around which images are classified as normal or abnormal.
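For reference, lift and conviction, the two rule-quality measures mentioned above, can be computed from itemset supports as in this sketch; the transactions and item names are hypothetical:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Lift and conviction of the rule antecedent -> consequent:
    lift       = supp(A u C) / (supp(A) * supp(C))
    conviction = (1 - supp(C)) / (1 - confidence)."""
    n = len(transactions)
    a, c = set(antecedent), set(consequent)
    supp_a = sum(a <= t for t in transactions) / n
    supp_c = sum(c <= t for t in transactions) / n
    supp_ac = sum((a | c) <= t for t in transactions) / n
    confidence = supp_ac / supp_a
    lift = supp_ac / (supp_a * supp_c)
    conviction = (float('inf') if confidence == 1
                  else (1 - supp_c) / (1 - confidence))
    return lift, conviction

# Hypothetical transactions: image keywords plus class labels.
tx = [{'f1', 'abnormal'}, {'f1', 'abnormal'},
      {'f2', 'normal'}, {'f1', 'normal'}]
lift, conv = rule_metrics(tx, ['f1'], ['abnormal'])
```

Rules with high lift and conviction are the ones the a priori stage would keep.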

Findings

The performance of the proposed method is validated against existing works using accuracy, sensitivity and specificity measures. Experiments on the Mammographic Image Analysis Society database classified normal and abnormal cancerous mammogram images with an accuracy of 0.8289, a sensitivity of 0.9333 and a specificity of 0.7273.

Originality/value

This paper proposes a new approach to breast cancer diagnosis using mammogram images. The proposed method uses two new algorithms: LCFS, which selects optimal feature split points, and TreeHiCARe, a decision tree classifier model based on association rules.

Details

Sensor Review, vol. 39 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 19 June 2017

Qi Wang, Pengcheng Zhang, Jianming Wang, Qingliang Chen, Zhijie Lian, Xiuyan Li, Yukuan Sun, Xiaojie Duan, Ziqiang Cui, Benyuan Sun and Huaxiang Wang

Abstract

Purpose

Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution within a subject by injecting currents at its boundary and measuring the resulting changes in voltage. Image reconstruction for EIT is a nonlinear problem, and the generalized inverse operator is usually ill-posed and ill-conditioned. Therefore, the solutions for EIT are not unique and are highly sensitive to measurement noise.

Design/methodology/approach

This paper develops a novel image reconstruction algorithm for EIT based on patch-based sparse representation. The sparsifying dictionary optimization and image reconstruction are performed alternately. Two patch-based sparsity schemes, namely square-patch sparsity and column-patch sparsity, are discussed and compared with global sparsity.
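A minimal sketch of the square-patch decomposition on which such patch-based sparse coding operates; the dictionary learning and per-patch sparse coding steps themselves are omitted, and the image is hypothetical:

```python
def square_patches(img, p):
    """Split an image (list of rows) into non-overlapping p x p patches and
    flatten each -- the 'square-patch' units on which sparse coding acts."""
    patches = []
    for r in range(0, len(img) - p + 1, p):
        for c in range(0, len(img[0]) - p + 1, p):
            patch = [img[r + i][c + j] for i in range(p) for j in range(p)]
            patches.append(patch)
    return patches

# Hypothetical 2x4 image split into two 2x2 patches.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
patches = square_patches(img, 2)
```

Each flattened patch would then be approximated as a sparse combination of dictionary atoms, and the patches reassembled into the reconstructed image.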

Findings

Both simulation and experimental results indicate that the patch-based sparsity method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages.

Originality/value

EIT image is reconstructed based on patch-based sparse representation. Square-patch sparsity and column-patch sparsity are proposed and compared. Sparse dictionary optimization and image reconstruction are performed alternately. The new method tolerates a relatively high level of noise in measured voltages.

Details

Sensor Review, vol. 37 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 13 March 2007

B. Pradhan, K. Sandeep, Shattri Mansor, Abdul Rahman Ramli and Abdul Rashid B. Mohamed Sharif

Abstract

Purpose

In GIS applications, a realistic representation of a terrain requires a great number of triangles, which ultimately increases the data size. For online interactive GIS programs it has become essential to reduce the number of triangles to save storage space. There is therefore a need to visualize terrains at different levels of detail: a region of high interest should be in higher resolution than a region of low or no interest. Wavelet technology provides an efficient approach to achieve this; using it, one can decompose terrain data into a hierarchy. On the other hand, the number of triangles retained at subsequent levels should not become too small, otherwise the terrain is poorly represented.

Design/methodology/approach

This paper proposes a new computational code (see the Appendix for the flow chart and pseudo code) for triangulated irregular networks (TIN) using Delaunay triangulation methods. These algorithms have proved to be efficient tools in numerical methods such as the finite element method and in image processing. Further, second-generation wavelet techniques, popularly known as "lifting schemes", have been applied to compress the TIN data.

Findings

A new interpolation wavelet filter for TIN has been applied in two steps, namely splitting and elevation. In the splitting step, a triangle has been divided into several sub‐triangles and the elevation step has been used to “modify” the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second generation wavelets.
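The split/predict/update structure of a lifting scheme can be illustrated in one dimension with the Haar wavelet; the paper lifts over TIN triangles (splitting and elevation), so this 1-D sketch shows only the general mechanism:

```python
def haar_lift(signal):
    """One level of the Haar wavelet via lifting: split into even/odd
    samples, predict odds from evens (detail), update evens (coarse)."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    coarse = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return coarse, detail

def haar_unlift(coarse, detail):
    """Exact inverse: undo the update, undo the predict, interleave."""
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

coarse, detail = haar_lift([2.0, 4.0, 6.0, 8.0])
```

Compression then amounts to thresholding small detail coefficients before inverting; the transform itself is lossless.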

Originality/value

A new algorithm for second-generation wavelet compression has been proposed for TIN data compression. The quality of the geographical surface representation after using the proposed technique is compared with the original terrain. The results show that this method can significantly reduce the data set.

Details

Engineering Computations, vol. 24 no. 2
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 15 June 2023

Liang Gong, Hang Dong, Xin Cheng, Zhenghui Ge and Liangchao Guo

Abstract

Purpose

The purpose of this study is to propose a new method for the end-to-end classification of steel surface defects.

Design/methodology/approach

This study proposes an AM-AoN-SNN algorithm, which combines an attention mechanism (AM) with an All-optical Neuron-based spiking neural network (AoN-SNN). The AM enhances network learning and extracts defective features, while the AoN-SNN predicts both the labels of the defects and the final labels of the images. Compared to the conventional Leaky Integrate-and-Fire SNN, the AoN-SNN has improved neuron activation.
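For context, the conventional Leaky Integrate-and-Fire neuron that the AoN-SNN is compared against can be sketched as a simple discrete-time update; the leak factor and threshold here are illustrative values, not the paper's:

```python
def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """One update of a leaky integrate-and-fire neuron: decay the membrane
    potential, add the input current, spike and reset at threshold."""
    v = leak * v + i_in
    if v >= threshold:
        return 0.0, 1          # reset potential, emit a spike
    return v, 0

# Drive the neuron with a constant current until it fires.
v, spikes = 0.0, []
for current in [0.4, 0.4, 0.4, 0.0]:
    v, s = lif_step(v, current)
    spikes.append(s)
```

The spike train, rather than a continuous activation, is what propagates through an SNN; the all-optical neuron replaces this dynamics with an optical activation.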

Findings

The experimental findings on Northeast University (NEU)-CLS demonstrate that the proposed neural network detection approach outperforms other methods. Furthermore, the network’s effectiveness was tested, and the results indicate that the proposed method can achieve high detection accuracy and strong anti-interference capabilities while maintaining a basic structure.

Originality/value

This study introduces a novel approach to classifying steel surface defects using a combination of a shallow AoN-SNN and a hybrid AM with different network architectures. The proposed method is the first application of SNNs to this task.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 19 September 2016

Ziqiang Cui, Qi Wang, Qian Xue, Wenru Fan, Lingling Zhang, Zhang Cao, Benyuan Sun, Huaxiang Wang and Wuqiang Yang

Abstract

Purpose

Electrical capacitance tomography (ECT) and electrical resistance tomography (ERT) are promising techniques for multiphase flow measurement due to their high speed, low cost, non-invasiveness and visualization features. There are two major difficulties in image reconstruction for ECT and ERT: the "soft-field" effect, and the ill-posedness of the inverse problem, which is both under-determined and unstable, i.e. very sensitive to measurement errors and noise. This paper aims to summarize and evaluate the various reconstruction algorithms that have been studied and developed around the world over many years, and to provide a reference for further research and application.
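One standard response to such an ill-posed, unstable inverse problem is iterative regularization. This sketch uses the Landweber iteration, a common baseline in ECT/ERT reconstruction, on a hypothetical 2x2 linear system; the real forward operator is nonlinear and far larger:

```python
def matvec(A, x):
    """Dense matrix-vector product for small lists-of-lists matrices."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def landweber(A, b, steps=500, alpha=0.1):
    """Landweber iteration x_{k+1} = x_k + alpha * A^T (b - A x_k).
    Early stopping acts as regularization, damping the noise
    amplification typical of ill-conditioned systems."""
    x = [0.0] * len(A[0])
    At = transpose(A)
    for _ in range(steps):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        g = matvec(At, r)
        x = [xi + alpha * gi for xi, gi in zip(x, g)]
    return x

# Toy sensitivity matrix and measurement vector.
A = [[2.0, 0.0], [0.0, 1.0]]
x = landweber(A, [4.0, 3.0])
```

The step size alpha must stay below 2 divided by the square of the largest singular value of A for convergence; here that bound is 0.5.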

Design/methodology/approach

Over the past 10 years, various image reconstruction algorithms have been developed to deal with these problems, in fields including industrial multiphase flow measurement and biomedical diagnosis.

Findings

This paper reviews existing image reconstruction algorithms, together with new algorithms proposed by the authors, for electrical capacitance tomography and electrical resistance tomography in multiphase flow measurement and biomedical diagnosis.

Originality/value

The authors systematically summarize and evaluate the various reconstruction algorithms that have been studied and developed around the world over many years, providing a valuable reference for practical applications.

Article
Publication date: 16 August 2019

Neda Tadi Bani and Shervan Fekri-Ershad

Abstract

Purpose

A large amount of data is stored in image format, and image retrieval from bulk databases has become a hot research topic. An alternative method for efficient image retrieval is proposed based on a combination of texture and colour information. The main purpose of this paper is to propose a new content-based image retrieval approach using colour and texture information jointly in the spatial and transform domains.

Design/methodology/approach

Various image retrieval methods exist that try to extract image content based on texture, colour and shape. The proposed method extracts global and local texture and colour information in both the spatial and frequency domains. The image is first filtered by a Gaussian filter, then co-occurrence matrices are built in different directions and statistical features are extracted; the purpose of this phase is to extract noise-resistant local textures. A quantised histogram is then produced to extract global colour information in the spatial domain, and Gabor filter banks are used to extract local texture features in the frequency domain. After concatenating the extracted features, retrieval is performed using the normalised Euclidean criterion.
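The co-occurrence step described above can be sketched as follows: a grey-level co-occurrence matrix (GLCM) for a single direction offset, from which statistical texture features such as contrast and energy would be derived. The image and quantization level count are hypothetical:

```python
def glcm(img, levels, dr=0, dc=1):
    """Grey-level co-occurrence matrix for one offset (dr, dc): counts how
    often grey level i occurs with grey level j at that displacement."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    return m

# Hypothetical image quantised to 3 grey levels; horizontal offset (0, 1).
img = [[0, 0, 1],
       [1, 2, 2]]
m = glcm(img, levels=3)
```

Repeating this for several (dr, dc) offsets gives the "different directions" the abstract mentions.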

Findings

The performance of the proposed method is evaluated based on the precision, recall and run time measures on the Simplicity database. It is compared with many efficient methods of this field. The comparison results showed that the proposed method provides higher precision than many existing methods.

Originality/value

Rotation invariance, scale invariance and low sensitivity to noise are among the advantages of the proposed method. Its run time is within the usual range for algorithms in this domain, which indicates that it can be used online.

Details

The Electronic Library, vol. 37 no. 4
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 26 May 2020

S. Veluchamy and L.R. Karlmarx

Abstract

Purpose

Biometric identification systems have become an emerging research field because of their wide applications in security. This study concerns a multimodal system, which finds more applications than unimodal systems because of its high user acceptance, better recognition accuracy and low-cost sensors. Biometric identification using the finger knuckle and the palmprint finds more applications than other modalities because of their unique features.

Design/methodology/approach

The proposed model performs user authentication with features extracted from both palmprint and finger knuckle images. The two major processes in the proposed system are feature extraction and classification. After pre-processing, the proposed HE-Co-HOG model extracts a palmprint HE-Co-HOG vector and a finger knuckle HE-Co-HOG vector. These features from the two modalities are combined using the optimal weight score from the fractional firefly (FFF) algorithm, and the layered k-SVM classifier identifies each person from the fused vector.
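Score-level fusion with a single weight, as described above, reduces to a weighted sum. In this sketch the weight is a fixed constant standing in for the value the fractional firefly search would optimize, and the matching scores are hypothetical:

```python
def fuse_scores(palm_score, knuckle_score, w):
    """Weighted score-level fusion of the two modalities; w would come
    from the optimizer (a fractional firefly search in the paper)."""
    return w * palm_score + (1 - w) * knuckle_score

def identify(palm_scores, knuckle_scores, w=0.6):
    """Pick the enrolled identity with the highest fused matching score."""
    fused = [fuse_scores(p, k, w) for p, k in zip(palm_scores, knuckle_scores)]
    return max(range(len(fused)), key=fused.__getitem__)

# Hypothetical per-identity matching scores for one probe.
best = identify([0.2, 0.9, 0.4], [0.8, 0.3, 0.5])
```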

Findings

Two standard data sets containing palmprint and finger knuckle images were used for the simulation. The simulation results were analyzed in two ways: first, the bin sizes of the HE-Co-HOG vector were varied for various training sizes of the data set; second, the performance of the proposed model was compared with existing models for different training sizes. From the simulation results, the proposed model achieved a maximum accuracy of 0.95 and the lowest false acceptance and false rejection rates, with a value of 0.1.

Originality/value

In this paper, a multimodal biometric recognition system based on the proposed HE-Co-HOG with the k-SVM and the FFF is developed. The proposed model uses palmprint and finger knuckle images as the biometrics. The proposed HE-Co-HOG vector is developed by modifying the Co-HOG with holoentropy weights.

Details

Sensor Review, vol. 40 no. 2
Type: Research Article
ISSN: 0260-2288

Keywords
