Search results

1 – 10 of over 4000
Article
Publication date: 14 August 2017

Sudeep Thepade, Rik Das and Saurav Ghosh

Abstract

Purpose

Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identifying desired information in huge image databases makes the design of an efficient feature extraction process increasingly complex. Conventional approaches to image classification based on text-based image annotation suffer from erroneous interpretation of vocabulary and the huge time consumption of manual annotation. Content-based image recognition has emerged as an alternative to combat these limitations. However, exploring the rich feature content of an image with a single technique is less likely to extract meaningful signatures than multi-technique feature extraction. Therefore, the purpose of this paper is to explore enhanced content-based image recognition by fusing the classification decisions obtained with diverse feature extraction techniques.

Design/methodology/approach

Three novel techniques of feature extraction are introduced in this paper and tested with four different classifiers individually. The four classifiers used for performance testing were the K nearest neighbor (KNN) classifier, the RIDOR classifier, an artificial neural network classifier and a support vector machine classifier. Thereafter, the classification decisions obtained with the KNN classifier for the different feature extraction techniques were integrated by Z-score normalization and feature scaling to create a fusion-based framework for image recognition. This was followed by the introduction of a fusion-based retrieval model to validate the retrieval performance with a classified query. Earlier works on content-based image identification have adopted fusion-based approaches. However, to the best of the authors’ knowledge, fusion-based query classification is addressed for the first time as a precursor of retrieval in this work.
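
As a rough sketch of how classification decisions from several feature extraction techniques might be combined after Z-score normalization, consider the following Python fragment. The function names and the use of scikit-learn's KNeighborsClassifier are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: fuse per-technique KNN decision scores after Z-score
# normalization; not the authors' actual pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def zscore(scores, eps=1e-12):
    """Normalize a score matrix to zero mean and unit variance."""
    return (scores - scores.mean()) / (scores.std() + eps)

def fuse_knn_decisions(feature_sets, labels, query_sets, k=5):
    """feature_sets / query_sets: one array per feature extraction technique,
    holding the training and query feature vectors for that technique."""
    fused = None
    for X_train, X_query in zip(feature_sets, query_sets):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, labels)
        scores = zscore(knn.predict_proba(X_query))  # comparable across techniques
        fused = scores if fused is None else fused + scores
    return fused.argmax(axis=1)  # fused classification decision per query
```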

Findings

The proposed fusion techniques have successfully outclassed the state-of-the-art techniques in classification and retrieval performance. Four public data sets, namely, the Wang data set, the Oliva and Torralba (OT-scene) data set, the Corel data set and the Caltech data set, comprising 22,615 images in total, are used for evaluation.

Originality/value

To the best of the authors’ knowledge, fusion-based query classification has been addressed for the first time as a precursor of retrieval in this work. The novel idea of exploring rich image features by fusion of multiple feature extraction techniques has also encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 17 September 2019

Chérif Taouche and Hacene Belhadef

Abstract

Purpose

Palmprint recognition is a very interesting and promising area of research. Much work has already been done in this area, but much more needs to be done to make the systems more efficient. In this paper, a multimodal biometrics system based on fusion of the left and right palmprints of a person is proposed to overcome the limitations of unimodal systems.

Design/methodology/approach

Features are extracted using several proposed multi-block local descriptors in addition to MBLBP. Fusion of the extracted features is done at the feature level by simple concatenation of the feature vectors. Feature selection is then performed on the resulting global feature vector using evolutionary algorithms, namely genetic algorithms and the backtracking search algorithm, for comparison purposes. The benefits of such a step, which selects only the relevant features, are well known in the literature: it increases recognition accuracy and reduces the feature set size, which in turn saves runtime. In the matching step, the Chi-square similarity measure is used.
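
The Chi-square similarity measure used in the matching step can be illustrated with a minimal sketch like the one below; the helper name chi_square_distance and the toy gallery are assumptions for illustration, not the authors' code.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two concatenated histogram feature vectors
    (e.g. multi-block local descriptors); smaller means a closer match."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Toy usage: match a probe palmprint descriptor against a small gallery.
gallery = {"person_01": np.random.rand(256), "person_02": np.random.rand(256)}
probe = np.random.rand(256)
best_match = min(gallery, key=lambda pid: chi_square_distance(probe, gallery[pid]))
```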

Findings

The resulting feature vector representing a person is compact, and the runtime is reduced.

Originality/value

Intensive experiments were carried out on the publicly available IITD database. Experimental results show a recognition accuracy of 99.17%, which demonstrates the effectiveness and robustness of the proposed multimodal biometrics system compared with other unimodal and multimodal biometrics systems.

Details

Information Discovery and Delivery, vol. 48 no. 1
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 7 August 2017

Shenglan Liu, Muxin Sun, Xiaodong Huang, Wei Wang and Feilong Wang

Abstract

Purpose

Robot vision is a fundamental capability for human–robot interaction and complex robot tasks. In this paper, the authors use Kinect and propose a feature graph fusion (FGF) method for robot recognition.

Design/methodology/approach

The feature fusion utilizes red, green and blue (RGB) and depth information from Kinect to construct a fused feature. FGF involves multi-Jaccard similarity to compute a robust graph and a word embedding method to enhance the recognition results.
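
To give a feel for the Jaccard-similarity graph that underlies FGF, the sketch below computes a pairwise similarity matrix from binary (e.g. visual-word occurrence) codes of the images; the binary coding and the function names are illustrative assumptions rather than the authors' exact construction.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two binary indicator vectors."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def similarity_graph(codes):
    """codes: (n_images, n_words) binary matrix of RGB or depth image codes.
    Returns an (n_images, n_images) pairwise similarity graph."""
    n = len(codes)
    graph = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            graph[i, j] = jaccard(codes[i], codes[j])
    return graph
```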

Findings

The authors also collect a DUT RGB-Depth (RGB-D) face data set and a benchmark data set to evaluate the effectiveness and efficiency of the method. The experimental results illustrate that FGF is robust and effective on face and object data sets in robot applications.

Originality/value

The authors first utilize Jaccard similarity to construct a graph of RGB and depth images, which indicates the similarity of pair-wise images. The fused feature of the RGB and depth images is then computed from the Extended Jaccard Graph using a word embedding method. FGF achieves better performance and efficiency with an RGB-D sensor for robots.

Details

Assembly Automation, vol. 37 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 1 September 2001

G. Simone and F.C. Morabito

Abstract

A data fusion approach to the classification of eddy current and ultrasonic measurements is proposed in the context of defect detection/recognition methods for non‐destructive testing/evaluation systems. The purpose is to demonstrate that a multi‐sensor approach combining the advantages of each sensor is able to locate potential cracks on the inspected specimen. Different approaches have been compared: a pixel-level data fusion approach, which distinguishes between defect and no‐defect areas by means of the intensity of each pixel of the eddy current and ultrasonic data; a feature-level data fusion approach, which uses features computed on the measured data; and a symbol-level data fusion approach, which extracts symbols from the two sensors as complementary information and classifies the data using these symbols. The experimental results, obtained on an aluminium plate, demonstrated the ability of the proposed symbol-level approach to classify the input images with minimum overall error, taking into account the probability of detection and the probability of false alarm for the defect.
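
As an illustration of the simplest of the three fusion levels discussed above, a pixel-level combination of two co-registered measurement images could be sketched as follows; the weighting and the threshold are assumptions for illustration only, not the paper's model.

```python
import numpy as np

def pixel_level_fusion(eddy_img, ultra_img, w=0.5):
    """Fuse two co-registered measurement images pixel by pixel, then threshold
    to separate defect from no-defect regions (illustrative sketch)."""
    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-12)
    fused = w * norm(eddy_img) + (1 - w) * norm(ultra_img)
    return fused > fused.mean() + 2 * fused.std()  # boolean defect mask
```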

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 20 no. 3
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 17 May 2021

Guoyuan Shi, Yingjie Zhang and Manni Zeng

Abstract

Purpose

Workpiece sorting is a key link in industrial production lines. The vision-based workpiece sorting system is non-contact and widely applicable. The detection and recognition of workpieces are the key technologies of the workpiece sorting system. To introduce deep learning algorithms into workpiece detection and improve detection accuracy, this paper aims to propose a workpiece detection algorithm based on the single-shot multi-box detector (SSD).

Design/methodology/approach

A multi-feature fused SSD network is proposed for fast workpiece detection. First, multi-view CAD rendering images of the workpiece are used as the deep learning data set. Second, the Visual Geometry Group (VGG) network is trained for workpiece recognition to identify the category of the workpiece. Third, this study designs a multi-level feature fusion method to improve the detection accuracy of SSD (especially for small objects); specifically, a feature fusion module is added that uses element-wise sum and concatenation operations to combine the information of shallow and deep features.
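
A feature fusion module of the kind described here might look roughly like the PyTorch sketch below; the class name FusionBlock, the layer sizes and the upsampling choice are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlock(nn.Module):
    """Illustrative fusion of a shallow and a deep SSD feature map using
    element-wise sum and concatenation."""
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        self.align_deep = nn.Conv2d(deep_ch, shallow_ch, kernel_size=1)
        self.reduce = nn.Conv2d(2 * shallow_ch, out_ch, kernel_size=1)

    def forward(self, shallow, deep):
        # Bring the deep map to the shallow map's channels and resolution.
        deep_up = F.interpolate(self.align_deep(deep), size=shallow.shape[2:],
                                mode="bilinear", align_corners=False)
        summed = shallow + deep_up                    # element-wise sum
        fused = torch.cat([shallow, summed], dim=1)   # concatenation
        return self.reduce(fused)                     # fused feature map
```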

Findings

Experimental results show that the actual workpiece detection accuracy of the method can reach 96% and the speed can reach 41 frames per second. Compared with the original SSD, the method improves the accuracy by 7% and improves the detection performance for small objects.

Originality/value

This paper innovatively introduces the SSD detection algorithm into workpiece detection in industrial scenarios and improves it. A feature fusion module has been added to combine the information of shallow features and deep features. The multi-feature fused SSD network proves the feasibility and practicality of introducing deep learning algorithms into workpiece sorting.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 4 June 2021

Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu

Abstract

Purpose

For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, the accuracy of perception is closely related to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thus improve the accuracy of environmental perception, this paper introduces an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which reduces the false-detection rate resulting from sensor misdetection.

Design/methodology/approach

First, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.
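
A highly simplified illustration of cross-checking the two sensors to suppress false alarms is sketched below; the data layout and the gating distance are assumptions for illustration and not the paper's depth fusion model.

```python
import numpy as np

def cross_validate_detections(lidar_objs, radar_objs, gate=1.5):
    """Keep a lidar detection only if a radar return lies within `gate` metres.
    lidar_objs / radar_objs: (n, 2) arrays of (x, y) positions in a common frame."""
    if len(radar_objs) == 0:
        return np.empty((0, 2))
    confirmed = []
    for lx, ly in lidar_objs:
        dists = np.hypot(radar_objs[:, 0] - lx, radar_objs[:, 1] - ly)
        if dists.min() < gate:            # radar confirms the lidar cluster;
            confirmed.append((lx, ly))    # dust/mist returns rarely pass this check
    return np.array(confirmed)
```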

Findings

The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with the use of a single sensor, can filter out radar false alarms and the point clouds of dust or mist received by the lidar. The accuracy of object detection is therefore also improved under challenging conditions.

Originality/value

The multi-layer self-calibration method improves the accuracy of calibration and reduces the workload of manual calibration. In addition, the depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the point clouds of dust or mist received by the lidar, which improves ICVs’ performance in challenging conditions.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 9 February 2021

Yaolin Zhu, Jiayi Huang, Tong Wu and Xueqin Ren

Abstract

Purpose

The purpose of this paper is to select the optimal feature parameters to further improve the identification accuracy of cashmere and wool.

Design/methodology/approach

To increase accuracy, the authors put forward a method for selecting optimal parameters based on the fusion of a morphological feature and texture features. The first step is to acquire the fiber diameter, measured by the central axis algorithm. The second step is to acquire the optimal texture feature parameters, mainly by using the variance of the secondary statistics of the two texture features to obtain four statistics and then finding the impact factors of the gray-level co-occurrence matrix from the relationship between the secondary statistic values and the pixel pitch. Finally, the five-dimensional feature vectors extracted from the sample images are fed into a Fisher classifier.
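
A rough sketch of assembling such a five-dimensional feature vector (measured fiber diameter plus gray-level co-occurrence matrix statistics) and feeding it to a Fisher (linear discriminant) classifier is given below, assuming scikit-image and scikit-learn; the particular GLCM properties and pixel pitch are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fiber_feature_vector(gray_img, diameter, pixel_pitch=4):
    """gray_img: uint8 grayscale fiber image; diameter: value from the
    central axis algorithm. Returns a five-dimensional feature vector."""
    glcm = graycomatrix(gray_img, distances=[pixel_pitch],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    stats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "correlation", "energy", "homogeneity")]
    return np.array([diameter, *stats])

# Fisher classifier over extracted vectors (X: (n_samples, 5), y: labels):
# clf = LinearDiscriminantAnalysis().fit(X, y)
```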

Findings

The improvement of identification accuracy is achieved by determining the optimal feature parameters and fusing the two texture features. The average identification accuracy is 96.713%, which is very helpful for improving the efficiency of detection in the textile industry.

Originality/value

In this paper, a novel identification method that extracts the optimal feature parameters is proposed.

Details

International Journal of Clothing Science and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 21 June 2021

Zhoufeng Liu, Shanliang Liu, Chunlei Li and Bicao Li

Abstract

Purpose

This paper aims to propose a new method to solve two problems in fabric defect detection. Current state-of-the-art industrial product defect detectors are deep learning-based, which incurs some additional problems: (1) the model is difficult to train because fabric data sets are small, owing to the difficulty of collecting pictures; and (2) the detection accuracy of existing methods is insufficient for implementation in the industrial field. This study proposes a new method that can be applied to fabric defect detection in the industrial field.

Design/methodology/approach

To cope with existing fabric defect detection problems, the article proposes a novel fabric defect detection method based on multi-source feature fusion. In the training process, both layer features and source model information are fused to enhance robustness and accuracy. Additionally, a novel training model called multi-source feature fusion (MSFF) is proposed to tackle the limited samples and the demand for fast and precise quantification obtained automatically.

Findings

The paper provides a novel fabric defect detection method. Experimental results demonstrate that the proposed method achieves an AP of 93.9% and 98.8% when applied to the TILDA (a public data set) and ZYFD (a real-shot data set) data sets, respectively, and outperforms fine-tuned SSD (single-shot multi-box detector) by 5.9%.

Research limitations/implications

Our proposed algorithm can provide a promising tool for fabric defect detection.

Social implications

This work provides technical support for real-time detection on industrial sites, advances the shift from manual to intelligent detection of fabric defects and provides a technical reference for object detection in other industrial scenarios.

Originality/value

The proposed algorithm can provide a promising tool for fabric defect detection.

Details

International Journal of Clothing Science and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 7 June 2021

Sixian Chan, Jian Tao, Xiaolong Zhou, Binghui Wu, Hongqiang Wang and Shengyong Chen

Abstract

Purpose

Visual tracking technology enables industrial robots to interact with human beings intelligently. However, due to the complexity of the tracking problem, the accuracy of visual target tracking still has considerable room for improvement. This paper aims to propose an accurate visual target tracking method based on standard hedging and feature fusion.

Design/methodology/approach

For this study, the authors first learn the discriminative information between targets and similar objects in the histogram of oriented gradients by a feature optimization method, and then use standard hedging algorithms to dynamically balance the weights between the different feature optimization components. Moreover, they penalize the filter coefficients by incorporating a spatial regularization coefficient and extend the Kernelized Correlation Filter for robust tracking. Finally, a model update mechanism is proposed to improve the effectiveness of the tracking.
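
The standard hedging idea of dynamically re-weighting several feature components according to their recent losses can be sketched with the generic Hedge update below; the learning rate and the way losses are obtained are assumptions, not the paper's exact formulation.

```python
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    """Generic Hedge step: exponentially down-weight components with high loss,
    then renormalize so the weights still sum to one."""
    w = np.asarray(weights) * np.exp(-eta * np.asarray(losses))
    return w / w.sum()

# Toy usage: three feature components; the second performed poorly this frame.
w = np.ones(3) / 3
w = hedge_update(w, losses=[0.1, 0.9, 0.2])
# The fused tracking response would then be a weighted sum of the per-component
# correlation filter responses using these weights.
```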

Findings

Extensive experimental results demonstrate the superior performance of the proposed method compared with state-of-the-art tracking methods.

Originality/value

Improvements to existing visual target tracking algorithms are achieved through feature fusion and standard hedging algorithms, further improving the tracking accuracy of robots on real-world targets.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 February 2021

Zhifeng Wang, Chi Zuo and Chunyan Zeng

Abstract

Purpose

Recently, double joint photographic experts group (JPEG) compression detection tasks have received much more attention in the field of Web image forensics. Although several useful methods have been proposed for double JPEG compression detection when the quantization matrices in the primary and secondary compression processes are different, it is still a difficult problem when the quantization matrices are the same. Moreover, the methods for different and for identical quantization matrices are implemented in independent ways. This paper aims to build a new unified framework for detecting double JPEG compression.

Design/methodology/approach

First, the Y channel of the JPEG image is cut into 8 × 8 non-overlapping blocks, and two groups of features that characterize the artifacts caused by double JPEG compression with the same and with different quantization matrices are extracted from those blocks. Then, Riemannian manifold learning is applied for dimensionality reduction while preserving the local intrinsic structure of the features. Finally, a deep stacked autoencoder network with seven layers is designed to detect double JPEG compression.
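
The first step, cutting the Y channel into 8 × 8 non-overlapping blocks, can be sketched as below; the per-block DCT shown here is an illustrative stand-in for the paper's block features rather than its exact feature definition.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_coefficients(y_channel, block=8):
    """Cut a 2-D Y channel into non-overlapping 8x8 blocks and return the
    2-D DCT coefficients of each block (illustrative feature extraction)."""
    h, w = (dim - dim % block for dim in y_channel.shape)
    blocks = (y_channel[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block, block))
    return np.stack([dctn(b.astype(float), norm="ortho") for b in blocks])
```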

Findings

Experimental results with different quality factors show that the proposed approach performs much better than state-of-the-art approaches.

Practical implications

To verify the integrity and authenticity of Web images, research on double JPEG compression detection is receiving increasing attention.

Originality/value

This paper proposes a unified framework to detect double JPEG compression regardless of whether the quantization matrices are the same or different, which means the approach can be applied in more practical Web forensics tasks.

Details

International Journal of Web Information Systems, vol. 17 no. 2
Type: Research Article
ISSN: 1744-0084
