Search results

1 – 10 of over 9000
Article
Publication date: 14 August 2017

Sudeep Thepade, Rik Das and Saurav Ghosh

Abstract

Purpose

Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identification of desired information from huge image databases faces increasing complexity in the design of an efficient feature extraction process. Conventional approaches to image classification with text-based image annotation have faced assorted limitations due to erroneous interpretation of vocabulary and the huge time consumption involved in manual annotation. Content-based image recognition has emerged as an alternative to combat the aforesaid limitations. However, exploring the rich feature content in an image with a single technique has a lower probability of extracting meaningful signatures than multi-technique feature extraction. Therefore, the purpose of this paper is to explore the possibilities of enhanced content-based image recognition by fusion of the classification decisions obtained using diverse feature extraction techniques.

Design/methodology/approach

Three novel techniques of feature extraction are introduced in this paper and tested with four different classifiers individually. The four classifiers used for performance testing were the K nearest neighbor (KNN) classifier, the RIDOR classifier, an artificial neural network classifier and a support vector machine classifier. Thereafter, the classification decisions obtained with the KNN classifier for the different feature extraction techniques were integrated by Z-score normalization and feature scaling to create a fusion-based framework of image recognition. This was followed by the introduction of a fusion-based retrieval model to validate the retrieval performance with a classified query. Earlier works on content-based image identification have adopted a fusion-based approach; however, to the best of the authors' knowledge, fusion-based query classification is addressed for the first time in this work as a precursor of retrieval.
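The Z-score fusion of classification decisions described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the per-class score lists and the sum-after-normalization fusion rule are assumptions.

```python
from statistics import mean, stdev

def z_normalize(scores):
    """Z-score normalize one pipeline's per-class confidence scores."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def fuse_decisions(score_lists):
    """Fuse per-class scores from several feature-extraction pipelines:
    z-normalize each pipeline's scores, sum them per class, and return
    the index of the winning class."""
    normalized = [z_normalize(s) for s in score_lists]
    fused = [sum(col) for col in zip(*normalized)]
    return fused.index(max(fused))
```

For example, with three hypothetical pipelines voting over three classes, `fuse_decisions([[0.9, 0.1, 0.3], [0.2, 0.8, 0.1], [0.7, 0.2, 0.4]])` returns class index 0.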

Findings

The proposed fusion techniques have successfully outclassed the state-of-the-art techniques in classification and retrieval performance. Four public data sets, namely, the Wang data set, the Oliva and Torralba (OT-scene) data set, the Corel data set and the Caltech data set, comprising 22,615 images in total, are used for evaluation.

Originality/value

To the best of the authors’ knowledge, fusion-based query classification has been addressed for the first time as a precursor of retrieval in this work. The novel idea of exploring rich image features by fusion of multiple feature extraction techniques has also encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 11 June 2020

Yuh-Min Chen, Tsung-Yi Chen and Lyu-Cian Chen

Abstract

Purpose

Location-based services (LBS) have become an effective commercial marketing tool. However, regarding retail store location selection, it is challenging to collect analytical data. In this study, location-based social network data are employed to develop a retail store recommendation method by analyzing the relationship between user footprints and points-of-interest (POIs). Based on correlation analysis of the target area and the extraction of crowd mobility patterns, the features for retail store recommendation are constructed.

Design/methodology/approach

Calculations of industrial density, area category, clustering and area saturation between POIs are designed. Methods such as kernel density estimation and K-means are used to calculate the influence of area relevance on retail store selection.
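The two methods named above can be illustrated with a deliberately simplified 1-D sketch (the actual work operates on 2-D geographic POI coordinates; the seeding of cluster centers with the first k points and the Gaussian kernel are assumptions):

```python
import math

def gaussian_kde(points, x, bandwidth=1.0):
    """Kernel density estimate of POI density at position x (1-D sketch)."""
    n = len(points)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points) / norm

def kmeans_1d(points, k, iters=20):
    """Minimal k-means over POI positions; the first k points seed the centers."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

Density is higher near existing POIs (`gaussian_kde([0.0, 1.0], 0.5)` exceeds the value far away at `x = 5.0`), which is the kind of signal an area-saturation feature would draw on.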

Findings

The coffee retail industry is used as an example to analyze the retail location recommendation method and assess the accuracy of the method.

Research limitations/implications

This study is mainly limited by the size and density of the datasets. Owing to the limitations imposed by the location-based privacy policy, it is challenging to perform experimental verification using the latest data.

Originality/value

An industrial relevance questionnaire is designed, and the responses are arranged using a simple checklist to conveniently establish a method for filtering the industrial nature of the adjacent areas. The New York and Tokyo datasets from Foursquare and the Tainan city dataset from Facebook are employed for feature extraction and validation. A higher evaluation score is obtained compared with relevant studies with regard to the normalized discounted cumulative gain index.

Details

Online Information Review, vol. 45 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 12 June 2019

Shantanu Kumar Das and Abinash Kumar Swain

Abstract

Purpose

This paper aims to present the classification, representation and extraction of adhesively bonded assembly features (ABAFs) from the computer-aided design (CAD) model.

Design/methodology/approach

The ABAFs are represented as a set of faces with a characteristic arrangement among the faces of parts in proximity suitable for adhesive bonding. The characteristic combination of the faying surfaces and their topological relationships helps in the classification of ABAFs. The ABAFs are classified into elementary and compound types based on the number of assembly features that exist at the joint location.

Findings

A set of algorithms is developed to extract and identify the ABAFs from the CAD model. Typical automotive and aerospace CAD assembly models have been used to illustrate and validate the proposed approach.

Originality/value

New classification and extraction methods for ABAFs are proposed, which are useful for variant design.

Details

Assembly Automation, vol. 39 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 18 January 2016

Jia Yan, Shukai Duan, Tingwen Huang and Lidan Wang

Abstract

Purpose

The purpose of this paper is to improve the performance of the E-nose in the detection of wound infection. Feature extraction and selection methods have a strong impact on the pattern classification performance of an electronic nose (E-nose). A new hybrid feature matrix construction method and a multi-objective binary quantum-behaved particle swarm optimization (BQPSO) have been proposed for feature extraction and selection over the sensor array.

Design/methodology/approach

A hybrid feature matrix constructed from the maximum value and wavelet coefficients is proposed to realize feature extraction. Multi-objective BQPSO, whose fitness function contains the classification accuracy and the number of selected sensors, is used for feature selection. Quantum-behaved particle swarm optimization (QPSO) is used for the joint optimization of the selected features and the classifier parameters. A radial basis function (RBF) network is used for classification.
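The hybrid feature construction (maximum sensor value plus wavelet coefficients) can be sketched as follows. To keep the sketch dependency-free, a one-level Haar transform stands in for the db5 wavelet the paper's results favour; the choice of approximation coefficients only is also an assumption.

```python
import math

def haar_coeffs(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def hybrid_feature_vector(sensor_response):
    """Hybrid feature for one sensor: its maximum response value
    concatenated with the wavelet approximation coefficients."""
    approx, _ = haar_coeffs(sensor_response)
    return [max(sensor_response)] + approx
```

Stacking one such vector per sensor yields the hybrid feature matrix over which the BQPSO selection would then run.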

Findings

The E-nose obtains the highest classification accuracy when the maximum value and db5 wavelet coefficients are extracted as the hybrid features and only six sensors are selected for classification. All results make it clear that the proposed method is an ideal feature extraction and selection method for an E-nose in the detection of wound infection.

Originality/value

The innovative concept improves the performance of E-nose in wound monitoring, and is beneficial for realizing the clinical application of E-nose.

Details

Sensor Review, vol. 36 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 15 May 2020

Farid Esmaeili, Hamid Ebadi, Mohammad Saadatseresht and Farzin Kalantary

Abstract

Purpose

Displacement measurement in large-scale structures (such as excavation walls) is one of the most important applications of close-range photogrammetry, in which achieving high precision requires extracting and accurately matching local features from convergent images. The purpose of this study is to introduce a new multi-image pointing (MIP) algorithm based on the characteristics of the geometric model generated from the initial matching. This self-adaptive algorithm is used to correct and improve the accuracy of the positions extracted from local features in the convergent images.

Design/methodology/approach

In this paper, the new MIP algorithm, based on the geometric characteristics of the model generated from the initial matching, was introduced; it corrects the extracted image coordinates in a self-adaptive way. The unique characteristics of the proposed algorithm are that the position correction is accomplished through continuous interaction between the 3D model coordinates and the image coordinates, and that it has the least dependency on the geometric and radiometric nature of the images. After the initial feature extraction and implementation of the MIP algorithm, the image coordinates are ready for use in the displacement measurement process. The combined photogrammetry displacement adjustment (CPDA) algorithm was used for displacement measurement between two epochs. Micro-geodesy, target-based photogrammetry and the proposed MIP methods were used in a displacement measurement project for an excavation wall in the Velenjak area in Tehran, Iran, to evaluate the proposed algorithm's performance. According to the results, a measurement accuracy of 8 mm for the point geo-coordinates and a displacement accuracy of 13 mm could be achieved using the MIP algorithm. Besides agreeing with the micro-geodesy method, the results were corroborated by the cracks created behind the project's wall. Given the maximum allowable displacement limit of 4 cm in this project, the use of the MIP algorithm produced the accuracy required to determine the critical displacement in the project.

Findings

Evaluation of the results demonstrated that an accuracy of 8 mm in determining the position of the points on the feature and an accuracy of 13 mm in the displacement measurement of the excavation walls could be achieved by precise positioning of local features on images using the MIP algorithm. The proposed algorithm can be used in all applications that need to achieve high accuracy in determining the 3D coordinates of local features in close-range photogrammetry.

Originality/value

Some advantages of the proposed MIP photogrammetry algorithm, including the ease of obtaining observations and the use of local features on the structure in the images rather than installed artificial targets, make it possible to effectively replace micro-geodesy and instrumentation methods. In addition, the proposed MIP method is superior to the target-based photogrammetric method because it does not need artificial target installation and protection. Moreover, in each photogrammetric application that needs to determine exact point coordinates on the feature, the proposed algorithm can be very effective in achieving the accuracy required by the desired objectives.

Article
Publication date: 2 April 2019

Hei Chia Wang, Yu Hung Chiang and Yi Feng Sun

Abstract

Purpose

This paper aims to improve a sentiment analysis (SA) system to help users (i.e. customers or hotel managers) understand hotel evaluations. There are three main purposes in this paper: designing an unsupervised method for extracting feature-opinion pairs from online Chinese reviews, distinguishing different intensities of polarity in opinion words and examining changes in polarity over the time series.

Design/methodology/approach

In this paper, a review analysis system is proposed to automatically capture the feature opinions of other tourists presented in the review documents. In the system, a feature-level SA is designed to determine the polarity of these features. Moreover, an unsupervised method that uses a part-of-speech pattern clarification query and multi-lexicon SA to summarize all Chinese reviews is adopted.
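The intensity-graded, lexicon-based scoring of a feature-opinion pair can be sketched as below. The lexicon entries, the English stand-in words (the paper's lexicons are Chinese) and the negation-flip rule are all illustrative assumptions, not the authors' lexicons.

```python
# Hypothetical intensity-graded lexicon: polarity strength in [-2, 2].
LEXICON = {"excellent": 2, "good": 1, "poor": -1, "terrible": -2}
NEGATORS = {"not", "never"}

def score_opinion(words):
    """Score one feature-opinion phrase: sum lexicon intensities,
    flipping the sign of the opinion word that follows a negator."""
    score, flip = 0, 1
    for w in words:
        if w in NEGATORS:
            flip = -1
        elif w in LEXICON:
            score += flip * LEXICON[w]
            flip = 1
    return score
```

So "not good" scores -1 while "excellent" scores 2, giving the different polarity intensities the paper distinguishes.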

Findings

The authors expect this method to help travellers search for what they want and make decisions more efficiently. The experimental results show the F-measure of the proposed method to be 0.628. It thus outperforms the methods used in previous studies.

Originality/value

The study is useful for travellers who want to quickly retrieve and summarize helpful information from the pool of messy hotel reviews. Meanwhile, the system will assist hotel managers to comprehensively understand service qualities with which guests are satisfied or dissatisfied.

Details

The Electronic Library, vol. 37 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 25 January 2018

Hima Bindu and Manjunathachari K.

Abstract

Purpose

This paper aims to develop a hybrid feature descriptor and probabilistic neuro-fuzzy system for attaining high accuracy in a face recognition system. In recent days, facial recognition (FR) systems play a vital part in several applications such as surveillance, access control and image understanding. Accordingly, various face recognition methods have been developed in the literature, but the applicability of these algorithms is restricted because of unsatisfactory accuracy. So, the improvement of face recognition is significantly important for the current trend.

Design/methodology/approach

This paper proposes a face recognition system based on feature extraction and classification. The proposed model extracts both the local and the global features of the image. The local features are extracted using the kernel-based scale invariant feature transform (K-SIFT) model, and the global features are extracted using the proposed m-Co-HOG model, a modification of co-occurrence histograms of oriented gradients (Co-HOG) that retains the properties of the Co-HOG algorithm. The feature vector database contains the combined local and global feature vectors derived using the K-SIFT model and the proposed m-Co-HOG algorithm. This paper proposes a probabilistic neuro-fuzzy classifier system for finding the identity of a person from the extracted feature vector database.
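A toy gradient-orientation histogram in the spirit of HOG shows the kind of global descriptor involved. This is not the authors' m-Co-HOG (which additionally uses co-occurrence statistics and a colour gradient decomposition); the single-cell, 8-bin layout is an assumption for illustration.

```python
import math

def orientation_histogram(image, bins=8):
    """Histogram of gradient orientations over a 2-D intensity grid,
    weighted by gradient magnitude (single-cell HOG-style sketch)."""
    h = [0.0] * bins
    rows, cols = len(image), len(image[0])
    for y in range(rows - 1):
        for x in range(cols - 1):
            gx = image[y][x + 1] - image[y][x]   # horizontal gradient
            gy = image[y + 1][x] - image[y][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            h[int(ang / (2 * math.pi) * bins) % bins] += mag
    return h
```

On a tiny vertical-edge image such as `[[0.0, 1.0], [0.0, 1.0]]`, all the gradient mass lands in the 0-radian bin.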

Findings

The face images required for the simulation of the proposed work are taken from the CVL database. The simulation considers a total of 114 persons from the CVL database. From the results, it is evident that the proposed model has outperformed the existing models with an improved accuracy of 0.98. The false acceptance rate (FAR) and false rejection rate (FRR) values of the proposed model are as low as 0.01.

Originality/value

This paper proposes a face recognition system with proposed m-Co-HOG vector and the hybrid neuro-fuzzy classifier. Feature extraction was based on the proposed m-Co-HOG vector for extracting the global features and the existing K-SIFT model for extracting the local features from the face images. The proposed m-Co-HOG vector utilizes the existing Co-HOG model for feature extraction, along with a new color gradient decomposition method. The major advantage of the proposed m-Co-HOG vector is that it utilizes the color features of the image along with other features during the histogram operation.

Details

Sensor Review, vol. 38 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 26 March 2021

Hima Bindu Valiveti, Anil Kumar B., Lakshmi Chaitanya Duggineni, Swetha Namburu and Swaraja Kuraparthi

Abstract

Purpose

Road accidents, inadvertent mishaps, can be detected automatically and alerts sent instantly through the collaboration of image processing techniques and on-road video surveillance systems. However, relying exclusively on visual information, especially under adverse conditions such as night time, dark areas and unfavourable weather (snowfall, rain and fog) that result in faint visibility, leads to uncertainty. The main goal of the proposed work is certainty of accident occurrence.

Design/methodology/approach

The authors of this work propose a method for detecting road accidents by analyzing audio signals to identify hazardous situations such as tire skidding and car crashes. The motive of this project is to build a simple and complete audio event detection system using signal feature extraction methods to improve its detection accuracy. The experimental analysis is carried out on a publicly available real-time data-set consisting of audio samples like car crashes and tire skidding. The Temporal features of the recorded audio signal, like Energy, Volume and Zero Crossing Rate (ZCR), the Spectral features, like Spectral Centroid, Spectral Spread, Spectral Roll-off factor and Spectral Flux, and the Psychoacoustic features, like Energy Sub-Bands ratio and Gammatonegram, are computed. The extracted features are pre-processed, then trained and tested using Support Vector Machine (SVM) and K-nearest neighbor (KNN) classification algorithms for exact prediction of accident occurrence over various SNR ranges. The combination of the Gammatonegram with the Temporal and Spectral features validates to be superior compared to the existing detection techniques.
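Two of the named features, Zero Crossing Rate (temporal) and Spectral Centroid (spectral), can be computed directly from a frame of samples. This is a generic textbook sketch, not the authors' pipeline; the direct DFT below is for clarity, a real system would use an FFT.

```python
import math

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def spectral_centroid(frame):
    """Magnitude-weighted mean frequency bin of the frame's DFT."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags)
    return sum(k * m for k, m in zip(range(n // 2), mags)) / total if total else 0.0
```

A rapidly alternating frame has the maximum ZCR of 1.0, while a constant (DC) frame has a spectral centroid at bin 0, which is the separation these features exploit between skid screech and low rumble.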

Findings

Temporal, Spectral and Psychoacoustic features and the Gammatonegram of the recorded audio signal are extracted. A high-level vector is generated based on the centroid, and the extracted features are classified with the help of machine learning algorithms like SVM, KNN and decision trees (DT). The audio samples collected have varied SNR ranges, and the accuracy of the classification algorithms is thoroughly tested.

Practical implications

Denoising of the audio samples for perfect feature extraction was a tedious chore.

Originality/value

The existing literature covers extraction of Temporal and Spectral features followed by the application of classification algorithms. For perfect classification, the authors have chosen to construct a high-level vector from all four extracted feature groups: Temporal, Spectral, Psychoacoustic and Gammatonegram. The classification algorithms are employed on samples collected at varied SNR ranges.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 3 November 2020

Femi Emmanuel Ayo, Olusegun Folorunso, Friday Thomas Ibharalu and Idowu Ademola Osinuga

Abstract

Purpose

Hate speech is an expression of intense hatred. Twitter has become a popular analytical tool for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has witnessed special research attention in recent studies, hence, the need to design a generic metadata architecture and efficient feature extraction technique to enhance hate speech detection.

Design/methodology/approach

This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that combines Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction with Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input to the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language or neither.
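The word-level TF-IDF half of the hybrid embeddings can be sketched from first principles (the LSTM sentence-level half is out of scope here, and this plain formulation is a standard one, not necessarily the exact variant the authors use):

```python
import math

def tf_idf(docs):
    """TF-IDF weights for a list of tokenized documents: term frequency
    within each document times the log inverse document frequency."""
    n = len(docs)
    df = {}
    for doc in docs:
        for w in set(doc):
            df[w] = df.get(w, 0) + 1
    weights = []
    for doc in docs:
        tf = {w: doc.count(w) / len(doc) for w in set(doc)}
        weights.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return weights
```

A term appearing in every document (e.g. "speech" in two hypothetical tweets about speech) gets weight 0, while terms specific to one document keep a positive weight, which is what makes the features discriminative.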

Findings

The proposed method showed better results when tested on the collected Twitter datasets compared to other related methods. To validate its performance, paired-sample t-tests and post hoc multiple comparisons were conducted to compare the significance and means of the proposed method against those of other related methods for hate speech detection.

Research limitations/implications

Finally, the evaluation results showed that the proposed method outperforms other related methods with a mean F1-score of 91.3.

Originality/value

The main novelty of this study is the use of an automatic topic spotting measure based on naïve Bayes model to improve features representation.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 2 July 2020

N. Venkata Sailaja, L. Padmasree and N. Mangathayaru

Abstract

Purpose

Text mining has been used for various knowledge discovery-based applications, and thus a lot of research has been contributed towards it. The latest trend in text mining research is the adoption of incremental learning, as it is economical when dealing with large volumes of information.

Design/methodology/approach

The primary intention of this research is to design and develop a technique for incremental text categorization using an optimized Support Vector Neural Network (SVNN). The proposed technique involves four major steps: pre-processing, feature extraction, feature selection and classification. Initially, the data is pre-processed by stop-word removal and stemming. Then, feature extraction is done by extracting semantic word-based features and Term Frequency and Inverse Document Frequency (TF-IDF) features. From the extracted features, the important ones are selected using the Bhattacharya distance measure and provided as input to the proposed classifier. The proposed classifier performs incremental learning using the SVNN, wherein the weights are bounded within a limit using rough set theory. Moreover, the Moth Search (MS) algorithm is used for the optimal selection of weights in the SVNN. Thus, the proposed classifier, named Rough set MS-SVNN, performs text categorization for the incremental data given as input.
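The distance-based feature selection step can be sketched with the closed form of the Bhattacharyya distance (spelled "Bhattacharya" in the abstract) between two univariate Gaussian class-conditional distributions. The Gaussian assumption, the per-feature statistics and the threshold are illustrative, not taken from the paper.

```python
import math

def bhattacharyya_distance(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussians; a larger value
    means the feature separates the two classes better."""
    term1 = (mu1 - mu2) ** 2 / (4 * (var1 + var2))
    term2 = 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2)))
    return term1 + term2

def select_features(stats, threshold):
    """Keep indices of features whose class-separation distance exceeds
    the threshold; stats holds (mu1, var1, mu2, var2) per feature."""
    return [i for i, (m1, v1, m2, v2) in enumerate(stats)
            if bhattacharyya_distance(m1, v1, m2, v2) > threshold]
```

A feature with identical class distributions scores 0 and is dropped, while one whose class means differ clearly survives the cut.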

Findings

For the experimentation, the 20 Newsgroups dataset and the Reuters dataset are used. Simulation results indicate that the proposed Rough set based MS-SVNN has achieved 0.7743, 0.7774 and 0.7745 for precision, recall and F-measure, respectively.

Originality/value

In this paper, an online incremental learner is developed for text categorization. The text categorization is done by developing the Rough set MS-SVNN classifier, which classifies incoming texts based on the boundary condition evaluated by rough set theory and the optimal weights from the MS algorithm. The proposed online text categorization scheme has the basic steps of pre-processing, feature extraction, feature selection and classification. The pre-processing is carried out to identify the unique words in the dataset, and features like semantic word-based features and TF-IDF are obtained from the keyword set. Feature selection is done by setting a minimum Bhattacharya distance measure, and the selected features are provided to the proposed Rough set MS-SVNN for classification.

Details

Data Technologies and Applications, vol. 54 no. 5
Type: Research Article
ISSN: 2514-9288
