Search results

1 – 10 of over 48000
Article
Publication date: 2 November 2023

Khaled Hamed Alyoubi, Fahd Saleh Alotaibi, Akhil Kumar, Vishal Gupta and Akashdeep Sharma


Abstract

Purpose

The purpose of this paper is to describe a new approach to sentence representation learning leading to text classification using Bidirectional Encoder Representations from Transformers (BERT) embeddings. This work proposes a novel BERT-convolutional neural network (CNN)-based model for sentence representation learning and text classification. The proposed model can be used by industries working on text similarity scoring, sentiment analysis and opinion analysis.

Design/methodology/approach

The approach is based on using the BERT model to provide distinct features from its transformer encoder layers to CNNs to achieve multi-layer feature fusion. The distinct feature vectors of the last three BERT layers are passed to three separate CNN layers to generate a rich feature representation that can be used for extracting the keywords in the sentences. For sentence representation learning and text classification, the proposed model is trained and tested on the Stanford Sentiment Treebank-2 (SST-2) data set for sentiment analysis and the Quora Question Pairs (QQP) data set for sentence classification. To obtain benchmark results, a selective training approach has been applied with the proposed model.
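The multi-layer fusion described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the hidden states and convolution kernels are random stand-ins for real BERT encoder outputs and trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """Valid 1D convolution over the sequence axis followed by ReLU.
    x: (seq_len, dim); kernel: (k, dim, out_channels) -> (seq_len - k + 1, out_channels)."""
    k, dim, out_ch = kernel.shape
    steps = x.shape[0] - k + 1
    out = np.empty((steps, out_ch))
    for t in range(steps):
        out[t] = np.tensordot(x[t:t + k], kernel, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

# Random stand-ins for the hidden states of BERT's last three encoder layers
seq_len, dim = 16, 32
layers = [rng.standard_normal((seq_len, dim)) for _ in range(3)]

# One separate CNN per layer, then max-pooling over time and concatenation
kernels = [rng.standard_normal((3, dim, 8)) * 0.1 for _ in range(3)]
pooled = [conv1d(h, k).max(axis=0) for h, k in zip(layers, kernels)]
fused = np.concatenate(pooled)  # the fused multi-layer feature vector
print(fused.shape)  # (24,)
```

The fused vector would then feed a classification head; the kernel width, channel counts and pooling choice here are assumptions for illustration.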

Findings

On the SST-2 data set, the proposed model achieved an accuracy of 92.90%, whereas on the QQP data set it achieved an accuracy of 91.51%. For other evaluation metrics such as precision, recall and F1 score, the results obtained are similarly strong. The results with the proposed model are 1.17%–1.2% better than those of the original BERT model on the SST-2 and QQP data sets.

Originality/value

The novelty of the proposed model lies in the multi-layer feature fusion between the last three layers of the BERT model with CNN layers and the selective training approach based on gated pruning to achieve benchmark results.

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 1 December 1994

Chin‐Sheng Chen and Jintong Wu


Abstract

Addresses the need for a unified product information model and presents a new representation scheme for mechanical component modelling using shells as its principal geometric primitives for modelling form features. The representation scheme was implemented using the ACIS geometric modeller and C++ on a SUN SPARC/10 station. The advantage of using shells is that both the surface and volume information of a form feature can be derived from a shell, and different levels of product data representation can be integrated into a single model. The scheme therefore allows the user to model both the geometry and the form features of a mechanical part effectively on one system.

Details

Integrated Manufacturing Systems, vol. 5 no. 4/5
Type: Research Article
ISSN: 0957-6061


Article
Publication date: 3 August 2012

Chih‐Fong Tsai and Wei‐Chao Lin


Abstract

Purpose

Content‐based image retrieval suffers from the semantic gap problem: that images are represented by low‐level visual features, which are difficult to directly match to high‐level concepts in the user's mind during retrieval. To date, visual feature representation is still limited in its ability to represent semantic image content accurately. This paper seeks to address these issues.

Design/methodology/approach

In this paper the authors propose a novel meta-feature representation method for scenery image retrieval. In particular, some class-specific distances (namely, meta-features) between low-level image features are measured: for example, the distance between an image and its class centre, and the distances between the image and the nearest and farthest images in the same class.
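The meta-feature idea lends itself to a compact sketch. The distances below (to the class centre, and to the nearest and farthest same-class images) follow the description above, with random vectors standing in for real low-level image features:

```python
import numpy as np

def meta_features(x, class_feats):
    """Class-specific distances ("meta-features") for one feature vector x,
    measured against the feature vectors of the images in its class."""
    d = np.linalg.norm(class_feats - x, axis=1)
    centre = class_feats.mean(axis=0)
    return np.array([
        np.linalg.norm(x - centre),                # distance to the class centre
        d[d > 0].min() if (d > 0).any() else 0.0,  # distance to the nearest image
        d.max(),                                   # distance to the farthest image
    ])

rng = np.random.default_rng(1)
feats = rng.standard_normal((20, 64))  # random stand-ins for one class's features
mf = meta_features(feats[0], feats)
print(mf.shape)  # (3,)
```

The paper measures further class-specific distances beyond these three; this sketch only illustrates the general construction.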

Findings

Three experiments based on 190 concrete, 130 abstract, and 610 categories in the Corel dataset show that the meta‐features extracted from both global and local visual features significantly outperform the original visual features in terms of mean average precision.

Originality/value

Compared with traditional local and global low‐level features, the proposed meta‐features have higher discriminative power for distinguishing a large number of conceptual categories for scenery image retrieval. In addition the meta‐features can be directly applied to other image descriptors, such as bag‐of‐words and contextual features.

Article
Publication date: 27 August 2024

Jingyi Zhao and Mingjun Xin


Abstract

Purpose

The purpose of this paper is to present a method that addresses the data sparsity problem in points of interest (POI) recommendation by introducing spatiotemporal context features based on location-based social network (LBSN) data. The objective is to improve the accuracy and effectiveness of POI recommendations by considering both spatial and temporal aspects.

Design/methodology/approach

To achieve this, the paper introduces a model that integrates the spatiotemporal context of POI records and spatiotemporal transition learning. The model uses graph convolutional embedding to embed spatiotemporal context information into feature vectors. Additionally, a recurrent neural network is used to represent the transitions of spatiotemporal context, effectively capturing the user’s spatiotemporal context and its changing trends. The proposed method combines long-term user preferences modeling with spatiotemporal context modeling to achieve POI recommendations based on a joint representation and transition of spatiotemporal context.
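A toy sketch of the two ingredients named above: a graph-convolutional embedding of spatiotemporal context, followed by a recurrent pass over a check-in sequence. The graph, features and weights are random stand-ins, and the authors' actual architecture is certainly richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)

def gcn_layer(A, X, W):
    """One graph-convolution step: symmetrically normalised adjacency times features."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.tanh(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)

def rnn(seq, Wx, Wh):
    """Vanilla RNN over a sequence of context embeddings (an LSTM/GRU stand-in)."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

n_poi, in_dim, emb_dim, hid = 6, 8, 4, 5
A = rng.integers(0, 2, size=(n_poi, n_poi))
A = np.triu(A, 1); A = A + A.T                # a random symmetric POI context graph
X = rng.standard_normal((n_poi, in_dim))      # spatiotemporal context features
W = rng.standard_normal((in_dim, emb_dim)) * 0.5

emb = gcn_layer(A, X, W)                      # node embeddings, one row per POI
checkins = [0, 3, 5, 2]                       # a toy user's visit sequence
Wx = rng.standard_normal((hid, emb_dim)) * 0.5
Wh = rng.standard_normal((hid, hid)) * 0.5
h = rnn(emb[checkins], Wx, Wh)                # the transition representation
print(emb.shape, h.shape)
```

The final recommendation step (combining this transition state with long-term user preferences) is omitted here.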

Findings

Experimental results demonstrate that the proposed method outperforms existing methods. By incorporating spatiotemporal context features, the approach addresses the issue of incomplete modeling of spatiotemporal context features in POI recommendations. This leads to improved recommendation accuracy and alleviation of the data sparsity problem.

Practical implications

The research has practical implications for enhancing the recommendation systems used in various location-based applications. By incorporating spatiotemporal context, the proposed method can provide more relevant and personalized recommendations, improving the user experience and satisfaction.

Originality/value

The paper’s contribution lies in the incorporation of spatiotemporal context features into POI records, considering the joint representation and transition of spatiotemporal context. This novel approach fills the gap left by existing methods that typically separate spatial and temporal modeling. The research provides valuable insights into improving the effectiveness of POI recommendation systems by leveraging spatiotemporal information.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 13 June 2008

Chih‐Fong Tsai and David C. Yen


Abstract

Purpose

Image classification, or more specifically annotating images with keywords, is one of the important steps during image database indexing. However, current image retrieval research concentrates on how well conceptual categories can be represented by extracted low-level features for effective classification. Consequently, image feature representation, including segmentation and low-level feature extraction schemes, must be genuinely effective to facilitate the process of classification. The purpose of this paper is to examine the effect on annotation effectiveness of using different (local) feature representation methods to map into conceptual categories.

Design/methodology/approach

This paper compares tiling (five and nine tiles) and regioning (five and nine regions) segmentation schemes, and the extraction of combinations of color, texture and edge features, in terms of their effectiveness in a benchmark automatic image annotation set-up. Differences in effectiveness between concrete and abstract conceptual categories or keywords are further investigated, and progress towards establishing a particular benchmark approach is also reported.
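A tiling scheme of this kind is easy to sketch. The exact five-tile layout below (four quadrants plus a centre tile) and the per-tile color feature are assumptions for illustration, not the paper's precise set-up:

```python
import numpy as np

def tile_five(img):
    """An assumed five-tile layout: four quadrants plus a half-size centre tile."""
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    return [
        img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:],
        img[h // 4:h // 4 + h2, w // 4:w // 4 + w2],
    ]

def color_feature(tile):
    """Per-tile color feature: channel means (a stand-in for a color histogram)."""
    return tile.reshape(-1, tile.shape[-1]).mean(axis=0)

rng = np.random.default_rng(3)
img = rng.random((64, 96, 3))  # a random stand-in for an RGB image in [0, 1]
feats = np.concatenate([color_feature(t) for t in tile_five(img)])
print(feats.shape)  # (15,)
```

Regioning schemes differ in that tiles are replaced by segmented regions of arbitrary shape; the feature extraction step is otherwise analogous.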

Findings

In the context of local feature representation, the paper concludes that the combined color and texture features are the best to use for the five tiling and regioning schemes, and this evidence would form a good benchmark for future studies. Another interesting finding (but perhaps not surprising) is that when the number of concrete and abstract keywords increases or it is large (e.g. 100), abstract keywords are more difficult to assign correctly than the concrete ones.

Research limitations/implications

Future work could consider: conducting user-centered evaluation instead of evaluating only against a chosen ground-truth dataset, such as Corel, since this might affect effectiveness results; using different numbers of categories for scalability analysis of image annotation, as well as larger numbers of training and testing examples; using Principal Component Analysis, Independent Component Analysis or machine learning techniques for low-level feature selection; using other segmentation schemes, especially more complex tiling schemes and other regioning schemes; using different datasets, other low-level features and/or combinations of them; and using other machine learning techniques.

Originality/value

This paper is a good start for analyzing the mapping between some feature representation methods and various conceptual categories for future image annotation research.

Details

Library Hi Tech, vol. 26 no. 2
Type: Research Article
ISSN: 0737-8831


Article
Publication date: 12 June 2019

Shantanu Kumar Das and Abinash Kumar Swain


Abstract

Purpose

This paper aims to present the classification, representation and extraction of adhesively bonded assembly features (ABAFs) from the computer-aided design (CAD) model.

Design/methodology/approach

The ABAFs are represented as a set of faces with a characteristic arrangement among the faces of parts in proximity suitable for adhesive bonding. The characteristic combination of the faying surfaces and their topological relationships helps in the classification of ABAFs. The ABAFs are classified into elementary and compound types based on the number of assembly features that exist at the joint location.

Findings

A set of algorithms is developed to extract and identify the ABAFs from the CAD model. Typical automotive and aerospace CAD assembly models have been used to illustrate and validate the proposed approach.

Originality/value

New classification and extraction methods for ABAFs are proposed, which are useful for variant design.

Details

Assembly Automation, vol. 39 no. 4
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 19 December 2023

Jinchao Huang


Abstract

Purpose

Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based on RGBD clothing images often suffer from high-dimensional feature representations, leading to compromised performance and efficiency.

Design/methodology/approach

To address this issue, this paper proposes a novel method called Manifold Embedded Discriminative Feature Selection (MEDFS) to select global and local features, thereby reducing the dimensionality of the feature representation and improving performance. Specifically, by combining three global features and three local features, a low-dimensional embedding is constructed to capture the correlations between features and categories. The MEDFS method designs an optimization framework utilizing manifold mapping and sparse regularization to achieve feature selection. The optimization objective is solved using an alternating iterative strategy, ensuring convergence.
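MEDFS itself couples manifold mapping with sparse regularization; the sketch below keeps only the sparse part, using an l2,1-penalised proximal-gradient loop that zeroes whole rows of the weight matrix and thereby drops features. It is a hedged illustration of the general technique, not the paper's optimization framework:

```python
import numpy as np

def l21_feature_select(X, Y, lam=0.5, lr=0.01, iters=200):
    """Minimise ||XW - Y||^2 / n + lam * ||W||_{2,1} by proximal gradient.
    The l2,1 penalty zeroes whole rows of W, i.e. discards whole features."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        G = X.T @ (X @ W - Y) / n                       # gradient of the data term
        W = W - lr * G
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W = W * np.maximum(1.0 - lr * lam / np.maximum(norms, 1e-12), 0.0)
    return W

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 20))
true_W = np.zeros((20, 3))
true_W[:5] = rng.standard_normal((5, 3))
Y = X @ true_W                       # only the first five features carry signal
W = l21_feature_select(X, Y)
scores = np.linalg.norm(W, axis=1)   # per-feature importance
print(np.argsort(scores)[::-1][:5])  # indices of the selected features
```

The alternating iterative strategy the paper describes would interleave such a sparse update with a manifold-embedding update; only the former is shown here.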

Findings

Empirical studies conducted on a publicly available RGBD clothing image dataset demonstrate that the proposed MEDFS method achieves highly competitive clothing classification performance while maintaining efficiency in clothing recognition and retrieval.

Originality/value

This paper introduces a novel approach for multi-category clothing recognition and retrieval, incorporating the selection of global and local features. The proposed method holds potential for practical applications in real-world clothing scenarios.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 2
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 3 November 2020

Femi Emmanuel Ayo, Olusegun Folorunso, Friday Thomas Ibharalu and Idowu Ademola Osinuga


Abstract

Purpose

Hate speech is an expression of intense hatred. Twitter has become a popular analytical tool for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has witnessed special research attention in recent studies, hence, the need to design a generic metadata architecture and efficient feature extraction technique to enhance hate speech detection.

Design/methodology/approach

This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that includes Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction and Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input into the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language or neither.
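The word-level half of such a hybrid embedding, TF-IDF, can be computed from scratch in a few lines (the LSTM sentence-level half is omitted here); the toy documents are invented for illustration:

```python
import math
from collections import Counter

def tfidf(docs):
    """Word-level TF-IDF vectors (the word-level half of the hybrid embedding)."""
    n = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    vocab = sorted({w for toks in tokenised for w in toks})
    df = Counter(w for toks in tokenised for w in set(toks))
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}  # smoothed IDF
    vectors = []
    for toks in tokenised:
        tf = Counter(toks)
        vectors.append([tf[w] / len(toks) * idf[w] for w in vocab])
    return vocab, vectors

# Invented toy documents for illustration
docs = ["free offensive words here", "kind words here", "free words"]
vocab, vecs = tfidf(docs)
print(len(vocab), len(vecs))  # 5 3
```

Note how a term appearing in every document ("words") gets the minimum weight, while rarer terms are up-weighted, which is the property that makes TF-IDF useful as a word-level feature.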

Findings

The proposed method showed better results when tested on the collected Twitter datasets compared to other related methods. To validate these performances, paired-sample t-tests and post hoc multiple comparisons were used to compare the means of the proposed method with those of other related methods for hate speech detection.

Research limitations/implications

Finally, the evaluation results showed that the proposed method outperforms other related methods, with a mean F1-score of 91.3.

Originality/value

The main novelty of this study is the use of an automatic topic spotting measure based on a naïve Bayes model to improve feature representation.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin


Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches for dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; thus, obtaining effective image feature expression is the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not been systematic. This paper aims to introduce the basic low-level image feature expression techniques for color, texture and shape features that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principle of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used image low-level expression algorithms are implemented, and the benefits and drawbacks are summarized. Third, the effectiveness of the global and local features in image retrieval, including some classical models and their illustrations provided by part of our experiment, are analyzed. Fourth, the sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
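As a concrete instance of the global color expression methods this review covers, a joint RGB histogram is the classic choice; the sketch below quantises each channel and normalises the counts, with a random array standing in for a real image:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Global color feature: a joint RGB histogram, L1-normalised."""
    q = np.clip((img * bins).astype(int), 0, bins - 1)   # quantise each channel
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(5)
img = rng.random((32, 32, 3))   # a random stand-in for an RGB image in [0, 1]
h = color_histogram(img)
print(h.shape)  # (512,)
```

Because the histogram discards all spatial layout, it is a global feature in the sense the review uses; local features would compute descriptors over regions or keypoints instead.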

Findings

The core of this survey is to review the state of low-level image expression methods and study the pros and cons of each method, their applicable occasions and certain implementation measures. This review notes that single-feature descriptions capture only one aspect of an image and may therefore lead to unsatisfactory image retrieval capability, and that they face considerable limitations and challenges in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for image low-level feature expression but also supplies a pertinent reference for those engaging in research regarding image feature extraction.

Article
Publication date: 1 March 2002

Yong Yue, Lian Ding, Kemal Ahmet, John Painter and Mick Walters


Abstract

Computer aided process planning (CAPP) is an effective way to integrate computer aided design and manufacturing (CAD/CAM). There are two key issues with this integration: design input in a feature-based model, and the acquisition and representation of process knowledge, especially empirical knowledge. This paper presents a state-of-the-art review of research in computer integrated manufacturing using neural network techniques. Neural network-based methods can eliminate some drawbacks of the conventional approaches and have therefore attracted research attention, particularly in recent years. The four main issues related to neural network-based techniques, namely the topology of the neural network, input representation, the training method and the output format, are discussed with reference to current systems. The outcomes of research using neural network techniques are studied, and the limitations and future work are outlined.

Details

Engineering Computations, vol. 19 no. 2
Type: Research Article
ISSN: 0264-4401

