Search results

1 – 10 of 120
Article
Publication date: 2 November 2023

Khaled Hamed Alyoubi, Fahd Saleh Alotaibi, Akhil Kumar, Vishal Gupta and Akashdeep Sharma

The purpose of this paper is to describe a new approach to sentence representation learning leading to text classification using Bidirectional Encoder Representations from Transformers…

Abstract

Purpose

The purpose of this paper is to describe a new approach to sentence representation learning leading to text classification using Bidirectional Encoder Representations from Transformers (BERT) embeddings. This work proposes a novel BERT-convolutional neural network (CNN)-based model for sentence representation learning and text classification. The proposed model can be used by industries working on text-similarity scoring, sentiment analysis and opinion mining.

Design/methodology/approach

The approach developed is based on using the BERT model to provide distinct features from its transformer encoder layers to CNNs, achieving multi-layer feature fusion. To this end, the distinct feature vectors of the last three layers of the BERT model are passed to three separate CNN layers to generate a rich feature representation that can be used for extracting the keywords in the sentences. For sentence representation learning and text classification, the proposed model is trained and tested on the Stanford Sentiment Treebank-2 (SST-2) data set for sentiment analysis and the Quora Question Pair (QQP) data set for sentence classification. To obtain benchmark results, a selective training approach has been applied with the proposed model.
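As a rough illustration of the multi-layer feature fusion described above (not the authors' implementation), the following PyTorch sketch passes three hidden-state tensors, standing in for the last three BERT encoder layers, through three separate convolutional branches and concatenates the pooled features for classification. All layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiLayerFusionHead(nn.Module):
    """Sketch: fuse the last three BERT encoder layers with separate CNN branches."""
    def __init__(self, hidden=768, n_filters=128, n_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, kernel_size=3, padding=1) for _ in range(3)
        )
        self.classifier = nn.Linear(3 * n_filters, n_classes)

    def forward(self, last_three_layers):
        # last_three_layers: three (batch, seq_len, hidden) tensors
        pooled = []
        for conv, layer in zip(self.convs, last_three_layers):
            feats = torch.relu(conv(layer.transpose(1, 2)))  # (batch, n_filters, seq_len)
            pooled.append(feats.max(dim=2).values)           # global max pooling
        return self.classifier(torch.cat(pooled, dim=1))

# In practice these tensors would come from a BERT model loaded with
# output_hidden_states=True; random tensors stand in here.
layers = [torch.randn(4, 16, 768) for _ in range(3)]
logits = MultiLayerFusionHead()(layers)
```

Each branch sees a different depth of contextualization, which is the intuition behind fusing several encoder layers rather than using only the final one.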

Findings

On the SST-2 data set, the proposed model achieved an accuracy of 92.90%, whereas, on the QQP data set, it achieved an accuracy of 91.51%. For other evaluation metrics such as precision, recall and F1 score, the results are similarly strong. The results with the proposed model are 1.17%–1.2% better than the original BERT model on the SST-2 and QQP data sets.

Originality/value

The novelty of the proposed model lies in the multi-layer feature fusion between the last three layers of the BERT model with CNN layers and the selective training approach based on gated pruning to achieve benchmark results.

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 28 March 2023

Antonijo Marijić and Marina Bagić Babac

Genre classification of songs based on lyrics is a challenging task even for humans; however, state-of-the-art natural language processing has recently offered advanced solutions…

Abstract

Purpose

Genre classification of songs based on lyrics is a challenging task even for humans; however, state-of-the-art natural language processing has recently offered advanced solutions to this task. The purpose of this study is to advance the understanding and application of natural language processing and deep learning in the domain of music genre classification, while also contributing to the broader themes of global knowledge and communication, and sustainable preservation of cultural heritage.

Design/methodology/approach

The main contribution of this study is the development and evaluation of various machine and deep learning models for song genre classification. Additionally, we investigated the effect of different word embeddings, including Global Vectors for Word Representation (GloVe) and Word2Vec, on the classification performance. The tested models range from benchmarks such as logistic regression, support vector machine and random forest, to more complex neural network architectures and transformer-based models, such as recurrent neural network, long short-term memory, bidirectional long short-term memory and bidirectional encoder representations from transformers (BERT).
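To make the benchmark end of the model range above concrete, here is a minimal scikit-learn sketch of a logistic regression genre classifier over TF-IDF features. The tiny corpus and labels are invented for illustration; the study itself uses full lyrics data sets and richer embeddings such as GloVe and Word2Vec.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus; a real experiment would use the paper's lyrics data sets.
lyrics = [
    "ride the lightning through fire and steel",
    "darkness falls upon the battlefield of iron",
    "baby i love you dance with me tonight",
    "my heart beats for you under summer lights",
]
genres = ["metal", "metal", "pop", "pop"]

baseline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
baseline.fit(lyrics, genres)
prediction = baseline.predict(["steel and iron in the fire"])[0]
```

A pipeline like this is the usual starting point before moving to recurrent or transformer models, since it trains in seconds and gives a floor for accuracy comparisons.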

Findings

The authors conducted experiments on both English and multilingual data sets for genre classification. The results show that the BERT model achieved the best accuracy on the English data set, whereas cross-lingual language model pretraining based on RoBERTa (XLM-RoBERTa) performed the best on the multilingual data set. This study found that songs in the metal genre were the most accurately labeled, as their text style and topics were the most distinct from other genres. On the contrary, songs from the pop and rock genres were more challenging to differentiate. This study also compared the impact of different word embeddings on the classification task and found that models with GloVe word embeddings outperformed Word2Vec and the learning embedding layer.

Originality/value

This study presents the implementation, testing and comparison of various machine and deep learning models for genre classification. The results demonstrate that transformer models, including BERT, robustly optimized BERT pretraining approach, distilled bidirectional encoder representations from transformers, bidirectional and auto-regressive transformers and XLM-RoBERTa, outperformed other models.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Keywords

Article
Publication date: 2 August 2022

Zhongbao Liu and Wenjuan Zhao

The research on structure function recognition mainly concentrates on identifying a specific part of academic literature and on its applicability across disciplines…

Abstract

Purpose

The research on structure function recognition mainly concentrates on identifying a specific part of academic literature and on its applicability across disciplines. A specific part of academic literature, such as sentences, paragraphs and chapter contents, is referred to in this paper as a level of academic literature. Few comparative studies have examined the relationship between models, disciplines and levels in the process of structure function recognition. In view of this, comparative research on structure function recognition based on deep learning has been conducted in this paper.

Design/methodology/approach

An experimental corpus, including the academic literature of traditional Chinese medicine, library and information science, computer science, environmental science and phytology, was constructed. Meanwhile, deep learning models such as convolutional neural networks (CNN), long short-term memory (LSTM) and bidirectional encoder representation from transformers (BERT) were used. The comparative experiments of structure function recognition were conducted with the help of the deep learning models from the multilevel perspective.

Findings

The experimental results showed that (1) the BERT model performed best, with F1 values of 78.02, 89.41 and 94.88%, respectively, at the levels of sentence, paragraph and chapter content. (2) The deep learning models performed better on the academic literature of traditional Chinese medicine than on other disciplines in most cases, e.g. the F1 values of CNN, LSTM and BERT reached 71.14, 69.96 and 78.02%, respectively, at the level of sentence. (3) The deep learning models performed better at the level of chapter content than at other levels, with maximum F1 values of 91.92, 74.90 and 94.88% for CNN, LSTM and BERT, respectively. Furthermore, the confusion matrix of recognition results on the academic literature was introduced to find out the reasons for misrecognition.
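The confusion-matrix analysis mentioned above can be sketched as follows with scikit-learn. The labels and predictions below are hypothetical examples of structure-function tags, not the paper's data.

```python
from sklearn.metrics import confusion_matrix, f1_score

# Hypothetical gold and predicted structure-function labels for a few sentences.
labels = ["introduction", "method", "result"]
y_true = ["introduction", "method", "method", "result", "result", "introduction"]
y_pred = ["introduction", "method", "result", "result", "method", "introduction"]

cm = confusion_matrix(y_true, y_pred, labels=labels)  # rows: true, columns: predicted
macro_f1 = f1_score(y_true, y_pred, average="macro")
```

Off-diagonal cells show which structure functions are confused with each other, which is exactly the kind of misrecognition pattern the authors inspect.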

Originality/value

This paper may inspire other research on structure function recognition, and provide a valuable reference for the analysis of influencing factors.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 29 August 2023

Qingqing Li, Ziming Zeng, Shouqiang Sun, Chen Cheng and Yingqi Zeng

The paper aims to construct a spatiotemporal situational awareness framework to sense the evolutionary situation of public opinion in social media, thus assisting relevant…

Abstract

Purpose

The paper aims to construct a spatiotemporal situational awareness framework to sense the evolutionary situation of public opinion in social media, thus assisting relevant departments in formulating public opinion control measures for specific time and space contexts.

Design/methodology/approach

The spatiotemporal situational awareness framework comprises situational element extraction, situational understanding and situational projection. In situational element extraction, the data on the COVID-19 vaccine, including spatiotemporal tags and text contents, is extracted. In situational understanding, the bidirectional encoder representation from transformers – latent Dirichlet allocation (BERT-LDA) and bidirectional encoder representation from transformers – bidirectional long short-term memory (BERT-BiLSTM) models are used to discover the topics and emotional labels hidden in opinion texts. In situational projection, the situational evolution characteristics and patterns of online public opinion are uncovered from the perspectives of time and space through multiple visualisation techniques.
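The topic-discovery half of the BERT-LDA component can be illustrated with plain LDA in scikit-learn (the paper couples it with BERT representations; this sketch shows only the LDA step, on invented posts).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy opinion texts about a vaccine topic; real input would be the crawled posts.
posts = [
    "vaccine appointment booked at the clinic today",
    "clinic queue for the vaccine was very long",
    "worried about side effects after the shot",
    "mild side effects but feeling fine now",
]
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # per-document topic distribution
```

Each row of `doc_topics` is a probability distribution over topics, which is what gets aggregated by time and region in the situational projection step.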

Findings

From the temporal perspective, the evolution of online public opinion is closely related to the developmental dynamics of offline events. In comparison, public views and attitudes are more complex and diversified during the outbreak and diffusion periods. From the spatial perspective, the netizens in hotspot areas with higher discussion volume are more rational and prefer to track the whole process of event development, while the ones in coldspot areas with less discussion volume pay more attention to the expression of personal emotions. From the intertwined spatiotemporal perspective, netizens in different regions and time stages differ in their focus of attention and emotional state, owing to the specific situations they are in.

Originality/value

The situational awareness framework can shed light on the dynamic evolution of online public opinion from a multidimensional perspective, including temporal, spatial and spatiotemporal perspectives. It enables decision-makers to grasp the psychology and behavioural patterns of the public in different regions and time stages and provide targeted public opinion guidance measures and offline event governance strategies.

Details

The Electronic Library, vol. 41 no. 5
Type: Research Article
ISSN: 0264-0473

Keywords

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning…

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using the approach of machine and deep learning.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
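The machine learning side of the comparison above can be sketched by fitting several scikit-learn classifiers over shared TF-IDF features. The four comments are invented stand-ins; the study uses 1.3 million labelled social media comments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hand-made stand-in comments (1 = sarcastic, 0 = not sarcastic).
comments = [
    "oh great, another monday, just what i needed",
    "wow, traffic again, what a lovely surprise",
    "had a nice walk in the park this morning",
    "the new cafe downtown serves great coffee",
]
is_sarcastic = [1, 1, 0, 0]

for model in (LogisticRegression(max_iter=1000), RidgeClassifier(), LinearSVC()):
    pipeline = make_pipeline(TfidfVectorizer(), model)
    pipeline.fit(comments, is_sarcastic)
    train_acc = pipeline.score(comments, is_sarcastic)  # accuracy on the toy data
```

Evaluating every model on the same features isolates the effect of the classifier itself, which is the point of this kind of comparison before moving to BiLSTM or BERT models.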

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely a BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 19 September 2022

Srishti Sharma, Mala Saraswat and Anil Kumar Dubey

Owing to the increased accessibility of internet and related technologies, more and more individuals across the globe now turn to social media for their daily dose of news rather…

Abstract

Purpose

Owing to the increased accessibility of the internet and related technologies, more and more individuals across the globe now turn to social media for their daily dose of news rather than traditional news outlets. With the global nature of social media and hardly any checks on the posting of content, fake news can spread exponentially. Businesses propagate fake news to improve their economic standing and to influence consumers and demand, and individuals spread fake news for personal gains such as popularity and life goals. The content of fake news is diverse in terms of topics, styles and media platforms, and fake news attempts to distort truth with diverse linguistic styles while simultaneously mocking true news. All these factors together make fake news detection an arduous task. This work tries to check the spread of disinformation on Twitter.

Design/methodology/approach

This study carries out fake news detection using user characteristics and tweet textual content as features. For categorizing user characteristics, this study uses the XGBoost algorithm. To classify the tweet text, this study uses various natural language processing techniques to pre-process the tweets and then apply a hybrid convolutional neural network–recurrent neural network (CNN-RNN) and state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) transformer.
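The user-characteristics branch of the approach above can be sketched with a gradient boosting classifier. The sketch uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, and the four features and synthetic labels are hypothetical, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-user features: followers, account age, tweets/day, verified flag.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Synthetic "spreads fake news" label correlated with two of the features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X[:150], y[:150])               # train on the first 150 users
test_acc = clf.score(X[150:], y[150:])  # evaluate on the held-out 50
```

In the paper's setup the boosted model scores user characteristics while CNN-RNN and BERT models handle the tweet text; the two signals are then combined.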

Findings

This study uses a combination of machine learning and deep learning approaches for fake news detection, namely, XGBoost, hybrid CNN-RNN and BERT. The models have also been evaluated and compared with various baseline models to show that this approach effectively tackles this problem.

Originality/value

This study proposes a novel framework that exploits news content and social contexts to learn useful representations for predicting fake news. This model is based on a transformer architecture, which facilitates representation learning from fake news data and helps detect fake news easily. This study also carries out an investigative study on the relative importance of content and social context features for the task of detecting false news and whether absence of one of these categories of features hampers the effectiveness of the resultant system. This investigation can go a long way in aiding further research on the subject and for fake news detection in the presence of extremely noisy or unusable data.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 26 August 2022

William Harly and Abba Suganda Girsang

With the rise of online discussion and argument mining, methods that are able to analyze arguments become increasingly important. A recent study proposed the usage of agreement…

Abstract

Purpose

With the rise of online discussion and argument mining, methods that are able to analyze arguments become increasingly important. A recent study proposed the usage of agreement between arguments to represent both stance polarity and intensity, two important aspects in analyzing arguments. However, that study primarily focused on fine-tuning the bidirectional encoder representations from transformers (BERT) model. The purpose of this paper is to propose a convolutional neural network (CNN)-BERT architecture to improve on the previous method.

Design/methodology/approach

The CNN-BERT architecture used in this paper directly uses the hidden representations generated by BERT. This allows for better use of the pretrained BERT model and makes fine-tuning it optional. The authors then compared the CNN-BERT architecture with the methods proposed in the previous study (BERT and Siamese-BERT).

Findings

Experiment results demonstrate that the proposed CNN-BERT is able to achieve 71.87% accuracy in measuring agreement between arguments. Compared to the previous study, which achieved an accuracy of 68.58%, the CNN-BERT architecture increased accuracy by 3.29 percentage points. The CNN-BERT architecture is also able to achieve a similar result even without further pretraining the BERT model.

Originality/value

The principal originality of this paper is the proposition of using CNN-BERT to make better use of the pretrained BERT model for measuring agreement between arguments. The proposed method improves performance and achieves a similar result even without further training the BERT model. This allows the BERT model to be separated from the CNN classifier, which significantly reduces the model size and allows the same pretrained BERT model to be reused for other problems that do not need a fine-tuned BERT model.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 24 September 2020

Toshiki Tomihira, Atsushi Otsuka, Akihiro Yamashita and Tetsuji Satoh

Recently, with the spread of social networking services and the standardization of emojis in Unicode, the use of emojis has become common. Emojis are most effective…

Abstract

Purpose

Recently, with the spread of social networking services and the standardization of emojis in Unicode, the use of emojis has become common. Emojis are most effective in expressing emotions in sentences. Sentiment analysis in natural language processing usually requires manually labeling sentences with emotions; by using the emojis in text posted on social media as labels, sentiment can be predicted without manual labeling. The purpose of this paper is to propose a new model that learns from sentences using emojis as labels, collecting English and Japanese tweets from Twitter as the corpus. The authors verify and compare multiple models based on attention long short-term memory (LSTM), convolutional neural networks (CNN) and Bidirectional Encoder Representations from Transformers (BERT).

Design/methodology/approach

The authors collected the 2,661 kinds of emoji registered as Unicode characters from tweets using the Twitter application programming interface, a total of 6,149,410 tweets in Japanese. First, the authors visualized the vector space of the emojis produced by Word2Vec and found that emojis and words with similar meanings are adjacent, verifying that emojis can be used for sentiment analysis. Second, tweets containing emojis were used for training and testing, with the emoji in each tweet serving as its label. The authors compared the BERT model with the conventional models [CNN, FastText and attention bidirectional long short-term memory (BiLSTM)] that achieved high scores in the previous study.
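The emoji-as-label idea above can be sketched as a preprocessing step: strip the emoji from each tweet and keep it as the label for the remaining text. The emoji range below covers only a small sample of the emoticon block, not the 2,661 emojis used in the study, and the tweets are invented.

```python
import re

# Matches only the basic emoticon block U+1F600..U+1F64F (illustrative subset).
EMOJI = re.compile("[\U0001F600-\U0001F64F]")

tweets = [
    "today was amazing \U0001F600",
    "i cannot believe this happened \U0001F622",
    "so proud of the team \U0001F600",
]

pairs = []
for tweet in tweets:
    found = EMOJI.findall(tweet)
    if found:
        text = EMOJI.sub("", tweet).strip()
        pairs.append((text, found[-1]))  # text becomes input, emoji becomes label
```

The resulting (text, emoji) pairs are exactly the kind of weakly labelled training data the models learn from, avoiding manual sentiment annotation.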

Findings

Visualizing the vector space produced by Word2Vec, the authors found that emojis and words with similar meanings are adjacent, verifying that emojis can be used for sentiment analysis. The authors obtained higher scores with BERT models than with the conventional models; the experiments demonstrate an improvement over the conventional models in both languages. General emoji prediction is greatly influenced by context, and the score may be lowered by misunderstandings of meaning. By using BERT, which is based on a bidirectional transformer, the authors can take the context into account.

Practical implications

Users can find emojis among the output words when typing a word using an input method editor (IME). Current IMEs only consider the most recently inputted word, whereas the approach in this study makes it possible to recommend emojis that take the context of the inputted sentence into account. Therefore, this research can be used to improve IME performance in the future.

Originality/value

In this paper, the authors focus on multilingual emoji prediction. This is the first attempt to compare emoji prediction between Japanese and English. In addition, it is also the first attempt to use the transformer-based BERT model for predicting a limited set of emojis, although the transformer is known to be effective for various NLP tasks. The authors found that a bidirectional transformer is suitable for emoji prediction.

Details

International Journal of Web Information Systems, vol. 16 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 2 February 2022

Deepak Suresh Asudani, Naresh Kumar Nagwani and Pradeep Singh

Classifying emails as ham or spam based on their content is essential. Determining the semantic and syntactic meaning of words and putting them in a high-dimensional feature…

Abstract

Purpose

Classifying emails as ham or spam based on their content is essential. Determining the semantic and syntactic meaning of words and putting them in a high-dimensional feature vector form for processing is the most difficult challenge in email categorization. The purpose of this paper is to examine the effectiveness of the pre-trained embedding model for the classification of emails using deep learning classifiers such as the long short-term memory (LSTM) model and convolutional neural network (CNN) model.

Design/methodology/approach

In this paper, global vectors (GloVe) and Bidirectional Encoder Representations from Transformers (BERT) pre-trained word embeddings are used to identify relationships between words, which helps to classify emails into their relevant categories using machine learning and deep learning models. Two benchmark datasets, SpamAssassin and Enron, are used in the experimentation.
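A minimal sketch of turning pre-trained word embeddings into fixed-size email features is shown below. The 4-dimensional vectors are hand-made stand-ins; real GloVe vectors are 50- to 300-dimensional and loaded from the published glove.*.txt files.

```python
import numpy as np

# Hand-made "pretrained" vectors standing in for GloVe entries.
embedding = {
    "free":     np.array([0.9, 0.1, 0.0, 0.2]),
    "offer":    np.array([0.8, 0.2, 0.1, 0.1]),
    "meeting":  np.array([0.1, 0.9, 0.3, 0.0]),
    "tomorrow": np.array([0.0, 0.8, 0.4, 0.1]),
}

def email_vector(text, dim=4):
    """Average the embeddings of known words into one fixed-size feature vector."""
    vectors = [embedding[w] for w in text.lower().split() if w in embedding]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

spam_vec = email_vector("FREE offer")
ham_vec = email_vector("meeting tomorrow")
```

Feature vectors built this way (or produced token-by-token for a CNN/LSTM input layer) are what the classifiers in the paper consume in place of raw text.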

Findings

In the first set of experiments, among the machine learning classifiers, the support vector machine (SVM) model performs better than the other machine learning methodologies. The second set of experiments compares deep learning model performance without embedding, with GloVe embedding and with BERT embedding. The experiments show that GloVe embedding can be helpful for faster execution with better performance on large-sized datasets.

Originality/value

The experiments reveal that the CNN model with GloVe embedding gives slightly better accuracy than the model with BERT embedding and than traditional machine learning algorithms for classifying an email as ham or spam. It is concluded that word embedding models improve email classifiers' accuracy.

Details

Data Technologies and Applications, vol. 56 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 17 May 2023

Tong Yang, Jie Wu and Junming Zhang

This study aims to establish a comprehensive satisfaction analysis framework by mining online restaurant reviews, which can not only accurately reveal consumer satisfaction but…

Abstract

Purpose

This study aims to establish a comprehensive satisfaction analysis framework by mining online restaurant reviews, which can not only accurately reveal consumer satisfaction but also identify factors leading to dissatisfaction and further quantify improvement opportunity levels.

Design/methodology/approach

Adopting deep learning, a Cross-Bidirectional Encoder Representations from Transformers (Cross-BERT) model is developed to measure customer satisfaction. Furthermore, an opinion mining technique is used to extract consumers' opinions and obtain dissatisfaction factors, and the opportunity algorithm is introduced to quantify attributes' improvement opportunity levels. A total of 19,133 online reviews of 31 restaurants in Universal Beijing Resort are crawled to validate the framework.
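A common formulation of the opportunity algorithm mentioned above scores each attribute as importance + max(importance − satisfaction, 0) on a 0–10 scale; the paper's exact variant may differ, and the attribute scores below are hypothetical, not the study's measured values.

```python
def opportunity(importance, satisfaction):
    """Opportunity score: importance + max(importance - satisfaction, 0)."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical (importance, satisfaction) scores on a 0-10 scale.
attributes = {
    "dish taste": (9.0, 6.0),
    "waiters' attitude": (8.0, 5.5),
    "decoration": (7.0, 6.5),
}
ranked = sorted(
    attributes.items(), key=lambda kv: opportunity(*kv[1]), reverse=True
)  # attributes with the largest importance-satisfaction gap rank first
```

Attributes that are important but poorly satisfied float to the top of the ranking, which is how improvement opportunity levels guide resource allocation.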

Findings

Results demonstrate the superiority of the Cross-BERT model compared to existing models such as the sentiment lexicon-based model and Naïve Bayes. More importantly, after effectively unveiling customer dissatisfaction factors (e.g. long queuing time and salty taste), “Dish taste,” “Waiters’ attitude” and “Decoration” are identified as the three secondary attributes with the greatest improvement opportunities.

Practical implications

The proposed framework helps managers, especially in the restaurant industry, accurately understand customer satisfaction and the reasons behind dissatisfaction, thereby generating efficient countermeasures. In particular, the improvement opportunity levels help practitioners efficiently allocate limited business resources.

Originality/value

This work contributes to hospitality and tourism literature by developing a comprehensive customer satisfaction analysis framework in the big data era. Moreover, to the best of the authors’ knowledge, this work is among the first to introduce opportunity algorithm to quantify service improvement benefits. The proposed Cross-BERT model also advances the methodological literature on measuring customer satisfaction.

Details

International Journal of Contemporary Hospitality Management, vol. 36 no. 3
Type: Research Article
ISSN: 0959-6119

Keywords

1 – 10 of 120