Search results

1 – 4 of 4
Open Access
Article
Publication date: 27 February 2024

Mehmet Emin Bakir, Tracie Farrell and Kalina Bontcheva

Abstract

Purpose

The authors investigate how COVID-19 has influenced the amount, type or topics of abuse that UK politicians receive when engaging with the public.

Design/methodology/approach

This work covers the first year of COVID-19 in the UK, from March 2020 to March 2021, and analyses Twitter abuse in replies to UK MPs. The authors collected and analysed 17.9 million reply tweets to the MPs. The authors present overall abuse levels during different key moments of the pandemic, analysing reactions to MPs by gender and the relationship between online abuse and topics such as Brexit, the government’s COVID-19 response and policies, and social issues.
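
A minimal, hypothetical sketch of the kind of aggregation such an analysis involves, assuming each reply tweet has already been labelled abusive or not by an upstream classifier; the file and column names are illustrative, not the authors’ actual pipeline:

    # Aggregate abuse levels in MP reply tweets by month and by MP gender.
    # Assumed columns: created_at, mp, gender, is_abusive (0/1 label).
    import pandas as pd

    replies = pd.read_csv("mp_replies.csv", parse_dates=["created_at"])
    replies["month"] = replies["created_at"].dt.to_period("M")

    monthly = replies.groupby("month")["is_abusive"].mean()
    by_gender = replies.groupby(["month", "gender"])["is_abusive"].mean().unstack()

    print(monthly.idxmax())  # month with the highest share of abusive replies
    print(by_gender.tail())  # recent abuse levels split by MP gender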

Findings

The authors have found that abuse levels towards UK MPs were at an all-time high in December 2020. Women (particularly those from non-White backgrounds) receive unusual amounts of abuse, targeting their credibility and capacity to do their jobs. Similar to other large events like general elections and Brexit, COVID-19 has elevated abuse levels, at least temporarily.

Originality/value

Previous studies analysed abuse levels towards MPs in the run-up to the 2017 and 2019 UK General Elections and during the first four months of the COVID-19 pandemic in the UK. The authors compare previous findings with those of the first year of COVID-19, as the pandemic persisted and Brexit was forthcoming. This research not only contributes to the longitudinal comparison of abuse trends against UK politicians but also presents new findings and corroborates, further clarifies and raises questions about previous findings.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-07-2022-0392

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is literally said, making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, carries an undercurrent of irony and is largely dependent on context, all of which make it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper demonstrates sarcasm detection using machine and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
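
The following is a hedged illustration of one of the classical baselines named above (logistic regression over text features) using scikit-learn; the file name, column names and hyperparameters are assumptions, not the authors’ actual configuration:

    # TF-IDF + logistic regression baseline for sarcasm detection.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("sarcasm_comments.csv")  # assumed columns: comment, label
    X_train, X_test, y_train, y_test = train_test_split(
        data["comment"], data["label"], test_size=0.2, random_state=42)

    vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(X_train), y_train)

    print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))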

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of various machine and deep learning models in the task of sarcasm detection using a data set of 1.3 million social media comments.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
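
A minimal sketch of this kind of low-touch transfer learning, using Keras with an ImageNet-pretrained ResNet50 as a stand-in; the architecture, cutout size and training regime below are assumptions, not the authors’ actual setup:

    # Pre-trained CNN with a new binary MOUND/NOT MOUND head; only the head is trained.
    import tensorflow as tf

    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False  # "low-touch": freeze the pre-trained backbone

    inputs = tf.keras.Input(shape=(150, 150, 3))
    x = tf.keras.applications.resnet50.preprocess_input(inputs)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base(x, training=False))
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Cutouts assumed to sit in cutouts/mound/ and cutouts/not_mound/.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "cutouts/", label_mode="binary", image_size=(150, 150), batch_size=32)
    model.fit(train_ds, epochs=5)

    # Tiles would then count as mounds only above a probability threshold,
    # e.g. the 60% used in the findings below.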

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With the identification threshold set at 60% probability, and with the CNN assessing fixed-size tiles, tile-based false negative rates were 95–96% and false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 28 March 2024

Hiep-Hung Pham, Ngoc-Thi Nhu Nguyen, Luong Dinh Hai, Tien-Trung Nguyen and Van An Le Nguyen

Abstract

Purpose

With the advancement of technology, microlearning has emerged as a promising method to improve the efficacy of teaching and learning. This study aims to investigate the document types, volume, growth trajectory, geographic contribution, coauthor relationships, prominent authors, research groups, influential documents and publication outlets in the microlearning literature.

Design/methodology/approach

We adapted the PRISMA guidelines to assess the eligibility of 297 Scopus-indexed documents from 2002 to 2021. Each document was manually labeled by educational level. Descriptive statistics and science mapping were conducted to highlight relevant objects and their patterns in the knowledge base.
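
The following is a hedged sketch of what the descriptive-statistics and science-mapping step can look like in practice; the file name, column names and libraries are assumptions about a typical Scopus export, not the authors’ actual workflow:

    # Growth trajectory, document types and a simple co-authorship network.
    from itertools import combinations

    import networkx as nx
    import pandas as pd

    docs = pd.read_csv("scopus_microlearning.csv")  # assumed: 297 records, 2002-2021

    print(docs.groupby("year").size())                    # volume per year
    print(docs["doc_type"].value_counts(normalize=True))  # e.g. conference-paper share

    # One edge per pair of co-authors on the same document.
    G = nx.Graph()
    for authors in docs["authors"].dropna().str.split(";"):
        G.add_edges_from(combinations([a.strip() for a in authors], 2))

    # Most-connected authors as a first proxy for prominent authors and groups.
    print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10])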

Findings

This study confirms the increasing trend of microlearning publications over the last two decades, with conference papers dominating the microlearning literature (178 documents, 59.86%). Despite global contributions, a concentrated effort from scholars in 15 countries (22.39%) yielded 68.8% of all documents, while the remaining papers were dispersed across 52 other nations (77.61%). Another significant finding is that most documents pertain to three educational level categories: lifelong learning, higher education and all educational levels. In addition, this research highlights six key themes in the microlearning domain, encompassing (1) Design and evaluation of mobile learning, (2) Microlearning adaptation in MOOCs, (3) Language teaching and learning, (4) Workflow of a microlearning system, (5) Microlearning content design and (6) Health competence and health behaviors. Other aspects analyzed in this study include the most prominent authors, research groups, documents and references.

Originality/value

The findings cover topics across all educational levels, offering a comprehensive view of the microlearning knowledge base.

Details

Journal of Research in Innovative Teaching & Learning, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2397-7604
