Search results

1 – 4 of 4
Article
Publication date: 10 May 2021

Olya Kudina and Mark Coeckelbergh


Abstract

Purpose

This paper aims to show how the production of meaning is a matter of people interacting with technologies, throughout their appropriation and in co-performances. The researchers rely on the case of household-based voice assistants that endorse speaking as a primary mode of interaction with technologies. By analyzing the ethical significance of voice assistants as co-producers of moral meaning intervening in the material and socio-cultural space of the home, the paper invites their informed and critical use as a form of (re-)empowerment while acknowledging their productive role in shaping human values.

Design/methodology/approach

This paper presents an empirically informed philosophical analysis. Using the conceptual frameworks of technological appropriation and human–technological performances, while drawing on interviews with voice assistant users and on literature studies, this paper unravels the meaning-making processes around these technologies in household use. It additionally draws on a Wittgensteinian perspective to attend to the productive role of language and to link it to wider cultural meanings.

Findings

By combining two approaches, appropriation and technoperformances, and analyzing the themes of privacy, power and knowledge, the paper shows how voice assistants help to shape a specific moral subject: one that is embodied in space and constituted as it performatively responds to the device and makes sense of it together with others.

Originality/value

The researchers show how through making sense of technologies in appropriation and performatively responding to them, people can change and intervene in the power structures that technologies suggest.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 2
Type: Research Article
ISSN: 1477-996X


Article
Publication date: 8 November 2023

Miriam Alzate, Marta Arce Urriza and Monica Cortiñas


Abstract

Purpose

This study aims to understand the extent of privacy concerns regarding voice-activated personal assistants (VAPAs) on Twitter. It investigates three key areas: (1) the effect of privacy-related press coverage on public sentiment and discussion volume; (2) the comparative negativity of privacy-focused conversations versus general conversations; and (3) the specific privacy-related topics that arise most frequently and their impact on sentiment and discussion volume.

Design/methodology/approach

A dataset of 441,427 tweets mentioning Amazon Alexa, Google Assistant, and Apple Siri from July 1, 2019 to June 30, 2021 was collected. Privacy-related press coverage was also monitored. Sentiment analysis was conducted using the dictionary-based tools LIWC and VADER, whereas text mining packages in R were used to identify privacy-related issues.
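For readers unfamiliar with the tooling named above, the sketch below illustrates dictionary-based sentiment scoring with the Python implementation of VADER. The example tweets are illustrative placeholders only and are not drawn from the study's dataset or its exact pipeline.

```python
# Minimal sketch of dictionary-based sentiment scoring with VADER
# (pip install vaderSentiment). Example tweets are illustrative only.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

tweets = [
    "Alexa makes my mornings so much easier!",
    "Creepy that my voice assistant is always listening...",
]

for tweet in tweets:
    scores = analyzer.polarity_scores(tweet)
    # 'compound' is a normalized score in [-1, 1]; values below zero
    # indicate negative sentiment, values above zero positive sentiment.
    print(f"{scores['compound']:+.3f}  {tweet}")
```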

Findings

Negative privacy-related news significantly increases both negativity and volume in Twitter conversations, whereas positive news only boosts volume. Privacy-related tweets were notably more negative than general tweets. Specific keywords were found to either increase or decrease sentiment and discussion volume. Additionally, a temporal evolution in sentiment was observed: general attitudes toward VAPAs became more positive, while privacy-specific discussions became more negative.

Originality/value

This research augments the existing online privacy literature by employing text mining methodologies to gauge consumer sentiments regarding privacy concerns linked to VAPAs, a topic currently underexplored. Furthermore, this research uniquely integrates established theories from privacy calculus and social contract theory to deepen our analysis.

Details

Journal of Research in Interactive Marketing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-7122


Article
Publication date: 14 April 2023

Jennifer Huh, Hye-Young Kim and Garim Lee


Abstract

Purpose

This study examines how the locus of agency of brands' artificial intelligence (AI)–powered voice assistants (VAs) could lead to brand loyalty through perceived control, flow and consumer happiness under the moderating influences of brand image and voice congruity.

Design/methodology/approach

This study employed a 2 (locus of agency: high vs. low) by 2 (brand image-voice congruity: congruent vs. incongruent) between-subjects experimental design. MANOVA, ANOVA and structural equation modeling (SEM) were conducted to test the hypothesized model.
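As a rough illustration of how a 2 by 2 between-subjects design of this kind can be analyzed, the sketch below fits a two-way ANOVA with statsmodels. The column names (perceived_control, agency, congruity) and the data values are hypothetical placeholders, not the study's actual measures or results.

```python
# Illustrative two-way ANOVA for a 2 x 2 between-subjects design,
# using pandas and statsmodels. All names and values are placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per participant: the dependent measure plus the two factors.
df = pd.DataFrame({
    "perceived_control": [5.1, 4.2, 3.8, 4.9, 5.4, 3.5, 4.0, 4.7],
    "agency": ["human", "human", "machine", "machine",
               "human", "machine", "machine", "human"],
    "congruity": ["congruent", "incongruent", "congruent", "incongruent",
                  "congruent", "incongruent", "congruent", "incongruent"],
})

# Main effects of each factor plus their interaction.
model = ols("perceived_control ~ C(agency) * C(congruity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```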

Findings

ANOVA results revealed that human-centric (vs. machine-centric) agency led to higher perceived control. The interaction effect was significant, indicating the importance of congruency between brand image and VAs' voices. SEM results confirmed that perceived control predicted brand loyalty, with the effect fully mediated by flow experience and consumer happiness.

Originality/value

This study provides evidence that the positive technology paradigm could carve out a new path in existing literature on AI-powered devices by showing the potential of a smart device as a tool for improving consumer–brand relationships and enriching consumers' well-being.

Details

Journal of Research in Interactive Marketing, vol. 17 no. 5
Type: Research Article
ISSN: 2040-7122


Open Access
Article
Publication date: 10 October 2021

Anisa Aini Arifin and Thomas Taro Lennerfors



Abstract

Purpose

Voice assistant (VA) technology is one of the fastest-growing artificial intelligence applications at present. However, the burgeoning scholarship argues that there are ethical challenges relating to this new technology, not least related to privacy, which affects the technology's acceptance. Given that the media impacts public opinion and acceptance of VAs and that there are no studies on media coverage of VAs, this study focuses on media coverage. In addition, it focuses on media coverage in Indonesia, a country that has been underrepresented in earlier research.

Design/methodology/approach

The authors used critical discourse analysis of media texts, focusing on three levels (text, discourse practice and social practice), to study how VA technology was discussed in the Indonesian context and what power relations frame the representation. In total, 501 articles were collected from seven national media outlets in Indonesia from 2010 to 2020, and the authors particularly focused on the 45 articles that concern ethics.

Findings

The ethical topics covered are gender issues, false marketing, ethical wrongdoing, ethically positive effects, misuse, privacy and security. More importantly, when they are discussed, they are presented as constituting no real critical problem. Regarding discursive practices, the media coverage is highly influenced by foreign media and most of the articles are directed to well-educated Indonesians. Finally, regarding social practices, the authors hold that the government ideology of technological advancement is related to this positive portrayal of VAs.

Originality/value

First, this study provides the first media discourse study of the ethical issues of VAs. Second, it provides insights from a non-Western context, namely Indonesia, which is underrepresented in research on the ethics of VAs.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 1
Type: Research Article
ISSN: 1477-996X

