Search results

1 – 2 of 2
Article
Publication date: 17 May 2024

Cong Doanh Duong, Thi Viet Nga Ngo, The Anh Khuc, Nhat Minh Tran and Thi Phuong Thu Nguyen

Abstract

Purpose

Limited knowledge exists regarding the adverse effects of artificial intelligence adoption, including platforms like ChatGPT, on users’ mental well-being. Drawing on the stressor-strain-outcome paradigm, the current research develops a moderated mediation model to examine how technology anxiety moderates the direct and indirect relationships between compulsive use of ChatGPT, technostress, and life satisfaction.

Design/methodology/approach

Using data from a sample of 2,602 ChatGPT users in Vietnam, the authors employed the PROCESS macro to test the moderated mediation model.

Findings

The findings indicate that compulsive use of ChatGPT had a substantial positive impact on technostress, while technostress negatively influenced life satisfaction. Moreover, although compulsive use of ChatGPT showed no significant direct effect on life satisfaction, it affected life satisfaction indirectly via technostress. Remarkably, technology anxiety significantly moderated both the direct and indirect associations between compulsive use of ChatGPT, technostress, and life satisfaction.

Practical implications

Drawing on these findings, the research offers several practical implications.

Originality/value

The research offers a fresh perspective by applying the stressor-strain-outcome perspective to provide empirical evidence on the moderated mediation effects of technology anxiety and technostress on the relationship between compulsive use of ChatGPT and users’ life satisfaction. The research thus sheds new light on artificial intelligence adoption and its effects on users’ mental health.

Details

Information Technology & People, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 4 September 2023

Amani Alabed, Ana Javornik, Diana Gregory-Smith and Rebecca Casey

Abstract

Purpose

This paper aims to study the role of self-concept in consumer relationships with anthropomorphised conversational artificially intelligent (AI) agents. First, the authors investigate how the self-congruence between consumer self-concept and AI and the integration of the conversational AI agent into consumer self-concept might influence such relationships. Second, the authors examine whether these links with self-concept have implications for mental well-being.

Design/methodology/approach

This study conducted in-depth interviews with 20 consumers who regularly use popular conversational AI agents for functional or emotional tasks. Based on a thematic analysis and an ideal-type analysis, this study derived a taxonomy of consumer–AI relationships, with self-congruence and self–AI integration as the two axes.

Findings

The findings unveil four different relationships that consumers forge with their conversational AI agents, which differ in self-congruence and self–AI integration. Both dimensions are prominent in replacement and committed relationships, where consumers rely on conversational AI agents for companionship and emotional tasks such as personal growth or overcoming past traumas. These two relationships carry well-being risks by changing the expectations that consumers seek to fulfil in human-to-human relationships. Conversely, in the functional relationship, consumers view the conversational AI agent as an important part of their professional performance yet maintain a low sense of self-congruence and distinguish themselves from the agent, partly out of fear of losing their sense of uniqueness and autonomy. Consumers in aspiring relationships rely on their agents for companionship to remedy social exclusion and loneliness, but feel that the agents’ technical limitations prevent this.

Research limitations/implications

Although this study provides insights into the dynamics of consumer relationships with conversational AI agents, it comes with limitations. The sample of this study included users of conversational AI agents such as Siri, Google Assistant and Replika. However, future studies should also investigate other agents, such as ChatGPT. Moreover, the self-related processes studied here could be compared across public and private contexts. There is also a need to examine such complex relationships with longitudinal studies. Moreover, future research should explore how consumers’ self-concept could be negatively affected if the support provided by AI is withdrawn. Finally, this study reveals that in some cases, consumers are changing their expectations related to human-to-human relationships based on their interactions with conversational AI agents.

Practical implications

This study enables practitioners to identify specific anthropomorphic cues that can support the development of different types of consumer–AI relationships and to consider their consequences across a range of well-being aspects.

Originality/value

This research equips marketing scholars with a novel understanding of the role of self-concept in the relationships that consumers forge with popular conversational AI agents and the associated well-being implications.

Details

European Journal of Marketing, vol. 58 no. 2
Type: Research Article
ISSN: 0309-0566
