Sara H. Hsieh and Crystal T. Lee
Abstract
Purpose
Artificially intelligent (AI) assistant-enabled smart speakers can not only provide assistance by navigating the massive amount of product and brand information on the internet but also facilitate two-way conversations with individuals, thus resembling human interaction. Although smart speakers have substantial implications for practitioners, knowledge of the underlying psychological factors that drive continuance usage remains limited. Drawing on social response theory and the technology acceptance model, this study aims to elucidate the adoption process of smart speakers.
Design/methodology/approach
Field survey data were obtained from 391 smart speaker users. Partial least squares structural equation modeling was used to analyze the data.
Findings
Media richness (social cues) and parasocial interactions (social role) are key determinants affecting the establishment of trust, perceived usefulness and perceived ease of use, which, in turn, affect attitude, continuance usage intentions and online purchase intentions through AI assistants.
Originality/value
AI assistant-enabled smart speakers are revolutionizing how people interact with smart products. Studies of smart speakers have mainly focused on functional or technical perspectives. This study is the first to propose a comprehensive model, from both functional and social perspectives, of continuance usage intentions for smart speakers and online purchase intentions through AI assistants.
Abstract
Purpose
This study aims to determine how the attitudes toward artificial intelligence (AI) of religious tourists affect their AI self-efficacy and their engagement in AI. This study specifically intends to investigate the mediating role of AI self-efficacy in the relationship between attitudes toward AI and the engagement in AI of religious tourists. This study also seeks to identify the role of AI assistant use as a moderator in the relationship between attitudes toward AI and AI self-efficacy.
Design/methodology/approach
The data used in this study were gathered from a sample of 282 religious tourists who had just visited Karbala, central Iraq. Purposive sampling, a focused and systematic approach to data collection, was used after careful assessment of the distinctive characteristics and properties of the research population.
Findings
The results showed that attitudes toward AI had a noticeable impact on AI self-efficacy, which, in turn, exerted a positive impact on engagement with AI. In addition, the use of AI assistants positively moderated the mediating role of AI self-efficacy in the link between attitudes toward AI and AI engagement.
Originality/value
The distinctive focus on religious tourists adds an original perspective to the existing literature, shedding light on how their attitudes toward AI affect not only their self-efficacy but also their engagement in dealing with AI. In addition, this study delves into the moderating role of AI assistant use, introducing a unique factor in understanding the complex interplay between attitudes, self-efficacy, and engagement in the context of religious tourism. The selection of Karbala, central Iraq, as the study site further adds originality, providing insights into a specific religious and cultural context.
Palima Pandey and Alok Kumar Rai
Abstract
Purpose
The present study aimed to explore the consequences of perceived authenticity in artificial intelligence (AI) assistants and to develop a serial-mediation architecture specifying the causation of loyalty in human–AI relationships. It assessed the predictive power of the developed model based on a training-holdout sample procedure and further mapped and examined the predictors that strengthen loyalty.
Design/methodology/approach
Partial least squares structural equation modeling (PLS-SEM) based on a bootstrapping technique was employed to examine the higher-order effects pertaining to human–AI relational intricacies. The sample comprised 412 AI assistant users belonging to the millennial generation. The PLS-Predict algorithm was used to assess the predictive power of the model, while importance-performance analysis was executed to assess the effectiveness of the predictor variables on a two-dimensional map.
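The bootstrapping step behind such mediation estimates can be illustrated with a minimal NumPy sketch. This is not a full PLS-SEM implementation, and the variable names and simulated data are hypothetical stand-ins for the study's constructs; it shows only the percentile-bootstrap logic for an indirect (a × b) effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 412  # matches the study's sample size

# Simulated illustrative data for a single mediation path:
# authenticity -> quality -> loyalty (coefficients are arbitrary)
authenticity = rng.normal(size=n)
quality = 0.5 * authenticity + rng.normal(scale=0.8, size=n)
loyalty = 0.6 * quality + rng.normal(scale=0.8, size=n)
data = np.column_stack([authenticity, quality, loyalty])

def indirect_effect(d):
    """Product of path coefficients a (X->M) and b (M->Y, controlling X)."""
    x, m, y = d[:, 0], d[:, 1], d[:, 2]
    a = np.polyfit(x, m, 1)[0]                      # slope of M on X
    Xmat = np.column_stack([np.ones(len(d)), x, m])
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0][2]  # slope of Y on M given X
    return a * b

# Percentile bootstrap: resample rows with replacement, recompute a*b
boot = np.array([indirect_effect(data[rng.integers(0, n, n)])
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the usual bootstrap evidence for mediation; PLS-SEM software applies the same resampling idea to the full path model.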
Findings
A positive relationship was found between "Perceived Authenticity" and "Loyalty," which was serially mediated by "Perceived Quality" and "Animacy" in the human–AI relational context. The construct "Loyalty" remained a significant predictor of "Emotional Attachment" and "Word-of-Mouth." The model possessed high predictive power. The mapping analysis delivered a contradictory result, indicating "authenticity" as the most significant predictor of "loyalty" but the least effective on the performance dimension.
Practical implications
The findings of the study may assist marketers to understand the relevance of AI authenticity and examine the critical behavioral consequences underlying customer retention and extension strategies.
Originality/value
The study is the first to introduce a hybrid AI authenticity model and establish its predictive power in explaining the transactional and communal views of human reciprocation in human–AI relationships. It exclusively provides a relative assessment of the predictors of loyalty on a two-dimensional map.
Sara H. Hsieh and Crystal T. Lee
Abstract
Purpose
The growing integration of artificial intelligence (AI) assistants and voice assistants provides a platform for AI to enter consumers’ everyday lives. As these voice assistants become ubiquitous, their widespread adoption underscores the need to understand how to create voice assistants that can naturally interact with and support users. Grounded in the stereotype content model from social psychology, this study aims to investigate the influence of perceived humanness and personality on building trust and continuous usage intentions in voice assistants. Specifically, a fresh perspective examining the determining factors that shape personality trait perceptions of competence and warmth in voice assistants is proposed.
Design/methodology/approach
An online survey of 457 participants was conducted, and structural equation modeling was used to validate the research model.
Findings
Anthropomorphism, social presence and interactivity drive perceived warmth, whereas performance and effort expectations drive perceived competence. Perceived competence and perceived warmth together positively affect users’ trust in voice assistants, leading to a higher likelihood of continuous usage intentions.
Originality/value
This research provides profound theoretical contributions to the emerging field of human-AI interaction and offers practical implications for marketers aiming to leverage voice assistant personalities to build trusted and long-lasting interactions.
Abdul Wahid Khan and Abhishek Mishra
Abstract
Purpose
This study aims to conceptualize the relationship of perceived artificial intelligence (AI) credibility with consumer-AI experiences. With the widespread deployment of AI in marketing and services, consumer-AI experiences are common and an emerging research area in marketing. Various factors affecting consumer-AI experiences have been studied, but one crucial factor, perceived AI credibility, remains relatively underexplored; the authors aim to envision and conceptualize it.
Design/methodology/approach
This study employs a conceptual development approach to propose relationships among constructs, supported by 34 semi-structured consumer interviews.
Findings
This study defines AI credibility using source credibility theory (SCT). The conceptual framework of this study shows how perceived AI credibility positively affects four consumer-AI experiences: (1) data capture, (2) classification, (3) delegation, and (4) social interaction. Perceived justice is proposed to mediate this effect. Improved consumer-AI experiences can elicit favorable consumer outcomes toward AI-enabled offerings, such as the intention to share data, follow recommendations, delegate tasks, and interact more. Individual and contextual moderators limit the positive effect of perceived AI credibility on consumer-AI experiences.
Research limitations/implications
This study contributes to the emerging research on AI credibility and consumer-AI experiences. It offers a comprehensive model with consequences, mechanisms, and moderators to guide future research.
Practical implications
The authors guide marketers with ways to improve the four consumer-AI experiences by enhancing consumers' perceived AI credibility.
Originality/value
This study uses SCT to define AI credibility and takes a justice theory perspective to develop the conceptual framework.
Amani Alabed, Ana Javornik, Diana Gregory-Smith and Rebecca Casey
Abstract
Purpose
This paper aims to study the role of self-concept in consumer relationships with anthropomorphised conversational artificially intelligent (AI) agents. First, the authors investigate how the self-congruence between consumer self-concept and AI and the integration of the conversational AI agent into consumer self-concept might influence such relationships. Second, the authors examine whether these links with self-concept have implications for mental well-being.
Design/methodology/approach
This study conducted in-depth interviews with 20 consumers who regularly use popular conversational AI agents for functional or emotional tasks. Based on a thematic analysis and an ideal-type analysis, this study derived a taxonomy of consumer–AI relationships, with self-congruence and self–AI integration as the two axes.
Findings
The findings unveil four different relationships that consumers forge with their conversational AI agents, which differ in self-congruence and self–AI integration. Both dimensions are prominent in replacement and committed relationships, where consumers rely on conversational AI agents for companionship and for emotional tasks such as personal growth or overcoming past traumas. These two relationships carry well-being risks by changing the expectations that consumers seek to fulfil in human-to-human relationships. Conversely, in the functional relationship, the conversational AI agents are viewed as an important part of one's professional performance; however, consumers maintain a low sense of self-congruence and distinguish themselves from the agent, partly out of fear of losing their sense of uniqueness and autonomy. Consumers in aspiring relationships rely on their agents for companionship to remedy social exclusion and loneliness but feel that the agents' technical limitations prevent this.
Research limitations/implications
Although this study provides insights into the dynamics of consumer relationships with conversational AI agents, it comes with limitations. The sample of this study included users of conversational AI agents such as Siri, Google Assistant and Replika. However, future studies should also investigate other agents, such as ChatGPT. Moreover, the self-related processes studied here could be compared across public and private contexts. There is also a need to examine such complex relationships with longitudinal studies. Moreover, future research should explore how consumers’ self-concept could be negatively affected if the support provided by AI is withdrawn. Finally, this study reveals that in some cases, consumers are changing their expectations related to human-to-human relationships based on their interactions with conversational AI agents.
Practical implications
This study enables practitioners to identify specific anthropomorphic cues that can support the development of different types of consumer–AI relationships and to consider their consequences across a range of well-being aspects.
Originality/value
This research equips marketing scholars with a novel understanding of the role of self-concept in the relationships that consumers forge with popular conversational AI agents and the associated well-being implications.
Mateusz Tomasz Kot and Grzegorz Leszczyński
Abstract
Purpose
Interactions are fundamental for successful relationships and stable cooperation in a business-to-business market. The main assumption in research on interactions, so obvious that it is usually not stated by researchers, is that they occur between humans. The development of artificial intelligence forces a re-examination of this assumption. This paper aims to conceptualize business virtual assistants (BVAs), a type of intelligent agent, as either a boundary object or an actor within business interactions.
Design/methodology/approach
Reference is made to the literature on business interactions, boundary objects and identity attribution to problematize the process of interpretation through which a BVA obtains an identity. The ARA model and the model of the interaction process are used to create a theoretical framework.
Findings
This paper contributes to the literature on business interactions, and to the core of the IMP discussion, in three respects. First, it provides a framework for understanding the phenomenon of an artificial entity as an interlocutor in business interactions; in doing so, a new type of entity, the BVA, is introduced. Second, it explores and augments the concept of a business actor. Third, it calls attention to the BVA as a boundary object. These issues are essential to moving the discussion about the meaning of business interaction forward in the near future.
Originality/value
This paper conceptualizes the presence of a new entity – BVA – in the business landscape.
Ansgar Zerfass, Jens Hagelstein and Ralph Tench
Abstract
Purpose
Artificial intelligence (AI) might change the communication profession immensely, but the academic discourse lacks an investigation of practitioners' perspectives on this. This article addresses that research gap. It offers a literature overview and reports on an empirical study of AI in communications, presenting first insights into how professionals in the field assess the technology.
Design/methodology/approach
A quantitative cross-national study among 2,689 European communication practitioners investigated four research questions: RQ1 – How much do professionals know about AI and to what extent are they already using AI technologies in their everyday lives? RQ2 – How do professionals rate the impact of AI on communication management? RQ3 – Which challenges do professionals identify for implementing AI in communication management? RQ4 – Which risks do they perceive?
Findings
Communication professionals revealed a limited understanding of AI and expected the technology to affect the profession as a whole more than the way their organisations or they themselves work. A lack of individual competencies, organisations struggling with different levels of competency, and unclear responsibilities were identified as key challenges and risks.
Research limitations/implications
The results highlight the need for communication managers to educate themselves and their teams about the technology and to identify the implementation of AI as a leadership issue.
Originality/value
The article offers the first cross-national quantitative study on AI in communication management. It presents valuable empirical insights on a trending topic in the discipline, highly relevant for both academics and practitioners.
Ziheng Wang, Jiachen Wang, Chengyu Tian, Ahsan Ali and Xicheng Yin
Abstract
Purpose
As the role of AI on human teams shifts from a tool to a teammate, the implementation of AI teammates into knowledge-intensive crowdsourcing (KI-C) contest teams represents a forward-thinking and feasible solution to improve team performance. Since contest teams are characterized by virtuality, temporality, competitiveness, and skill diversity, the human-AI interaction mechanism underlying conventional teams is no longer applicable. This study empirically analyzes the effects of AI teammate attributes on human team members’ willingness to adopt AI in crowdsourcing contests.
Design/methodology/approach
A questionnaire-based online experiment was designed to collect behavioral data. We obtained 206 valid anonymized samples from 28 provinces in China. An ordinary least squares (OLS) model was used to test the proposed hypotheses.
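An OLS hypothesis test of this kind can be sketched in plain NumPy. The predictor names and simulated data below are illustrative assumptions, not the study's actual measures; the sketch only shows the estimator itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 206  # matches the study's valid sample count

# Hypothetical AI-teammate attributes predicting willingness to adopt AI
transparency = rng.normal(size=n)
explainability = rng.normal(size=n)
willingness = (1.0 + 0.4 * transparency + 0.3 * explainability
               + rng.normal(scale=0.5, size=n))

# OLS: solve min ||X beta - y||^2 via a numerically stable least-squares fit
X = np.column_stack([np.ones(n), transparency, explainability])
beta, *_ = np.linalg.lstsq(X, willingness, rcond=None)
print("intercept, b_transparency, b_explainability:", np.round(beta, 2))
```

In practice a statistics package would also report standard errors and p-values for each coefficient; the point estimates here recover the simulated effects.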
Findings
We find that trust mediates the effects of AI teammates' transparency and explainability on human team members' willingness to adopt AI. Due to the different tendencies exhibited by members with regard to three types of cognitive load, nonlinear U-shaped relationships are observed among explainability, cognitive load, and willingness to adopt AI.
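A U-shaped relationship of this kind is commonly tested by adding a quadratic term to the regression. The following sketch uses simulated data with hypothetical variable names, purely to illustrate the quadratic-term test, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 206

# Simulated U-shape: willingness is lowest at a moderate cognitive load
load = rng.uniform(-2, 2, size=n)
willingness = 0.5 * load**2 - 0.1 * load + rng.normal(scale=0.3, size=n)

# Fit y = b2*x^2 + b1*x + b0; a significantly positive b2 indicates a U shape
b2, b1, b0 = np.polyfit(load, willingness, 2)
turning_point = -b1 / (2 * b2)  # load level at the bottom of the U
print(f"quadratic coef: {b2:.2f}, turning point: {turning_point:.2f}")
```

A full test would also check that the turning point lies inside the observed data range, so the relationship is genuinely U-shaped rather than merely convex at one end.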
Originality/value
We provide design ideas for human-AI team mechanisms in KI-C scenarios, and rationally explain how the U-shaped relationship between AI explainability and cognitive load emerges.