Search results
1 – 10 of over 15,000

Jenny L. Davis, Daniel B. Shank, Tony P. Love, Courtney Stefanik and Abigail Wilson
Abstract
Purpose
Role-taking is a basic social process underpinning much of the structural social psychology paradigm – a paradigm built on empirical studies of human interaction. Yet today, our social worlds are occupied by bots, voice assistants, decision aids, and other machinic entities collectively referred to as artificial intelligence (AI). The integration of AI into daily life presents both challenges and opportunities for social psychologists. Through a vignette study, the authors investigate role-taking and gender in human-AI relations.
Methodology
Participants read a first-person narrative attributed to either a human or AI, with varied gender presentation based on a feminine or masculine first name. Participants then infer the narrator's thoughts and feelings and report on their own emotions, producing indicators of cognitive and affective role-taking. The authors supplement results with qualitative analysis from two open-ended survey questions.
Findings
Participants score higher on role-taking measures when the narrator is human versus AI. However, gender dynamics differ between human and AI conditions. When the text is attributed to a human, masculinized narrators elicit stronger role-taking responses than their feminized counterparts, and women participants score higher on role-taking measures than men. This aligns with prior research on gender, status, and role-taking variation. When the text is attributed to an AI, results deviate from established findings and in some cases, reverse.
Research Implications
This first study of human-AI role-taking tests the scope of key theoretical tenets and sets a foundation for addressing group processes in a newly emergent form.
Sergio Barile, Clara Bassano, Paolo Piciocchi, Marialuisa Saviano and James Clinton Spohrer
Abstract
Purpose
Technology is revolutionizing the management logic of service systems. The increasing use of artificial intelligence (AI), in particular, is challenging interaction between humans and machines, changing service systems' value co-creation configurations and logic. To envision possible future scenarios, this paper aims to reflect upon how humans' use of AI technology can impact value co-creation.
Design/methodology/approach
The study is developed, at a conceptual level, using selected elements from managerial and marketing theoretical frameworks interested in value co-creation – Service-Dominant Logic, Service Science and Viable Systems Approach (VSA) – used as interpretative tools to reframe value co-creation in the digital age.
Findings
The interpretative approach adopted and, in particular, the new VSA notion of Intelligence Augmentation (IA), seen in the perspective of the information variety model, shed new light on value co-creation in the digital age, framing a possible "IA effect" that can empower value co-creation in complex decision-making contexts.
Practical implications
The study provides insights useful in the design and management of service systems, suggesting a rethinking of the view of AI as a means mainly for increasing the smartness of service systems, and a new focus on enhancing the contribution of human resources to make service systems wiser.
Originality/value
The paper provides a refocused interpretative view of the interaction between humans and AI that looks at a possible positive impact of the use of AI on humans in terms of augmented decision-making capabilities in conditions of complexity.
Ertugrul Uysal, Sascha Alavi and Valéry Bezençon
Abstract
Purpose
Anthropomorphism in Artificial Intelligence (AI)-powered devices is used increasingly frequently in consumer-facing situations (e.g. AI assistants such as Alexa, virtual agents on websites, call/chat bots), and therefore, it is essential to understand anthropomorphism in AI both to understand the consequences for consumers and to optimize firms' product development and marketing. Extant literature is fragmented across several domains and is limited in the marketing domain. In this review, we aim to bring together the insights from different fields and develop a parsimonious conceptual framework to guide future research in marketing and consumer behavior.
Methodology
We conduct a review of empirical articles published until November 2021 in Financial Times Top 50 (FT50) journals as well as in 41 additional journals selected across several disciplinary domains: computer science, robotics, psychology, marketing, and consumer behavior.
Findings
Based on literature review and synthesis, we propose a three-step guiding framework for future research and practice on AI anthropomorphism.
Research Implications
Our proposed conceptual framework informs marketing and consumer behavior domains with findings accumulated in other research domains, offers important directions for future research, and provides a parsimonious guide for marketing managers to optimally utilize anthropomorphism in AI to the benefit of both firms and consumers.
Originality/Value
We contribute to the emerging literature on anthropomorphism in AI in three ways. First, we expedite the information flow between disciplines by integrating insights from different fields of inquiry. Second, based on our synthesis of literature, we offer a conceptual framework to organize the outcomes of AI anthropomorphism in a tidy and concise manner. Third, based on our review and conceptual framework, we offer key directions to guide future research endeavors.
Yupeng Mou, Tianjie Xu and Yanghong Hu
Abstract
Purpose
Artificial intelligence (AI) has a large number of applications at the industry and user levels. However, AI's uniqueness neglect is becoming an obstacle to the further application of AI. Based on the theory of innovation resistance, this paper aims to explore the effect of AI's uniqueness neglect on consumer resistance to AI.
Design/methodology/approach
The authors tested four hypotheses across four studies by conducting lab experiments. Study 1 used a questionnaire to verify the hypothesis that AI's uniqueness neglect leads to consumer resistance to AI; Study 2 focused on the role of human–AI interaction trust as an underlying driver of resistance to medical AI. Studies 3 and 4 provided process evidence by way of a measured moderator, testing whether participants with a greater sense of non-verbal human–AI communication show less consumer resistance to AI.
Findings
The authors found that AI's uniqueness neglect increased users' resistance to AI. This occurs because the uniqueness neglect of AI hinders the formation of interaction trust between users and AI. The study also found that increasing the gaze behavior of AI and increasing the physical distance in the interaction can alleviate the effect of AI's uniqueness neglect on consumer resistance to AI.
Originality/value
This paper explored the effect of AI's uniqueness neglect on consumer resistance to AI and uncovered human–AI interaction trust as a mediator for this effect and gaze behavior and physical distance as moderators for this effect.
Ahmad Arslan, Cary Cooper, Zaheer Khan, Ismail Golgeci and Imran Ali
Abstract
Purpose
This paper aims to specifically focus on the challenges that human resource management (HRM) leaders and departments in contemporary organisations face due to close interaction between artificial intelligence (AI) (primarily robots) and human workers especially at the team level. It further discusses important potential strategies, which can be useful to overcome these challenges based on a conceptual review of extant research.
Design/methodology/approach
The current paper undertakes a conceptual work where multiple streams of literature are integrated to present a rather holistic yet critical overview of the relationship between AI (particularly robots) and HRM in contemporary organisations.
Findings
We highlight that interaction and collaboration between human workers and robots is visible in a range of industries and organisational functions, where both work as team members. This gives rise to unique challenges for the HRM function in contemporary organisations, which must address workers' fear of working with AI, especially fear of future job loss, and the difficult dynamics of building trust between human workers and AI-enabled robots as team members. Alongside these, human workers' task-fulfilment expectations of their AI-enabled robot colleagues need to be carefully communicated and managed by HRM staff, both to maintain the collaborative spirit and to inform future performance evaluations of employees. We found that organisational support mechanisms, such as a facilitating environment, training opportunities and ensuring a viable level of technological competence before organising human workers in teams with robots, are important. Finally, we found that one of the toughest challenges for HRM relates to performance evaluation in teams where humans and AI (including robots) work side by side. We note the lack of existing frameworks to guide HRM managers in this regard and stress the possibility of drawing insights from the computer gaming literature, where performance evaluation models have been developed to analyse human–AI interactions while keeping the context and limitations of both in view.
Originality/value
Our paper is one of the few studies that go beyond a rather general or functional analysis of AI in the HRM context. It specifically focusses on the teamwork dimension, where human workers and AI-powered machines (robots) work together and offer insights and suggestions for such teams' smooth functioning.
One-Ki Daniel Lee, Ramakrishna Ayyagari, Farzaneh Nasirian and Mohsen Ahmadian
Abstract
Purpose
The rapid growth of artificial intelligence (AI)-based voice-assistant systems (VASs) has created many opportunities for individuals to use VASs for various purposes in their daily lives. However, traditional quality success factors, such as information quality and system quality, may not be sufficient in explaining the adoption and use of AI-based VASs. This study aims to propose interaction quality as an additional, yet more important quality measure that leads to trust in an AI-based VAS and its adoption.
Design/methodology/approach
The authors propose a research model that highlights the importance of interaction quality and trust as underlying mechanisms in the adoption of AI-based VASs. Based on survey methodology and data from 221 respondents, the proposed research model is tested with a partial least squares approach.
Findings
The results suggest that interaction quality and trust are critical factors influencing the adoption of AI-based VASs. The findings also indicate that the impacts of traditional quality factors (i.e. information quality and system quality) occur through interaction quality in the context of AI-based VASs.
Originality/value
This research adds interaction quality as a new quality factor to the traditional quality factors in the information systems success model. Further, given the interactive nature of VASs, the authors use social response theory to explain the importance of the trust mechanism when individuals interact with AI-based VASs.
Abstract
Purpose
The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and implementation.
Design/methodology/approach
The research employed two experimental designs and one pilot study to investigate the ethical and moral implications of different levels of AI implementation in the hospitality industry, the intersection of self-congruency and ethical considerations when AI replaces human service providers and the impact of psychological distance associated with AI on individuals' ethical and moral considerations. These research methods included surveys and experimental manipulations to gather and analyze relevant data.
Findings
Findings provide valuable insights into the ethical and moral dimensions of AI implementation, the influence of self-congruency on ethical considerations and the role of psychological distance in individuals’ ethical evaluations. They contribute to the development of guidelines and practices for the responsible and ethical implementation of AI in various industries, including the hospitality sector.
Practical implications
The study highlights the importance of rigorous ethical-moral AI hiring and implementation practices to uphold AI principles and their enforcement in the restaurant industry. It provides practitioners with useful insights into how AI-robotization can improve ethical and moral standards.
Originality/value
The study contributes to the literature by providing insights into the ethical and moral implications of AI service robots in the hospitality industry. Additionally, the study explores the relationship between psychological distance and acceptance of AI-intervened service, which has not been extensively studied in the literature.
Sara H. Hsieh and Crystal T. Lee
Abstract
Purpose
Artificially intelligent (AI) assistant-enabled smart speakers can not only provide assistance by navigating the massive amount of product and brand information on the internet but also facilitate two-way conversations with individuals, thus resembling human interaction. Although smart speakers have substantial implications for practitioners, knowledge of the underlying psychological factors that drive continuance usage remains limited. Drawing on social response theory and the technology acceptance model, this study aims to elucidate the adoption process of smart speakers.
Design/methodology/approach
A field survey of 391 smart speaker users was conducted. Partial least squares structural equation modeling was used to analyze the data.
Findings
Media richness (social cues) and parasocial interactions (social role) are key determinants affecting the establishment of trust, perceived usefulness and perceived ease of use, which, in turn, affect attitude, continuance usage intentions and online purchase intentions through AI assistants.
Originality/value
AI assistant-enabled smart speakers are revolutionizing how people interact with smart products. Studies of smart speakers have mainly focused on functional or technical perspectives. This study is the first to propose a comprehensive model from both functional and social perspectives of continuance usage intention of the smart speaker and online purchase intentions through AI assistants.
Crystal T. Lee, Ling-Yen Pan and Sara H. Hsieh
Abstract
Purpose
This study investigates the determinants of effective human and artificial intelligence (AI) relationship-building strategies for brands. It explores the antecedents and consequences of consumers' interactant satisfaction with communication and identifies ways to enhance consumer purchase intention via AI chatbot promotion.
Design/methodology/approach
Microsoft Xiaoice served as the focal AI chatbot, and 331 valid samples were obtained. A two-stage structural equation modeling-artificial neural network approach was adopted to verify the proposed theoretical model.
Findings
Regarding the IQ (intelligence quotient) and EQ (emotional quotient) of AI chatbots, the multi-dimensional social support model helps explain consumers' interactant satisfaction with communication, which facilitates affective attachment and purchase intention. The results also show that chatbots should emphasize emotional and esteem social support more than informational support.
Practical implications
Brands should focus more on AI chatbots' emotional and empathetic responses than functional aspects when designing dialogue content for human–AI interactions. Well-designed AI chatbots can help marketers develop effective brand promotion strategies.
Originality/value
This research enriches the human–AI interaction literature by adopting a multi-dimensional social support theoretical lens that can enhance the interactant satisfaction with communication, affective attachment and purchase intention of AI chatbot users.
Aihui Chen, Mengqi Xiang, Mingyu Wang and Yaobin Lu
Abstract
Purpose
The purpose of this paper was to investigate the relationships among the intellectual ability of artificial intelligence (AI), cognitive emotional processes and the positive and negative reactions of human members. The authors also examined the moderating role of AI status in teams.
Design/methodology/approach
The authors designed an experiment and recruited 120 subjects who were randomly distributed into one of three groups classified by the upper, middle and lower organization levels of AI in the team. The findings in this study were derived from subjects’ self-reports and their performance in the experiment.
Findings
Regardless of the position held by AI, human members believed that its intelligence level was positively correlated with dependence behavior. However, when the AI and human members were at the same level, the higher the intelligence of the AI, the more likely its direct interaction with team members was to lead to conflicts.
Research limitations/implications
This paper only focuses on human–AI harmony in transactional work in hybrid teams in enterprises. As AI applications permeate, it should be considered whether the findings can be extended to a broader range of AI usage scenarios.
Practical implications
These results are helpful for understanding how to improve team performance in light of the fact that team members have introduced AI into their enterprises in large quantities.
Originality/value
This study contributes to the literature on how the intelligence level of AI affects the positive and negative behaviors of human members in hybrid teams. The study also innovatively introduces “status” into hybrid organizations.