A study of employee acceptance of artificial intelligence technology

Youngkeun Choi (Sangmyung University, Seoul, Korea)

European Journal of Management and Business Economics

ISSN: 2444-8494

Article publication date: 30 July 2021

Issue publication date: 20 September 2021

Abstract

Purpose

This study aims to reveal the role of artificial intelligence (AI) in the context of a front-line service encounter to understand how users accept AI technology-enabled service.

Design/methodology/approach

This study collected data from 454 Korean employees through an online survey and used hierarchical regression to test the hypotheses empirically.

Findings

First, the clarity of the user's and the AI's roles, the user's motivation to adopt AI-based technology and the user's ability in the context of adopting AI-based technology increase the willingness to accept AI technology. Second, privacy concerns related to the use of AI-based technology weaken the relationship between role clarity and the user's willingness to accept AI technology, while trust related to the use of AI-based technology strengthens the relationship between ability and the user's willingness to accept AI technology.

Originality/value

This study is the first to reveal the role of AI in the context of a front-line service encounter to understand how users accept AI technology-enabled service.


Citation

Choi, Y. (2021), "A study of employee acceptance of artificial intelligence technology", European Journal of Management and Business Economics, Vol. 30 No. 3, pp. 318-330. https://doi.org/10.1108/EJMBE-06-2020-0158

Publisher


Emerald Publishing Limited

Copyright © 2020, Youngkeun Choi

License

Published in European Journal of Management and Business Economics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Employee self-service (ESS) technology is currently an open innovation of particular interest in the human resource management context because of anticipated cost savings and other efficiency-related benefits (Giovanis et al., 2019; van Tonder et al., 2020). It is a class of web-based technology that allows employees and managers to conduct much of their own data management and transaction processing rather than relying on human resource (HR) or administrative staff to perform these duties (Marler and Dulebohn, 2005). ESS technology can allow employees to update personal information, change their benefits selections or register for training. Shifting such duties to the individual employee enables the organization to devote fewer specialized resources to these activities, often allowing HR to focus on more strategic functions. Despite the intended benefits, the implementation of ESS technology poses many challenges. Because ESS technology functionality is typically not associated with the core functions of professional employees' jobs, these employees may be less motivated to learn and use the ESS technology (Brown, 2003; Marler and Dulebohn, 2005). However, the full adoption of ESS technology is necessary to realize the intended benefits and recoup the significant investments in technology. The history of technology has shown that there is much hype about new technologies, and after the initial inflated expectations, the trough of disillusionment usually follows (Gartner, 2016). Due to trade press and social media posts extolling the virtues of new technologies, managers are keen to jump on a new technology rollercoaster and adopt technological solutions without considering whether they are worth the effort and justify their mystique/novelty.

Artificial intelligence (AI) is an example of a technology that receives much attention worldwide in the media, academia and politics (Zhai et al., 2020; Dhamija and Bag, 2020). However, attitudes toward AI range from positive assessments of relief from physical labor and new business opportunities (Frank et al., 2017) to a fear of humans being made obsolete in a fully robotic society (Leonhard, 2016). Therefore, it is essential to understand the drivers of AI-based ESS acceptance to increase the chances of success when introducing AI-based ESS. However, few researchers have examined how employees adopt AI-based ESS.

To address this research gap, this study takes a closer look at employees' perspectives on how and why they embrace a narrow, business-oriented AI application in service encounters. It presents a conceptual framework based on previous reviews, practices and theories to identify the role of AI in the context of service encounters and to explain employee acceptance of AI in service research. This framework extends conventional self-service technology acceptance theories to include AI-specific variables such as privacy concerns and trust. A process model organizing the salient variables contributing to employee reactions to the introduction of technology into the service encounter is proposed, and hypotheses testing the relationships among these variables are developed. The study concludes with research issues related to the framework that can serve as catalysts for future research. It is the first study to reveal the role of AI in a front-line service encounter to understand how users accept services based on AI technology.

2. Theoretical background and hypothesis development

In the service sector, this study focuses on understanding and theoretically explaining user acceptance of AI. Previous studies have empirically investigated the antecedents of self-service technology (SST) adoption and include the critical variables of this theoretical framework in the model (Wu and Wu, 2019; Wang et al., 2019; Kelly et al., 2019). In Meuter et al.'s framework, user adoption of SST rests on role clarity (Does the user know how to use the SST and what to do?), motivation (What induces the user to try the SST?) and ability (Does the user have the resources and capability to use the SST?). This core configuration is influenced by the nature of the technology itself and by individual differences among users. Later, a meta-analysis of SST acceptance explained the complexity of the variables affecting SST acceptance (Blut et al., 2016). Beyond what is already known about SST acceptance, this study argues that the acceptance of AI in service encounters depends on AI-specific variables other than those traditionally studied in SST research. This set of variables includes privacy concerns, trust in the technology and the company, and perceived fear of the technology.

2.1 Core construct

Unlike SST, AI-based technology can also act as an independent agent, whether or not users are aware of the AI's behavior (Hoffman and Novak, 2017; Upadhyay and Khandelwal, 2019). For example, Google's spam filter, one of AI's first applications, detects and blocks 99.9% of spam and phishing messages without user input (Lardinois, 2017). Facebook recently introduced an AI-based suicide prevention tool that provides support to users who express suicidal thoughts, such as suggesting they contact friends, family members or helplines, and providing information on available help resources (Rosen, 2017). The concept of role clarity should therefore be expanded to include clarity about the roles of both the user and the AI in the service process. When interacting with AI-enabled technology, users need to understand that both sides contribute to jointly producing the service. Role clarity matters from two perspectives: (1) establishing the sharing of responsibility in joint service production and (2) promoting user confidence in the technology through transparency.

It is up to the two actors (user and AI) to perform their parts as designed to achieve the desired service results. Role clarity is essential to ensure the successful integration of AI inputs with user inputs. It ensures that the user understands which steps the AI performs in the service delivery process so that service performance is seamless. Misunderstanding or lack of role clarity can have undesirable and tragic consequences when the stakes are exceptionally high. For example, the 2013 Asiana Airlines crash in San Francisco was a disastrous result of insufficient role clarity: the pilot, relying on the plane's automation, expected the automatic throttle system to come out of its idle position on its own as the plane began to lose speed. Users can be involved with AI yet lack role clarity, as when AI appears in contexts such as self-driving cars: which activities does the AI-enabled vehicle carry out, and what does the user do? Role clarity can also signal transparency about the nature of an encounter, which forms the basis of trust (Hengstler et al., 2016). Because AI can act as an independent agent, the level of transparency about the AI's role in an encounter can affect user confidence in the technology. Failure to fully disclose the role of the AI agent and its behavior during and after the encounter may erode user confidence in the technology and the service provider.

Therefore, role clarity can include questions about the data that the AI collects during its interactions and how it uses those data during and after the encounter. Amazon made news when a criminal investigation ordered it to submit audio recordings made by a personal Echo device as evidence (Heater, 2017). Many users were surprised to learn that Alexa recorded and stored audio even when the device's owner had not activated it. Unroll.me, a free service that helps users unsubscribe from email subscription lists, is another example of a lack of transparency that caused a user backlash: users were angry to learn that Unroll.me was scanning their email and selling data to third parties (Isaac and Lohr, 2017). Such cases, in which users lack clarity about the AI's role, raise concerns about data privacy and create barriers to the adoption of AI-based technology.

P1.

Clarity of user and AI's roles is positively associated with the user's willingness to accept AI technology.

AI-based technology improves convenience, efficiency and service speed, providing tremendous value to users and increasing user motivation to embrace, adopt and use those technologies. Whether Alexa updates users with related news or Google Assistant notifies them about upcoming meetings and provides travel-time estimates based on actual traffic data, the information is readily available. AI continuously learns from the interaction data these products collect and develops the ability to meet individual needs. For example, users can program an initial schedule into the Nest thermostat, but once Nest gains insight into their habits and identifies relevant behavioral patterns, it takes independent measures to fine-tune that schedule, optimizing energy efficiency while respecting their temperature preferences. While AI-based technology can help users perform useful tasks, unlike most SSTs it can also be a source of pleasure and enjoyment, providing hedonic value. Think of Microsoft's XiaoIce, a chatbot app that mimics human interaction through jokes and favorite songs with the intention of becoming people's virtual friend. Ever since XiaoIce was introduced in China, the friendly chatbot has captivated millions of Chinese users (Markoff and Mozur, 2015). Cognitive absorption (Agarwal and Karahanna, 2000; Lowry et al., 2013) is an essential intrinsic-motivation variable in the context of hedonic technology adoption, which explains why the chatbot so attracts XiaoIce's users.

P2.

User's motivation to adopt AI-based technology is positively associated with the user's willingness to accept AI technology.

In the SST framework, ability refers to users' capacity to perform the steps involved in their interaction with the SST. This construct needs to be expanded in the context of an AI-enabled service encounter. For example, voice-assisted AI devices can eliminate technology barriers, making it easier to interact with technology regardless of the user's technical capability. At the same time, users may evaluate the degree to which AI-enabled technology enhances or restricts their capabilities in the service experience. For example, users can consider AI an extension of their cognitive or physical abilities that improves service performance by integrating human and AI capabilities (Wilson and Daugherty, 2018). AI has the potential to democratize services by making them easier to use, but the reverse can also hold: a lack of technical expertise or adequate financial resources may prevent users from accessing AI-based technologies, limiting adoption. For example, PwC's Global Consumer Insights Survey recently showed that early AI adopters tend to be more tech-savvy and less price-sensitive than non-adopters (PwC's Global Consumer Insights Survey, 2018).

P3.

User's ability in the context of the adoption of AI-based technology is positively associated with the user's willingness to accept AI technology.

2.2 AI-specific moderators

Compare the success of Microsoft's XiaoIce with the failure of Tay, Microsoft's US-based chatbot launched as a social bot on Twitter. Tay had to be shut down shortly after launch because, through interaction with other Twitter users, it quickly began producing divisive, politically and racially charged content (Hunt, 2016). Tay's failure and XiaoIce's success demonstrate the importance of the amount and quality of data collected in interactions for training AI and achieving high levels of AI performance. Users weigh sharing their personal information against the benefits of personalization, leading to the personalization-privacy paradox (Lee and Rha, 2016). By limiting privacy disclosure, users seek the right balance between maximizing the benefits of personalization and minimizing privacy risks. According to Genpact's study of 5,000 respondents in the United States, UK and Australia, privacy issues are one of the most significant obstacles to user adoption of AI-based solutions (Genpact, 2017). More than 50% of the survey participants said they felt uncomfortable with the idea of companies using AI to access their personal data.

Moreover, 71% said they did not want companies to use AI if it violated their privacy, even if the user experience improved (Genpact, 2017). At the same time, studies have shown that privacy considerations and awareness of privacy risks undermine users' willingness to use personalized services, although the value of personalized services may outweigh privacy concerns (Awad and Krishnan, 2006). According to a study by Lee and Rha (2016) on location-based mobile commerce, increasing confidence in service providers can help alleviate users' awareness of privacy risks. Thus, privacy concern is an essential factor affecting user acceptance of AI-based technologies.

P4.

Privacy concerns related to the use of AI-based technology weaken the relationship between core constructs and the user's willingness to accept AI technology.

User confidence in AI-based technology can be understood through existing research on automation and human-machine interaction. Concerning automation, Lee and See (2004) define trust as the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability. Both the social psychology and the marketing literature identify uncertainty and vulnerability as essential attributes that activate trust in interpersonal and organizational relationships: when the user cannot control the actions of a service provider during a service encounter, vulnerability arises from uncertainty, and the outcomes of the encounter directly affect the user. Trust is especially important in the early stages of a relationship, such as the adoption of a new technology, when the situation is ambiguous and uncertain. According to Lee and See (2004), trust bridges the gap between the nature of automation and the individual's beliefs about its function, on the one hand, and the individual's intention to use and rely on it, on the other. Concerning e-commerce, Pavlou (2003) distinguishes between trust in the supplier and trust in the trading medium. This differentiation also applies in the context of AI-enabled service encounters: trust in the service provider and in the specific AI technology will both contribute to user confidence in AI-enabled services (Flavian et al., 2019; Hernandez-Fernandez and Lewis, 2019; Parra-Lopez et al., 2018). Mayer et al. (1995) identified three key factors that determine the trustworthiness of an organization: ability, integrity and benevolence. Ability represents the domain-specific expertise, skills and competencies associated with the service interaction.

Integrity concerns whether the user can identify and accept the principles that the provider follows. Benevolence relates to the alignment between the provider's and the user's motives and intentions. Recent events involving Facebook and Cambridge Analytica revealed a lack of integrity and benevolence in the eyes of Facebook users, whose data were collected without disclosure and who had not recognized Facebook's business model (Rosenberg and Frenkel, 2018). This caused a sharp drop in public confidence in Facebook (Weisbaum, 2018). In the context of automation, Lee and See (2004) identify performance, process and purpose as the bases of trust. Performance is similar to ability and represents the functionality of the technology: whether it performs in a reliable, predictable and capable manner. Process refers to the extent to which AI-enabled technologies are suitable for the service encounter and can achieve the user's goals. Users will evaluate the service provider's ability, integrity and benevolence, and their experience before, during and after the encounter against the performance, process and purpose of the AI-enabled technology. These factors will contribute to the overall level of confidence in a new AI-enabled service. The robustness or fragility of trust depends on the number of cues the user recognizes as trustworthy (McKnight et al., 1998). Regarding the adoption of AI-based solutions in B2B services, Hengstler et al. (2016) found that transparency in the development process and the gradual introduction of technology are important strategies for increasing confidence in an innovation. Companies may be better off introducing new capabilities gradually, in a series of steps that engage users' curiosity and desire for novelty, instead of doing so in one big leap that may alarm users and come across as too great a departure from more traditional service delivery alternatives.

P5.

User's trust in AI-based technology strengthens the relationship between core constructs and the user's willingness to accept AI technology.

3. Methodology

3.1 Sample and data collection

This study adopted an online survey method using convenience sampling for data collection. This approach is instrumental in collecting data from a large number of individuals in a relatively short time and at a lower cost. The survey company approached the target companies and, with their agreement, acquired employees' email addresses through their human resource management departments.

The professional survey company initially contacted 11 employees in the target companies in Korea. Each first-level contact (or "sampling seed") was asked to forward the invitation email to colleagues at their organization and to ask those recipients also to send the email to other staff. The potential maximum number of recipients could be assumed to include all employees of the target companies, which numbered over 500 at that time. The seeds of this respondent-driven sampling method (also known as snowball sampling) were diverse in demographic characteristics. However, this method has been challenged because of possible self-selection bias, or bias that may arise when the topic of the survey is controversial or when differences in the size of social networks are a factor. None of these reported biases was deemed to apply to the focus of the present study.

According to the social research methodology literature, the response rate is less critical as long as the representativeness of the sample is secured, although some prerequisites apply. Since this study used a snowball method, the survey was designed to end when 500 people, 3% of the target companies' employees, had responded. This was considered reasonable given the survey budget and sample size.

To increase the response rate and reduce non-response bias, the professional survey company automatically gave respondents an electronic coffee-voucher gift card after they completed the survey, which ran for one month from January 1 to 31, 2019. All participants received an email explaining the purpose of the survey, emphasizing voluntary participation, assuring confidentiality and inviting them to the online survey. Upon completing the survey, participants received the electronic coffee-voucher gift card as a token of participation in the study. Of the initial pool of participants surveyed, 500 individuals returned completed surveys, yielding a response rate of 100%. After the deletion of surveys with (1) no code identifiers or (2) an excessive number of missing values, this study was left with a final sample of 454.

The participants are Korean; 47.6% are men and 52.4% are women. By age, 24.1% are in their 20s, 25.7% in their 30s, 25.4% in their 40s and 24.8% in their 50s. Regarding marital status, 41.2% are unmarried and 48.8% are married. By occupation, 66.8% work in office roles and 33.2% in research and development. Their education levels include middle school (0.6%), high school (16.3%), community college (21.0%), undergraduate (51.4%) and graduate school (10.7%). Annual income includes under 30,000 USD (27.1%), 30,000-50,000 USD (46.3%) and 50,000-100,000 USD (26.6%).

3.2 Survey instrument

The survey instrument used in this study consisted of two sections: demographic information and main questions. The demographic information section asked about gender, age, marital status, occupation, education and income. Among the main questions, role clarity has five items adapted from Rizzo et al. (1970). Extrinsic motivation has six items and intrinsic motivation has six items adapted from Tyagi (1985). Ability has six items adapted from Jones (1986) and Oliver and Bearden (1985). The measures for privacy risk were adapted from Chellappa and Sin (2005) and Xu et al. (2011), using four questions concerning the perceived risks of providing personal information for the use of AI. Trust has three items adapted from Jarvenpaa et al. (1999). Willingness to accept AI technology has three items adapted from Venkatesh et al. (2012) and Lu et al. (2019). All responses were measured on five-point Likert scales.

4. Analysis result

4.1 Verification of reliability and validity

The validity of the variables was verified through principal components factor analysis with varimax rotation. The criterion for determining the number of factors was an eigenvalue of 1.0, and only items with factor loadings greater than 0.5 were retained for analysis (a factor loading represents the correlation between an item and a factor). The reliability of the variables was judged by internal consistency, as assessed by Cronbach's alpha; each scale was treated as a single measure only if its Cronbach's alpha was 0.7 or higher. The values are role clarity (0.86), extrinsic motivation (0.77), intrinsic motivation (0.81), ability (0.80), privacy concerns (0.74), trust (0.79) and willingness to accept AI technology (0.79).
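The internal-consistency check above can be reproduced from raw item responses. The sketch below, assuming a hypothetical (respondents × items) matrix of Likert scores, implements the standard Cronbach's alpha formula in NumPy; it is an illustration, not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of
    Likert-scale responses: k/(k-1) * (1 - sum(item variances)/variance(scale total))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

A scale whose items all move together yields an alpha near 1; adding noise to the items lowers it, and the 0.7 cutoff used in the paper is the conventional acceptability threshold.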

4.2 Common method bias

As with all self-reported data, there is potential for common method variance (CMV) (MacKenzie and Podsakoff, 2012; Podsakoff et al., 2003). To alleviate and assess the magnitude of common method bias, this study adopted several procedural and statistical remedies suggested by Podsakoff et al. (2003). First, during the survey, respondents were guaranteed anonymity and confidentiality to reduce evaluation apprehension. Further, this study paid careful attention to the wording of the items and developed the questionnaire carefully to minimize item ambiguity. These procedures make respondents less likely to edit their answers to be more socially desirable, acquiescent or consistent with how they think the researcher wants them to respond (Podsakoff et al., 2003). Second, this study conducted Harman's one-factor test on all of the items. A principal component factor analysis revealed that the first factor explained only 34.1% of the variance. Thus, no single factor emerged, nor did one factor account for most of the variance.

Furthermore, the measurement model was reassessed with the addition of a latent CMV factor (Podsakoff et al., 2003). All indicator variables in the measurement model were loaded on this factor. Adding the common variance factor did not improve the fit over the measurement model without it, and all indicators remained significant. These results suggest that CMV is not a great concern in this study.
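Harman's one-factor test amounts to checking how much of the total variance the first unrotated principal component explains. A minimal NumPy sketch on a hypothetical item matrix (not the authors' code):

```python
import numpy as np

def first_factor_share(items: np.ndarray) -> float:
    """Proportion of total variance explained by the first unrotated
    principal component of the item correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    return float(eigvals[0] / eigvals.sum())
```

A share well below 0.5, like the 34.1% reported above, indicates that no single factor dominates the covariance among the items.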

4.3 Relationship between variables

Table 1 summarizes the Pearson correlation results between the variables and reports the degree of multicollinearity among the independent variables. Role clarity (β = 0.021, p < 0.01), extrinsic motivation (β = 0.011, p < 0.01), intrinsic motivation (β = 0.012, p < 0.01), ability (β = 0.012, p < 0.01), privacy concerns (β = −0.111, p < 0.01) and trust (β = 0.042, p < 0.01) are significantly associated with willingness to accept AI technology. The minimum tolerance of 0.812 and the maximum variance inflation factor of 1.231 show that the statistical significance of the data analysis was not compromised by multicollinearity.
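The tolerance and variance inflation factor (VIF) diagnostics reported above can be computed by regressing each predictor on the remaining ones; tolerance is simply 1/VIF. A hedged NumPy sketch on hypothetical predictor data (not the authors' code):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF for each column of predictor matrix X: 1 / (1 - R^2) from
    regressing that column (with intercept) on all other columns."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```

With a maximum VIF of 1.231 (tolerance 0.812), the predictors here are far below the common rule-of-thumb concern threshold of 10.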

4.4 Hypothesis testing

This study used hierarchical multiple regression analyses in SPSS 24.0, with three steps, to test the hypotheses. In the first step, demographic variables were entered as controls. The independent variables were entered in the second step. In the final step, the multiplicative interaction terms between the independent variables and the moderating variables were entered to test the moderation hypotheses directly. Table 2 shows the results. First, among the demographic variables, being male (β = 0.043, p < 0.05) is positively related to the willingness to accept AI technology, and age (β = −0.048, p < 0.05) is negatively related to it. Second, regarding the relationship between the independent variables and the willingness to accept AI technology, model 2 in Table 2 shows that the independent variables are statistically significant. Role clarity (β = 0.031, p < 0.01) is positively related to willingness to accept AI technology. Extrinsic motivation (β = 0.019, p < 0.01) and intrinsic motivation (β = 0.008, p < 0.01) have positive relationships with willingness to accept AI technology. Ability (β = 0.017, p < 0.01) shows a positive association with willingness to accept AI technology. Therefore, P1–P3 are supported.

Lastly, model 3, which adds the moderators, shows the interactions between the independent variables and the moderating variables on willingness to accept AI technology. Privacy concerns were found to weaken the relationship between role clarity and willingness to accept AI technology (β = −0.063, p < 0.05) but had no significant effect on the relationships between the other independent variables and willingness to accept AI technology. Trust was found to strengthen the relationship between ability and willingness to accept AI technology (β = 0.041, p < 0.05) but had no significant effect on the relationships between the other independent variables and willingness to accept AI technology. Therefore, P4 and P5 are partially supported (see Figure 1).
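The three-step hierarchical procedure can be sketched as nested ordinary-least-squares fits, tracking R² as each block is added. This NumPy illustration on hypothetical data is not the SPSS analysis itself; predictors are mean-centered before forming interaction terms, as is conventional in moderation tests.

```python
import numpy as np

def ols_r2(X: np.ndarray, y: np.ndarray) -> float:
    """R-squared of an OLS fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

def hierarchical_r2(controls, predictors, moderator, y):
    """R-squared at each step: (1) controls only; (2) + predictors;
    (3) + moderator and predictor-by-moderator interaction terms."""
    c = predictors - predictors.mean(axis=0)        # center before interacting
    m = (moderator - moderator.mean()).reshape(-1, 1)
    inter = c * m                                   # columnwise interaction terms
    step1 = ols_r2(controls, y)
    step2 = ols_r2(np.column_stack([controls, predictors]), y)
    step3 = ols_r2(np.column_stack([controls, predictors, m, inter]), y)
    return step1, step2, step3
```

Because the three models are nested, R² can only stay equal or rise at each step; a significant interaction coefficient in step 3 (as for privacy concerns × role clarity above) is what indicates moderation.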

5. Discussion

The purpose of this study was to examine employee acceptance of AI and explore the effect of AI-specific moderators on that process. The results show that the clarity of the user's and the AI's roles, the user's motivation to adopt AI-based technology and the user's ability in the context of adopting AI-based technology increase the willingness to accept AI technology. Privacy concerns related to the use of AI-based technology weaken the relationship between role clarity and the user's willingness to accept AI technology, while trust pertaining to the use of AI-based technology strengthens the relationship between ability and the user's willingness to accept AI technology.

Relevant studies have shown that privacy considerations and awareness of privacy risks undermine users' willingness to use personalized services, although the value of personalized services may outweigh privacy concerns (Awad and Krishnan, 2006). According to a study by Lee and Rha (2016) on location-based mobile commerce, increasing confidence in service providers can help alleviate users' awareness of privacy risks. Accordingly, this study proposed that privacy concern is an essential factor affecting user acceptance of AI-based technologies. The results show that privacy concerns related to the use of AI-based technology weaken only the relationship between role clarity and the user's willingness to accept AI technology; they do not moderate the relationships between the other independent variables and the user's willingness to accept AI technology. These results suggest that privacy concerns relate to the functional process of using AI devices, and that the roles of the user and the AI belong to that functional process.

According to Lee and See (2004), trust bridges the gap between the nature of automation and the individual's beliefs about its function, on the one hand, and the individual's intention to use and rely on it, on the other. Concerning e-commerce, Pavlou (2003) distinguishes between two aspects: trust in the supplier and trust in the trading medium. This differentiation also applies in the context of AI-enabled service encounters. This study proposed that trust in service providers and in specific AI technologies contributes to user confidence in AI-enabled services. The results show that trust related to the use of AI-based technology strengthens only the relationship between ability and the user's willingness to accept AI technology; trust does not moderate the relationships between the other independent variables and the user's willingness to accept AI technology. These results suggest that trust relates to the psychological judgment involved in using AI devices, and that the user's ability in the context of adopting AI-based technology belongs to that psychological assessment.

6. Conclusion

Regarding the research contribution, first, this study is the first to reveal the role of AI in the context of a front-line service encounter to understand how users accept AI technology-enabled service. Despite its growing practical importance, there are few quantitative studies on the individual factors that affect users' willingness to accept AI technology. This study focused directly on participants' individual factors and, in particular, proposed a model that integrates individual factors rather than identifying fragmentary factors. Although these individual factors might not coexist, or might even conflict, this study showed that they can coexist in the context of AI use: people who use AI pursue the individual role, motivation and ability related to AI. Second, this study is the first to examine AI-specific moderators. The results indicate that privacy concerns are associated with the functional process of using AI devices, and that the roles of the user and the AI belong to that functional process, whereas trust relates to the psychological judgment involved in using AI devices, and the user's ability in the context of adopting AI-based technology belongs to that psychological assessment.

For practical implications, first, the results of this study show that individual factors such as role, motivation and ability are important for enhancing the acceptance of AI. Therefore, AI device developers need to help users experience a high level of role clarity, motivation and ability; for example, by designing user interfaces that make the user's and the AI's roles easy to grasp. Second, the results show that privacy concerns operate within the functional process of using AI devices, to which the user's and the AI's roles belong. Therefore, AI device operators need to help users experience a high level of trust. For example, it would be a good idea to build privacy safeguards into the division of roles between users and AIs, and to support various modes of communication (e.g. text, pictures, voice and video) between users and AIs.

These results offer several insights into users' acceptance of AI. However, the following limitations of this research should be acknowledged. First, the present study collected responses from users in South Korea, so national cultural factors may be at play in the research context. Future studies should re-test the model in other countries to assure the reliability of these results. Second, as all variables were measured simultaneously, the stability of their relationships cannot be guaranteed. Although the survey questions were presented in the reverse order of the analysis model to mitigate method issues, reverse causal relationships between variables remain possible; therefore, future studies should consider longitudinal designs. Finally, this study used role clarity, motivation and ability as individual factors and explored privacy concerns and trust as AI-specific moderators. Considering the characteristics of AI, however, future studies may identify other individual and moderating factors; for example, locus of control as a personal factor, and the interaction provided by the AI as a moderating factor.

Figures

Figure 1: Interaction effect

Variables' correlation coefficient

| Variable                                | 1       | 2       | 3       | 4       | 5        | 6       |
|-----------------------------------------|---------|---------|---------|---------|----------|---------|
| 1. Role clarity                         | 1       |         |         |         |          |         |
| 2. Extrinsic motivation                 | 0.021   | 1       |         |         |          |         |
| 3. Intrinsic motivation                 | 0.012   | 0.024   | 1       |         |          |         |
| 4. Ability                              | 0.046   | 0.106   | 0.032   | 1       |          |         |
| 5. Privacy concerns                     | −0.043  | 0.011   | −0.088  | 0.032   | 1        |         |
| 6. Trust                                | 0.026   | 0.061   | 0.042   | 0.057   | −0.051   | 1       |
| 7. Willingness to accept AI technology  | 0.021** | 0.011** | 0.012** | 0.012** | −0.111** | 0.042** |

Note(s): *p < 0.05, **p < 0.01
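A correlation matrix like the one above can be reproduced straightforwardly from survey data. The sketch below is purely illustrative: it uses simulated stand-in variables (the paper's 454-respondent dataset is not public), with variable names chosen here for readability, and computes the Pearson correlation matrix with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 454  # the study's sample size; the values themselves are simulated

# Stand-ins for the seven survey constructs (hypothetical data).
data = {
    "role_clarity": rng.normal(size=n),
    "extrinsic_motivation": rng.normal(size=n),
    "intrinsic_motivation": rng.normal(size=n),
    "ability": rng.normal(size=n),
    "privacy_concerns": rng.normal(size=n),
    "trust": rng.normal(size=n),
}
# Willingness simulated as a weak function of two predictors plus noise,
# loosely mimicking the signs reported in the table.
data["willingness"] = (
    0.02 * data["role_clarity"]
    - 0.11 * data["privacy_concerns"]
    + rng.normal(size=n)
)

matrix = np.column_stack(list(data.values()))
corr = np.corrcoef(matrix, rowvar=False)  # 7 x 7 Pearson correlation matrix
print(corr.shape)  # (7, 7)
```

With real survey data, the same call yields the lower-triangular values reported in the table; significance stars would come from a separate test of each pairwise coefficient.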

Analysis 1

| Variable                                  | Model 1 | Model 2  | Model 3  |
|-------------------------------------------|---------|----------|----------|
| Gender                                    | 0.043   | 0.037    | 0.031    |
| Age                                       | −0.048  | −0.031   | −0.024   |
| Marital status                            | 0.021   | 0.005    | 0.003    |
| Occupation                                | 0.021   | 0.019    | 0.011    |
| Education                                 | −0.052  | −0.042   | −0.029   |
| Income                                    | 0.013   | 0.009    | 0.003    |
| Role clarity                              |         | 0.031**  | 0.028**  |
| Extrinsic motivation                      |         | 0.019**  | 0.014**  |
| Intrinsic motivation                      |         | 0.008    | 0.005    |
| Ability                                   |         | 0.017**  | 0.015**  |
| Privacy concerns                          |         |          | −0.011   |
| Trust                                     |         |          | 0.012    |
| Role clarity × Privacy concerns           |         |          | −0.063   |
| Extrinsic motivation × Privacy concerns   |         |          | 0.011    |
| Intrinsic motivation × Privacy concerns   |         |          | −0.014   |
| Ability × Privacy concerns                |         |          | 0.101    |
| Role clarity × Trust                      |         |          | 0.033    |
| Extrinsic motivation × Trust              |         |          | 0.101    |
| Intrinsic motivation × Trust              |         |          | 0.011    |
| Ability × Trust                           |         |          | 0.041*   |
| Adj. R²                                   | 0.107   | 0.177    | 0.191    |
| F                                         | 4.644** | 10.978** | 15.881** |

Note(s): Dependent variable: willingness to accept AI technology. *p < 0.05, **p < 0.01
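The table reflects a hierarchical (stepwise-block) regression: controls enter in Model 1, main effects in Model 2 and interaction terms in Model 3, with the gain in adjusted R² indicating the contribution of each block. The sketch below illustrates this structure with plain NumPy on simulated data; it uses only a subset of the paper's predictors and one interaction (Ability × Trust), and the mean-centring of predictors before forming products is a common convention assumed here, not something the paper states.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients and adjusted R^2.
    X excludes the intercept column, which is added internally."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    k = Xc.shape[1] - 1  # number of predictors
    return beta, 1 - (1 - r2) * (n - 1) / (n - k - 1)

rng = np.random.default_rng(1)
n = 454
controls = rng.normal(size=(n, 6))         # gender, age, etc. (stand-ins)
role, ability = rng.normal(size=(n, 2)).T  # two of the four main effects
trust = rng.normal(size=n)                 # one moderator
# Simulated outcome with a main effect and a moderation effect built in.
y = 0.3 * ability + 0.2 * trust * ability + rng.normal(size=n)

# Centre predictors before forming the product term (assumed convention).
role_c, ability_c, trust_c = (v - v.mean() for v in (role, ability, trust))
X1 = controls                                           # Model 1: controls
X2 = np.column_stack([X1, role_c, ability_c, trust_c])  # Model 2: + mains
X3 = np.column_stack([X2, ability_c * trust_c])         # Model 3: + Ability x Trust

for name, X in [("Model 1", X1), ("Model 2", X2), ("Model 3", X3)]:
    _, adj = fit_ols(X, y)
    print(name, "adj. R^2 =", round(adj, 3))
```

A significant coefficient on the product term in Model 3, together with a rise in adjusted R² over Model 2, is the pattern the paper reports for Ability × Trust.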

References

Agarwal, R. and Karahanna, E. (2000), “Time flies when you're having fun: cognitive absorption and beliefs about information technology usage”, MIS Quarterly, Vol. 24 No. 4, pp. 665-694.

Awad, N.F. and Krishnan, M.S. (2006), “The personalization privacy paradox: an empirical evaluation of information transparency and the willingness to be profiled online for personalization”, MIS Quarterly, Vol. 30 No. 1, pp. 13-28.

Blut, M., Wang, C. and Schoefer, K. (2016), “Factors influencing the acceptance of self-service technologies: a meta-analysis”, Journal of Service Research, Vol. 19 No. 4, pp. 396-416.

Brown, D. (2003), “When managers balk at doing HR's work”, Canadian HR Reporter, Vol. 16, p. 1.

Chellappa, R.K. and Sin, R.G. (2005), “Personalization versus privacy: an empirical examination of the online Consumer's dilemma”, Information Technology and Management, Vol. 6 No. 2, pp. 181-202.

Dhamija, P. and Bag, S. (2020), “Role of artificial intelligence in operations environment: a review and bibliometric analysis”, The TQM Journal, Vol. 32 No. 4, pp. 869-896.

Flavian, C., Guinalíu, M. and Jordan, P. (2019), “Antecedents and consequences of trust on a virtual team leader”, European Journal of Management and Business Economics, Vol. 28 No. 1, pp. 2-24.

Frank, M., Roehrig, P. and Pring, B. (2017), What to Do when Machines Do Everything: How to Get Ahead in a World of AI, Algorithms, Bots and Big Data, John Wiley & Sons, Hoboken, NJ.

Gartner (2016), “Hype cycle for emerging technologies identifies three key trends that organizations must track to gain competitive advantage”.

Genpact (2017), “The consumer: sees AI benefits but still prefers the human touch”, available at: http://www.genpact.com/lp/ai-research-consumer (accessed 12 May 2018).

Giovanis, A., Assimakopoulos, C. and Sarmaniotis, C. (2019), “Adoption of mobile self-service retail banking technologies: the role of technology, social, channel and personal factors”, International Journal of Retail and Distribution Management, Vol. 47 No. 9, pp. 894-914.

Heater, B. (2017), “After pushing back, Amazon hands over Echo data in Arkansas murder case”, available at: http://social.techcrunch.com/2017/03/07/amazon-echomurder (accessed 7 June 2018).

Hengstler, M., Enkel, E. and Duelli, S. (2016), “Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices”, Technological Forecasting and Social Change, Vol. 105, pp. 105-120.

Hernandez-Fernandez, A. and Lewis, M.C. (2019), “Brand authenticity leads to perceived value and brand trust”, European Journal of Management and Business Economics, Vol. 28 No. 3, pp. 222-238.

Hoffman, D.L. and Novak, T.P. (2017), “Consumer and object experience in the Internet of Things: an assemblage theory approach”, Journal of Consumer Research, Vol. 44 No. 6, pp. 1178-1204.

Hunt, E. (2016), “Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter”, The Guardian, available at: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter (accessed 24 March 2016).

Isaac, M. and Lohr, S. (2017), “Unroll.me service faces backlash over a widespread practice: selling user data”, The New York Times, available at: https://www.nytimes.com/2017/04/24/technology/personal-data-firm-slice-unroll-me-backlash-uber.html (accessed 7 June 2017).

Jarvenpaa, S.L., Tractinsky, N. and Vitale, M. (1999), “Consumer trust in an Internet store”, Information Technology and Management, Vol. 1 Nos 1-2, pp. 45-71.

Jones, G.R. (1986), “Socialization tactics, self-efficacy, and newcomers' adjustments to organizations”, Academy of Management Journal, Vol. 29 No. 2, pp. 262-279.

Kelly, P., Lawlor, J. and Mulvey, M. (2019), “Self-service technologies in the travel, tourism, and hospitality sectors: principles and practice”, in Ivanov, S. and Webster, C. (Eds), Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality, Emerald Publishing Limited, pp. 57-78.

Lardinois, F. (2017), “Google says its machine learning tech now blocks 99.9% of Gmail spam and phishing messages”, available at: https://techcrunch.com/2017/05/31/google-says-its-machine-learning-tech-now-blocks-99-9-of-gmail-spam-and-phishingmessages/ (accessed 7 June 2018).

Lee, J.D. and See, K.A. (2004), “Trust in automation: designing for appropriate reliance”, Human Factors, Vol. 46 No. 1, pp. 50-80.

Lee, J.M. and Rha, J.Y. (2016), “Personalization–privacy paradox and consumer conflict with the use of location-based mobile commerce”, Computers in Human Behavior, Vol. 63, pp. 453-462.

Leonhard, G. (2016), Technology vs. Humanity: The Coming Clash Between Man and Machine, Fast Future Publishing, New York.

Lowry, P.B., Gaskin, J.E., Twyman, N.W., Hammer, B. and Roberts, T.L. (2013), “Taking ‘fun and games’ seriously: proposing the hedonic-motivation system adoption model (HMSAM)”, Journal of the Association for Information Systems, Vol. 14 No. 11, pp. 617-671.

Lu, L., Cai, R. and Gursoy, D. (2019), “Developing and validating a service robot integration willingness scale”, International Journal of Hospitality Management, Vol. 80, pp. 36-51.

MacKenzie, S.B. and Podsakoff, P.M. (2012), “Common method bias in marketing: causes, mechanisms, and procedural remedies”, Journal of Retailing, Vol. 88 No. 4, pp. 542-555.

Markoff, J. and Mozur, P. (2015), “For sympathetic ear, more Chinese turn to smartphone program”, The New York Times.

Marler, J. and Dulebohn, J.H. (2005), “A model of employee self-service technology acceptance”, in Martocchio, J.J. (Ed.), Research in Personnel and Human Resource Management, JAI Press, Greenwich, CT, Vol. 24, pp. 139-182.

Mayer, R.C., Davis, J.H. and Schoorman, F.D. (1995), “An integrative model of organizational trust”, Academy of Management Review, Vol. 20 No. 3, pp. 709-734.

McKnight, D.H., Cummings, L.L. and Chervany, N.L. (1998), “Initial trust formation in new organizational relationships”, Academy of Management Review, Vol. 23 No. 3, p. 473.

Oliver, R.L. and Bearden, W.O. (1985), “Crossover effects in the theory of reasoned action: a moderating influence attempt”, Journal of Consumer Research, Vol. 12, pp. 324-340.

Parra-Lopez, E., Martínez-González, J.A. and Chinea-Martin, A. (2018), “Drivers of the formation of e-loyalty towards tourism destinations”, European Journal of Management and Business Economics, Vol. 27 No. 1, pp. 66-82.

Pavlou, P.A. (2003), “Consumer acceptance of electronic commerce: integrating trust and risk with the technology acceptance model”, International Journal of Electronic Commerce, Vol. 7 No. 3, pp. 101-134.

Podsakoff, P.M., MacKenzie, S.B., Lee, J.-Y. and Podsakoff, N.P. (2003), “Common method biases in behavioral research: a critical review of the literature and recommended remedies”, Journal of Applied Psychology, Vol. 88 No. 5, pp. 879-903.

PwC's Global Consumer Insights Survey (2018), “Artificial intelligence: touchpoints with consumers”, available at: https://www.pwc.com/gx/en/retail-consumer/assets/artificial-intelligence-global-consumer-insights-survey.pdf (accessed 7 June 2018).

Rizzo, J.R., House, R.J. and Lirtzman, S.I. (1970), “Role conflict and ambiguity in complex organizations”, Administrative Science Quarterly, Vol. 15 No. 2, pp. 150-163.

Rosen, G. (2017), “Getting our community help in real time”, Facebook Newsroom, available at: https://newsroom.fb.com/news/2017/11/getting-our-communityhelp-in-real-time/ (accessed 7 June 2018).

Rosenberg, M. and Frenkel, S. (2018), “Facebook's role in data misuse sets off storms on two continents”, The New York Times, available at: https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html.

Tyagi, P.K. (1985), “Relative importance of key job dimensions and leadership behaviors in motivating salesperson work performance”, Journal of Marketing, Vol. 49, pp. 76-86.

Upadhyay, A.K. and Khandelwal, K. (2019), “Artificial intelligence-based training learning from application”, Development and Learning in Organizations, Vol. 33 No. 2, pp. 20-23.

van Tonder, E., Saunders, S.G. and de Beer, L.T. (2020), “A simplified approach to understanding customer support and help during self-service encounters”, International Journal of Quality and Reliability Management, Vol. 37 No. 4, pp. 609-634.

Venkatesh, V., Thong, J. and Xu, X. (2012), “Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology”, MIS Quarterly, Vol. 36 No. 1, pp. 157-178.

Wang, X., Yuen, K.F., Wong, Y.D. and Teo, C.-C. (2019), “Consumer participation in last-mile logistics service: an investigation on cognitions and affects”, International Journal of Physical Distribution and Logistics Management, Vol. 49 No. 2, pp. 217-238.

Weisbaum, H. (2018), “Trust in Facebook has dropped by 66 percent since the Cambridge Analytica Scandal”, available at: https://www.nbcnews.com/business/consumer/trust-facebook-has-dropped-51-percent-cambridge-analytica-scandal-n867011 (accessed 28 May 2018).

Wilson, H.J. and Daugherty, P. (2018), “AI will change health care jobs for the better”, Harvard Business Review, available at: https://hbr.org/2018/03/ai-will-change-health-carejobs-for-the-better.

Wu, C.-G. and Wu, P.-Y. (2019), “Investigating user continuance intention toward library self-service technology: the case of self-issue and return systems in the public context”, Library Hi Tech, Vol. 37 No. 3, pp. 401-417.

Xu, H., Luo, X.R., Carroll, J.M. and Rosson, M.B. (2011), “The personalization privacy paradox: an exploratory study of decision making process for location-aware marketing”, Decision Support Systems, Vol. 51 No. 1, pp. 42-52.

Zhai, Y., Yan, J., Zhang, H. and Lu, W. (2020), “Tracing the evolution of AI: conceptualization of artificial intelligence in mass media discourse”, Information Discovery and Delivery, Vol. ahead-of-print No. ahead-of-print. doi: 10.1108/IDD-01-2020-0007.

Further reading

Gibbs, N., Pine, D.W. and Pollack, K. (2017), Artificial Intelligence: The Future of Humankind, Time Books, New York.

LaGrandeur, K. and Hughes, J.J. (Eds), (2017), Surviving the Machine Age. Intelligent Technology and the Transformation of Human Work, Palgrave Macmillan, London.

Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, Alfred A. Knopf, New York.

Corresponding author

Youngkeun Choi can be contacted at: penking1@smu.ac.kr
