A study of employee acceptance of artificial intelligence technology

Purpose – This study aims to reveal the role of artificial intelligence (AI) in the context of the front-line service encounter to understand how users accept AI technology-enabled service.

Design/methodology/approach – This study collected data from 454 Korean employees through an online survey and used hierarchical regression to test the hypotheses empirically.

Findings – First, clarity of the user's and AI's roles, the user's motivation to adopt AI-based technology and the user's ability in the context of adopting AI-based technology increase their willingness to accept AI technology. Second, privacy concerns related to the use of AI-based technology weaken the relationship between role clarity and the user's willingness to accept AI technology, and trust related to the use of AI-based technology strengthens the relationship between ability and the user's willingness to accept AI technology.

Originality/value – This study is the first to reveal the role of AI in the context of the front-line service encounter to understand how users accept AI technology-enabled service.


Introduction
Employee self-service (ESS) technology is currently an open innovation of particular interest in the human resource management context because of anticipated cost savings and other efficiency-related benefits (Giovanis et al., 2019; van Tonder et al., 2020). It is a class of web-based technology that allows employees and managers to conduct much of their own data management and transaction processing rather than relying on human resource (HR) or administrative staff to perform these duties (Marler and Dulebohn, 2005). ESS technology can allow employees to update personal information, change their benefits selections or register for training. Shifting such duties to the individual employee enables the organization to devote fewer specialized resources to these activities, often allowing HR to focus on more strategic functions. Despite the intended benefits, the implementation of ESS technology poses many challenges. Because ESS functionality is typically not associated with the core functions of professional employees' jobs, these employees may be less motivated to learn and use the technology (Brown, 2003; Marler and Dulebohn, 2005). However, full adoption of ESS technology is necessary to realize the intended benefits and recoup the significant investments in technology.

The history of technology has shown that there is much hype about new technologies, and after the initial inflated expectations, the trough of disillusionment usually follows (Gartner, 2016). Because trade press and social media posts extol the virtues of new technologies, managers are keen to jump on the new-technology rollercoaster and adopt technological solutions without considering whether they are worth the effort or justify their mystique and novelty. Artificial intelligence (AI) is an example of a technology that receives much attention worldwide in the media, academia and politics (Zhai et al., 2020; Dhamija and Bag, 2020). However, attitudes toward AI range from positive assessments citing relief from human physical labor and new business opportunities (Frank et al., 2017) to fears of humans becoming obsolete in a fully robotic society (Leonhard, 2016). Therefore, it is essential to understand the drivers of AI-based ESS acceptance to increase the chances of success when introducing AI-based ESS. However, few researchers have examined how employees adopt AI-based ESS.
To address this research gap, this study takes a closer look at employees' perspectives on how and why they embrace a narrow, business-focused AI application in service encounters. This study presents a conceptual framework based on previous reviews, practices and theories to identify the role of AI in the context of service encounters and to explain employee acceptance of AI in service research. The framework extends beyond conventional self-service technology acceptance constructs to include AI-specific variables such as privacy concerns and trust. A process model organizing the salient variables contributing to employee reactions to the introduction of technology to the service encounter is proposed, and hypotheses about the relationships among these variables are developed. This study concludes with research issues related to the framework that serve as catalysts for future research. It is the first study to reveal the role of AI in the front-line service encounter to understand how users accept services based on AI technology.

Theoretical background and hypothesis development
This study focuses on understanding and theoretically explaining user acceptance of AI in service sectors. Previous studies have empirically investigated the antecedents of self-service technology (SST) adoption, and this study includes those critical variables in its theoretical framework (Wu and Wu, 2019; Wang et al., 2019; Kelly et al., 2019). In the work of Meuter et al., user adoption of SST depends on role clarity (does the user know how to use the SST and what to do?), motivation (what induces the user to try the SST?) and ability (does the user have the resources and skills to use the SST?). This core configuration is influenced by the nature of the technology itself and by individual user differences. A later meta-analysis of SST acceptance explained the complexity of the variables affecting SST acceptance (Blut et al., 2016). In addition to what is already known about SST acceptance, this study argues that the acceptance of AI in service encounters depends on AI-specific variables beyond those traditionally studied in SST research. This set of variables includes privacy concerns, trust in the technology and the company, and perceptions of the technology as threatening.

Core constructs
Unlike SST, AI-based technology can also act as an independent agent, whether or not users are aware of the AI's behavior (Hoffman and Novak, 2017; Upadhyay and Khandelwal, 2019). For example, Google's spam filter, one of AI's first applications, detects and blocks 99.9% of spam and phishing messages without user input (Lardinois, 2017). Facebook recently introduced an AI-based suicide prevention tool that provides support by reaching out to users who express suicidal thoughts, suggesting that they contact friends, family members or helplines, and providing information on available help resources (Rosen, 2017). The concept of role clarity should therefore be expanded to include clarity about the roles of both the user and the AI in the service process. When interacting with AI-enabled technology, users need to understand that both parties contribute to the jointly produced service. Role clarity is salient from two perspectives: (1) establishing the sharing of responsibilities in joint service production and (2) promoting user confidence in the technology through transparency.
Both actors (the user and the AI) must perform their parts as designed to achieve the desired service outcome. Role clarity is essential to ensure the successful integration of AI inputs with the user's inputs; it ensures that the user understands the steps the AI performs in the service delivery process so that service can be performed seamlessly. Misunderstanding or lack of role clarity can have undesirable and even tragic consequences when the stakes are exceptionally high. For example, the 2013 Asiana Airlines crash in San Francisco was a disastrous result of insufficient role clarity: the pilots, relying on the plane's automation, expected the autothrottle to come out of its idle position on its own as the plane began to lose speed. Users can likewise be engaged with AI yet lack role clarity in contexts such as self-driving cars: which activities does the AI-enabled vehicle carry out, and what does the user do? Role clarity can also indicate transparency about the nature of an encounter, which forms the basis of trust (Hengstler et al., 2016). Because AI can act as an independent agent, the level of transparency about the AI's role in the encounter can affect user confidence in the technology. Failure to fully disclose the role of the AI agent and its behavior during and after the encounter may erode user confidence in the technology and the service provider.
Therefore, role clarity can include questions about what data the AI collects during its interactions and how it uses those data during and after the encounter. Amazon made news when, during a criminal investigation, it was ordered to submit audio recordings made by a personal Echo device as evidence (Heater, 2017). Many users were surprised to learn that Alexa recorded and stored audio even when the owner had not activated the device. Unroll.me, a free service that helps users unsubscribe from email subscription lists, is another example of a lack of transparency causing user backlash: users were angry when they learned that Unroll.me was scanning their email and selling the data to third parties (Isaac and Lohr, 2017). Such cases, in which users lack clarity about the AI's role, raise concerns about data privacy and create barriers to the adoption of AI-based technology.
P1. Clarity of user and AI's roles is positively associated with the user's willingness to accept AI technology.
AI-based technology improves convenience, efficiency and service speed, providing tremendous value to users and increasing their motivation to embrace, adopt and use such technologies. Whether Alexa provides updates with relevant news or Google Assistant notifies users about upcoming meetings and estimates travel time based on actual traffic data, the information is readily available. These products continuously learn from the interaction data they collect, enhancing their ability to meet individual needs. For example, users can initially program the Nest thermostat themselves, but once Nest gains insight into their habits and identifies relevant behavioral patterns, it takes independent measures to fine-tune the initial schedule, optimizing energy efficiency while abiding by their temperature preferences. While AI-based technology can help users perform useful tasks, unlike most SSTs it can also be a source of pleasure and enjoyment, providing hedonic value to users. Consider Microsoft's XiaoIce, a chatbot that imitates human interaction with jokes and favorite songs and is intended to become people's virtual friend. Ever since XiaoIce was introduced in China, the friendly chatbot has captivated millions of Chinese users (Markoff and Mozur, 2015). Cognitive absorption is an essential intrinsic-motivation variable in the context of hedonic technology adoption (Agarwal and Karahanna, 2000; Lowry et al., 2013), which helps explain why XiaoIce so attracts users.
P2. User's motivation to adopt AI-based technology is positively associated with the user's willingness to accept AI technology.

Ability refers to the user's capacity to perform the steps required to interact with SST within the SST framework. This construct needs to be expanded in the context of AI-supported service encounters. For example, voice-enabled AI devices can eliminate technology barriers, making it easier to interact with technology regardless of the user's technical capabilities. At the same time, users can evaluate the degree to which AI, or the technology's role in the service experience, enhances or restricts their own capabilities. For example, users can regard AI as an extension of their cognitive or physical abilities that improves service performance by integrating human and AI capabilities (Wilson and Daugherty, 2018). AI has the potential to democratize services by making them easier to use, but the opposite is also possible: a lack of technical expertise or adequate financial resources may prevent users from accessing AI-based technologies, limiting adoption. For example, PwC's Global Consumer Insights Survey recently showed that early AI adopters tend to be more tech-savvy and less price-sensitive than non-adopters (PwC's Global Consumer Insights Survey, 2018).
P3. User's ability in the context of the adoption of AI-based technology is positively associated with the user's willingness to accept AI technology.

AI-specific moderators
Compare the success of Microsoft's XiaoIce with the failure of Microsoft's US-based chatbot Tay, which started as a social bot on Twitter. Microsoft had to shut Tay down shortly after launch because, through interactions with other Twitter users, it quickly began discussing divisive, politically and racially charged topics (Hunt, 2016). Tay's failure and XiaoIce's success demonstrate how important the amount and quality of the data collected in interactions are for training AI and achieving high levels of AI performance. Users must share personal information to receive personalization, which leads to the personalization-privacy paradox (Lee and Rha, 2016): users need to find the right balance between maximizing the benefits of personalization and minimizing privacy risks by limiting their disclosure. According to Genpact's study of 5,000 respondents in the United States, the UK and Australia, privacy issues are one of the most significant obstacles to user adoption of AI-based solutions (Genpact, 2017). More than 50% of the survey participants said they felt uncomfortable with the idea of companies using AI to access their personal data, and 71% said they did not want companies to use AI if it violated their privacy, even if it improved the user experience (Genpact, 2017). At the same time, studies have shown that privacy considerations and awareness of privacy risks reduce users' willingness to use personalized services, although the value of personalized services may outweigh privacy concerns (Awad and Krishnan, 2006). According to a study by Lee and Rha (2016) on location-based mobile commerce, increasing confidence in service providers can help alleviate users' awareness of privacy risks. Thus, privacy concerns are an essential factor affecting user acceptance of AI-based technologies.
P4. Privacy concerns related to the use of AI-based technology weaken the relationship between core constructs and the user's willingness to accept AI technology.
User confidence in AI-based technology can draw on existing research on automation and human interaction. Concerning automation, Lee and See (2004) define trust as the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability. Both the social-psychology and marketing literature identify uncertainty and vulnerability as essential attributes that activate trust in interpersonal and organizational relationships: when users in a service encounter cannot control the actions of the service provider, vulnerability arises from that uncertainty, and the outcome of the encounter directly affects the user. Trust is especially important in the early stages of a relationship, such as the adoption of a new technology, when the situation is ambiguous and uncertain. According to Lee and See (2004), trust bridges the gap between the nature of automation, the individual's belief in its function and the individual's intention to use and rely on it.

Concerning e-commerce, Pavlou (2003) distinguishes between trust in the supplier and trust in the trading medium. This differentiation also applies in the context of AI-supported service encounters: trust in the service provider and trust in the specific AI technology both contribute to user confidence in AI-supported services (Flavian et al., 2019; Hernandez-Fernandez and Lewis, 2019; Parra-Lopez et al., 2018). Mayer et al. (1995) identified three key factors that determine the trustworthiness of an organization: ability, integrity and benevolence. Ability represents the domain-specific expertise, skills and competencies associated with the service interaction. Integrity evaluates whether the user can discern and accept the principles that the provider follows. Benevolence relates to the alignment between the provider's and the user's motives and intentions. Recent events involving Facebook and Cambridge Analytica demonstrated a lack of integrity and benevolence in the eyes of Facebook users, whose data was collected without disclosure and who had little awareness of Facebook's business model (Rosenberg and Frenkel, 2018). This caused a sharp drop in public confidence in Facebook (Weisbaum, 2018).

In the context of automation, Lee and See (2004) identify performance, process and purpose as the bases of trust. Performance is similar to ability and represents whether the technology functions in a reliable, predictable and capable manner. Process refers to the extent to which the AI-enabled technology is appropriate for the service encounter and can achieve the user's goals. Users will evaluate the service provider's ability, integrity and benevolence, and through their experience before, during and after the encounter they will judge the performance, process and purpose of the AI-enabled technology. These factors will contribute to the overall level of confidence in new AI-supported services. The robustness or fragility of trust depends on the number of sources the user perceives as trustworthy (McKnight et al., 1998). Regarding the adoption of AI-based solutions in B2B services, Hengstler et al. (2016) found that transparency of the development process and the gradual introduction of technology are important strategies for increasing confidence in an innovation.
Companies may be better off introducing new capabilities gradually, in a series of steps that engage users' curiosity and desire for novelty, instead of doing it in one big leap that may alarm users and come across as too big of a departure from more traditional service delivery alternatives.
P5. User's trust in AI-based technology strengthens the relationship between core constructs and the user's willingness to accept AI technology.

Sample and data collection
This study adopted an online survey method using convenience sampling for data collection. This approach is instrumental in collecting data from a large number of individuals in a relatively short time and at a lower cost. With the agreement of the target companies, the professional survey company acquired employees' email addresses through the companies' human resource management departments. The survey company initially contacted 11 employees in the target companies in Korea. Each first-level contact (or "sampling seed") was asked to forward the invitation email to colleagues at their organization and to ask those recipients to send the email on to other staff. The potential maximum number of recipients could be assumed to include all employees of the target companies, which numbered over 500 at that time. The seeds of this respondent-driven sampling method (also known as snowball sampling) were diverse in demographic characteristics. However, this method has been challenged because of possible self-selection bias, bias that may arise when the topic of the survey is controversial, or bias arising when differences in the size of social networks are a factor. None of these reported biases was deemed to apply to the focus of the present study.
According to social research methodology, the response rate is less critical as long as the representativeness of the sample is secured, although some prerequisites apply. Since this study used a snowball sampling method, the survey was designed to end when 500 people, 3% of the target companies' employees, had responded. This was considered reasonable given the survey budget and sample size.
The survey was administered for one month, from January 1 to 31, 2019. All participants received an email explaining the purpose of the survey, emphasizing voluntary participation and assuring confidentiality, along with a link to the online survey. To increase the response rate and reduce non-response bias, the professional survey company automatically gave respondents an electronic coffee-voucher gift card upon completing the survey. Of the initial pool of participants surveyed, 500 individuals returned completed surveys, yielding a response rate of 100%. After deleting surveys with (1) no code identifiers or (2) an excessive number of missing responses, a final sample of 454 remained.

Survey instrument
The survey instrument used in this study consisted of two sections: demographic information and main questions. The demographic section asked about gender, age, marital status, occupation, education and income. Regarding the main questions, role clarity was measured with five items adapted from Rizzo et al. (1970). Extrinsic motivation and intrinsic motivation were each measured with six items adapted from Tyagi (1985). Ability was measured with six items adapted from Jones (1986) and Oliver and Bearden (1985). The measures for privacy risk were adapted from Chellappa and Sin (2005) and Xu et al. (2011), using four questions concerning perceived risks from providing personal information for the use of AI. Trust was measured with three items adapted from Jarvenpaa et al. (1999). Willingness to accept AI technology was measured with three items adapted from Venkatesh et al. (2012) and Lu et al. (2019). All items were measured on five-point Likert scales.

Analysis results

Verification of reliability and validity
The validity of the variables was verified through principal components factor analysis with varimax rotation. The number of factors was determined by the eigenvalue-greater-than-1.0 criterion, and items were retained for analysis only if their factor loading was greater than 0.5 (the factor loading represents the correlation between an item and a factor). The reliability of the variables was judged by internal consistency, as assessed by Cronbach's alpha; each construct was treated as a single measure only if its Cronbach's alpha was 0.7 or higher: role clarity (0.86), extrinsic motivation (0.77), intrinsic motivation (0.81), ability (0.80), privacy concerns (0.74), trust (0.79) and willingness to accept AI technology (0.79).
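To make these reliability and validity checks concrete, the following is a minimal Python sketch of Cronbach's alpha and the eigenvalue-greater-than-1.0 rule. The data, construct name and item columns are hypothetical stand-ins for the study's survey responses, not the actual dataset.

```python
# Minimal sketch of the reliability/validity checks described above,
# assuming item responses are columns of a pandas DataFrame (hypothetical data).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: internal consistency of a set of items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point Likert responses for a five-item construct
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(454, 5)),
                  columns=[f"role_clarity_{i}" for i in range(1, 6)])

# Eigenvalue > 1.0 rule (Kaiser criterion) on the item correlation matrix
eigenvalues = np.linalg.eigvalsh(df.corr().to_numpy())
n_factors = int((eigenvalues > 1.0).sum())

print(f"alpha = {cronbach_alpha(df):.2f}, factors retained = {n_factors}")
```

With the study's criteria, a construct would be kept as a single measure only when this alpha reaches 0.7 or higher.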

Common method bias
As with all self-reported data, there is potential for common method variance (CMV) (MacKenzie and Podsakoff, 2012; Podsakoff et al., 2003). To alleviate and assess the magnitude of common method bias, this study adopted several procedural and statistical remedies suggested by Podsakoff et al. (2003). First, during the survey, respondents were guaranteed anonymity and confidentiality to reduce evaluation apprehension. Further, careful attention was paid to the wording of the items, and the questionnaire was developed carefully to minimize item ambiguity. These procedures make respondents less likely to edit their responses to be more socially desirable, acquiescent or consistent with how they think the researcher wants them to respond (Podsakoff et al., 2003). Second, this study conducted Harman's one-factor test on all of the items. A principal component factor analysis revealed that the first factor explained only 34.1% of the variance; thus, no single factor emerged, nor did one factor account for most of the variance.
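For readers who wish to replicate Harman's single-factor test, the sketch below runs an unrotated principal component analysis on all items and reports the share of variance explained by the first factor. The item matrix here is simulated; the number of items is an assumption for illustration.

```python
# Minimal sketch of Harman's single-factor test: check how much variance
# the first unrotated principal component explains across all survey items.
import numpy as np
import pandas as pd

def harman_first_factor_share(items: pd.DataFrame) -> float:
    corr = items.corr().to_numpy()
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
    # Eigenvalues of a correlation matrix sum to the number of items,
    # so the first eigenvalue's share is the variance explained by factor 1.
    return eigenvalues[0] / eigenvalues.sum()

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.normal(size=(454, 33)))  # hypothetical item responses
share = harman_first_factor_share(items)
# CMV is usually judged problematic when one factor dominates (e.g. > 50%).
print(f"first factor explains {share:.1%} of total variance")
```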
Furthermore, the measurement model was reassessed with the addition of a latent CMV factor (Podsakoff et al., 2003), onto which all indicator variables in the measurement model were loaded. The addition of the common variance factor did not improve the fit over the measurement model without that factor, and all indicators remained significant. These results suggest that CMV is not a great concern in this study.

Table 1 summarizes the Pearson correlation test results between variables and reports the degree of multicollinearity between the independent variables. Role clarity (β = 0.021, p < 0.01), extrinsic motivation (β = 0.011, p < 0.01), intrinsic motivation (β = 0.012, p < 0.01), ability (β = 0.012, p < 0.01), privacy concerns (β = −0.111, p < 0.01) and trust (β = 0.042, p < 0.01) are significantly associated with willingness to accept AI technology. A minimum tolerance of 0.812 and a maximum variance inflation factor of 1.231 show that the statistical significance of the data analysis was not compromised by multicollinearity.
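The tolerance and variance inflation factor (VIF) diagnostics reported above can be reproduced as in the sketch below, which uses statsmodels' variance_inflation_factor; the predictor names and simulated data are assumptions standing in for the study's variables.

```python
# Minimal sketch of the multicollinearity check reported above
# (tolerance = 1 / VIF). Predictor data here is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(454, 6)),
                 columns=["role_clarity", "extrinsic_motivation",
                          "intrinsic_motivation", "ability",
                          "privacy_concerns", "trust"])
X_const = sm.add_constant(X)  # VIF should be computed with an intercept

for i, name in enumerate(X.columns, start=1):  # skip the constant at index 0
    vif = variance_inflation_factor(X_const.to_numpy(), i)
    print(f"{name}: VIF = {vif:.3f}, tolerance = {1 / vif:.3f}")
```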

Hypothesis testing
This study used hierarchical multiple regression analyses in SPSS 24.0, with three steps, to test the hypotheses. In the first step, demographic variables were entered as controls. Independent variables were entered in the second step. In the final step, multiplicative interaction terms between the independent variables and the moderating variables were entered to directly test the hypotheses about moderating effects. Table 2 shows the results. First, among demographic variables, being male (β = 0.043, p < 0.05) is positively related to willingness to accept AI technology, and age (β = −0.048, p < 0.05) is negatively related to it. Second, regarding the relationships between the independent variables and the dependent variable, Table 2 shows that the independent variables are statistically significantly associated with willingness to accept AI technology. Role clarity (β = 0.031, p < 0.01) is positively related to willingness to accept AI technology. Extrinsic motivation (β = 0.019, p < 0.01) and intrinsic motivation (β = 0.008, p < 0.01) have positive relationships with willingness to accept AI technology. Ability (β = 0.017, p < 0.01) shows a positive association with willingness to accept AI technology. Therefore, P1-P3 are supported.
Lastly, model 3, which adds the moderators, shows the interactions between the independent variables and the moderating variables on willingness to accept AI technology. Privacy concerns were found to weaken the relationship between role clarity and willingness to accept AI technology (β = −0.063, p < 0.05) but had no significant moderating effect on the relationships between the other independent variables and willingness to accept AI technology. Trust was found to strengthen the relationship between ability and willingness to accept AI technology (β = 0.041, p < 0.05) but had no significant moderating effect on the relationships between the other independent variables and willingness to accept AI technology. Therefore, P4 and P5 are partially supported (see Figure 1).
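The study ran this procedure in SPSS; as an illustrative equivalent, the following Python sketch reproduces the three-step hierarchical moderated regression with statsmodels. The variable names and simulated data are assumptions, and predictors are mean-centered before interaction terms are formed, a common practice for interpretable moderation tests.

```python
# Minimal sketch of the three-step hierarchical moderated regression
# described above (illustrative; the study used SPSS 24.0 on real data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 454
df = pd.DataFrame(rng.normal(size=(n, 8)),
                  columns=["gender", "age", "role_clarity", "ext_motivation",
                           "int_motivation", "ability", "privacy_concerns",
                           "trust"])
df["willingness"] = rng.normal(size=n)

# Mean-center continuous predictors so interaction terms are interpretable
for col in df.columns.drop(["gender", "willingness"]):
    df[col] = df[col] - df[col].mean()

step1 = smf.ols("willingness ~ gender + age", data=df).fit()
step2 = smf.ols("willingness ~ gender + age + role_clarity + ext_motivation"
                " + int_motivation + ability", data=df).fit()
step3 = smf.ols("willingness ~ gender + age + role_clarity + ext_motivation"
                " + int_motivation + ability + privacy_concerns + trust"
                " + role_clarity:privacy_concerns + ability:trust",
                data=df).fit()

for i, m in enumerate([step1, step2, step3], start=1):
    print(f"Step {i}: R^2 = {m.rsquared:.3f}")
```

A significant negative coefficient on role_clarity:privacy_concerns and a significant positive coefficient on ability:trust in step 3 would correspond to the partial support reported for P4 and P5.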

Discussion
The purpose of this study was to examine employee acceptance of AI and to explore the effect of AI-specific moderators on that process. The results show that the clarity of the user's and AI's roles, the user's motivation to adopt AI-based technology and the user's ability in the context of adopting AI-based technology increase their willingness to accept AI technology. In addition, privacy concerns related to the use of AI-based technology weaken the relationship between role clarity and the user's willingness to accept AI technology, while trust related to the use of AI-based technology strengthens the relationship between ability and the user's willingness to accept AI technology.

Relevant studies have shown that privacy considerations and awareness of privacy risks reduce users' willingness to use personalized services, although the value of personalized services may outweigh privacy concerns (Awad and Krishnan, 2006). According to a study by Lee and Rha (2016) on location-based mobile commerce, increasing confidence in service providers can help alleviate users' awareness of privacy risks. This study therefore suggested that privacy concerns are an essential factor affecting user acceptance of AI-based technologies. The results show that privacy concerns related to the use of AI-based technology weaken only the relationship between role clarity and the user's willingness to accept AI technology; privacy concerns do not moderate the relationships between the other independent variables and the user's willingness to accept AI technology. These results suggest that privacy concerns are tied to the functional process of using AI devices, and the user's and AI's roles in using AI devices belong to that functional process.

According to Lee and See (2004), trust bridges the gap between the nature of automation, the individual's belief in its function and the individual's intention to use and rely on it. Concerning e-commerce, Pavlou (2003) distinguishes between trust in the supplier and trust in the trading medium, and this differentiation also applies in the context of AI-supported service encounters. This study suggested that trust in service providers and in specific AI technologies contributes to user confidence in AI-supported services. The results show that trust related to the use of AI-based technology strengthens only the relationship between ability and the user's willingness to accept AI technology; trust does not moderate the relationships between the other independent variables and the user's willingness to accept AI technology. These results suggest that trust is tied to the psychological judgment involved in using AI devices, and the user's ability in the context of adopting AI-based technology belongs to that psychological judgment.

Conclusion
Regarding research contributions, first, this study is the first to reveal the role of AI in the context of the front-line service encounter to understand how users accept AI technology-enabled service. Despite the growing practical importance, there are few quantitative studies on the individual factors that affect users' willingness to accept AI technology. This study focused directly on participants' individual factors and, in particular, proposed a model that integrates individual factors rather than identifying fragmentary ones. Although these individual factors might not coexist or might even conflict, this study showed that they can coexist in the context of AI use: people who use AI pursue the individual role, motivation and ability related to AI. Second, this study is the first to examine AI-specific moderators. The results indicate that privacy concerns are associated with the functional process of using AI devices, to which the user's and AI's roles belong, and that trust is related to the psychological judgment involved in using AI devices, to which the user's ability in the context of adopting AI-based technology belongs.
For practical implications, first, the results of this study show that individual factors such as role clarity, motivation and ability are important for enhancing the acceptance of AI. Therefore, AI device developers need to make users perceive a high level of role clarity, motivation and ability, for example, by designing user interfaces that make the user's and the AI's roles easy to understand. Second, the results show that privacy concerns are related to the functional process of using AI devices, in which the user's and AI's roles are embedded. Therefore, AI device operators need to make users perceive a high level of trust. For example, it would be a good idea to make the privacy process explicit in the role play between users and AIs and to allow various forms of communication (e.g. text, pictures, voice and video) between users and AIs.
Based on these results, the present study offers several insights into user acceptance of AI. However, the following limitations should be acknowledged. First, the present study collected responses from users in South Korea, so national cultural issues may exist in the research context. Future studies should re-test the model in other countries to assure the reliability of these results. Second, as all variables were measured simultaneously, causal relationships cannot be established with certainty. Although the survey questions were presented in the reverse order of the analysis model to mitigate this issue, reverse causal relationships between variables remain possible. Therefore, future studies need to consider longitudinal designs. Finally, this study used role clarity, motivation and ability as individual factors and explored privacy concerns and trust as AI-specific moderators. Considering the characteristics of AI, however, future studies may find other individual factors and other moderating factors. For example, locus of control may be considered as another individual factor, and the interactivity of AI may be considered as a moderating factor.