Hope, tolerance and empathy: employees' emotions when using an AI-enabled chatbot in a digitalised workplace

Purpose – Information Systems research on emotions in relation to using technology largely holds essentialist assumptions about emotions, focuses on negative emotions and treats technology as a token or as a black box, which hinders an in-depth understanding of distinctions in the emotional experience of using artificial intelligence (AI) technology in context. This research focuses on understanding employees' emotional experiences of using an AI chatbot as a specific type of AI system that learns from how it is used and is conversational, displaying a social presence to users. The research questions how and why employees experience emotions when using an AI chatbot, and how these emotions impact its use.

Design/methodology/approach – An interpretive case study approach and an inductive analysis were adopted for this study. Data were collected through interviews, document review and observation of use.

Findings – The study found that employee appraisals of chatbots were influenced by the form and functional design of the AI chatbot technology and its organisational and social context, resulting in a wider repertoire of appraisals and multiple emotions. In addition to positive and negative emotions, users experienced connection emotions. The findings show that the existence of multiple emotions can encourage continued use of an AI chatbot.

Originality/value – This research extends information systems literature on emotions by focusing on the lived experiences of employees in their actual use of an AI chatbot, while considering its characteristics and its organisational and social context. The findings inform the emerging literature on AI.

Hence, we focused on reviewing articles that represent the IS field in nine leading journals. We selected the journal Information Technology and People because its published mission is to understand the implications of IT for people in society and in their daily work in organisations (Information Technology and People, 2021). We then selected eight journals identified by the Association of Information Systems as representative of the IS field (Fitzgerald et al., 2019; Sarker et al., 2019). These were the European Journal of Information Systems, the Information Systems Journal, Information Systems Research, the Journal of the Association for Information Systems, the Journal of Information Technology, the Journal of Management Information Systems, the Journal of Strategic Information Systems and MIS Quarterly. Our search covered publications from 1982 to 2021 and returned 134 articles. Following the recommendations made by Paré et al. (2015), we set our inclusion criteria as: (1) include articles that report on an empirical study; and (2) include articles that examine an organisational context. The first criterion excluded research commentaries, literature reviews and agenda-setting papers; the second criterion excluded research on public and consumer use of Twitter, Facebook, blogs, wikis and other social media, in addition to excluding research on consumer use of websites and e-commerce services. This resulted in 43 papers that were both empirical and organisational (i.e. examining emotions related to IS in organisations) being selected for analysis. Our analysis of these articles covered the research method, the study of IS/IT, the identification of a specific IS, the nature of IS use and the types of emotions studied. The review showed that the IS literature on emotions in organisations has maintained a strong bias towards quantitative research, negative emotions and intentions to use, and the majority of studies treated IS and IT as either a token or a black box, as depicted in the figures below.


Introduction
Workplaces are facing a new wave of digitalisation in which artificial intelligence (AI) technology is being implemented at speed to automate a variety of work processes. The AI chatbot is one type of AI application that is increasingly being adopted in business (Bavaresco et al., 2020; Johannsen et al., 2021). Indeed, the global chatbot market is expected to be worth around $6 billion by 2023 (Market Research Future, 2019). From an organisational perspective, chatbots can be externally facing (interacting with consumers) or internally facing (interacting with employees). Externally facing chatbots can be used in marketing, advertising, customer services, e-commerce, healthcare and education, among other settings. They provide a wide range of services to customers, from finding information to initiating transactions, and from giving advice to guiding learning. These externally facing chatbots are often referred to as virtual customer assistants. In contrast, internally facing chatbots provide in-organisation services to employees to help them find information inside and outside the organisation, perform tasks and navigate in-organisation systems and documents. They are commonly referred to as virtual enterprise assistants (Gkinko and Elbanna, 2020a). Given that 70% of white-collar workers are expected to interact with AI chatbots on a daily basis by 2022 (Goasduff, 2019), this study focuses on internally facing AI chatbots. This class of AI applications holds the potential for improving and accelerating workflow and increasing productivity by providing quick and easy access to various business processes and information at a low cost (Johannsen et al., 2021; Przegalinska et al., 2019).
They are also argued to contribute to the creation of a friendlier workplace with better and more satisfying experiences for employees (Panetta, 2020), and to appeal to the digital native generations who are now entering the workplace (Goasduff, 2019; Toader et al., 2020).
Research on workplace AI chatbots is in its infancy, despite their rapid market growth and potential (Gkinko and Elbanna, 2021). An AI chatbot is a distinct class of information system that operates autonomously and proactively, learns from and adapts to users, reacts to the environment and communicates with users by using natural language processing and machine learning; hence, it can have personalised, intelligent and human-like conversations with users (Moussawi et al., 2021). The debate surrounding AI technology in the workplace focuses on its capacity to replace humans. However, little is known about employees' emotional experiences of and reactions to using this technology (Johannsen et al., 2021; Maedche et al., 2019; Meyer von Wolff et al., 2019; Moussawi et al., 2021). This is despite the key role of emotions in the workplace and in the adoption, implementation, use and continued use of technology (McGrath, 2006). Therefore, this study aims to examine users' emotional experiences in their actual use of an AI chatbot in the workplace.
Our motivation for studying the actual use of an AI chatbot in the workplace and employees' lived experience of emotions responds to scholarly calls for researchers to move beyond essentialist assumptions and the researching of intentions to examine actual use in context (De Guinea and Markus, 2009). Indeed, information systems (IS) research has disproportionately relied on positivist methods, which assume that emotions are an intrinsic, fixed psychological state and that respondents are aware of, and can directly report on, emotions that are pre-identified and hypothesised by researchers. These essentialist assumptions have been refuted by constructionist and phenomenologist scholars, who argue that emotions are lived experiences and not solely a psychological state and that the experience of emotions (in terms of feeling, expression and behaviour) is not fixed. They strongly advocate a qualitative approach to understanding the experience of emotions (Denzin, 2017; Fineman, 2010). They argue that "emotions do not surface ready-made from the depths of the individual soul" but "to different degrees . . . lie at the intersection between individual and culture" (Gabriel, 1998, p. 311), where people's experiences of particular objects or events are interpreted in a socially and culturally laden manner (Stets, 2006). Hence, constructionists and phenomenologists emphasise that "emotionality is a social act lodged in the social situation . . . [and that] lived experience is the unit of analysis" (Denzin, 1985, p. 224). Accordingly, it is through asking about an experience itself that researchers can gain access to the underlying emotions (Gill and Burrow, 2018). Constructionist scholars also argue that the qualitative texture of emotions, revealed through interpretive approaches, could provide a close understanding of the experience of emotions by paying attention to the context as well as the stimulus (Fineman, 2005).
This is especially important for studies on the use of technology in organisations, where individuals are influenced by the task, technological and organisational contexts (De Guinea and Markus, 2009).
This study not only takes the organisational context into consideration but also adopts a sociotechnical approach that takes the characteristics of AI chatbots seriously (Elbanna, 2016). This responds to calls urging IS scholars not to treat technology as a "token" (Sarker et al., 2019) or a "black box" (Orlikowski and Iacono, 2001). Indeed, in examining users' emotions relating to the use of information technology (IT) in organisations, positivist studies have treated IT either as a token opportunity for research, where no particular technology is specified, or as a black box, with little consideration of the specific characteristics and features of the technology in context.
Against this backdrop, this study questions how and why employees experience emotions when using an AI chatbot, and how these emotions impact their use of the chatbot. To answer the research questions, we adopted an interpretive approach, collected qualitative data from a large international organisation through interviews, document review and observation of use, and analysed the data inductively. The findings revealed that employees' emotions when using AI chatbots arose from a wider repertoire of appraisals than the two previously identified in the use of traditional systems, namely goal achievement and control over the expected outcomes (Beaudry and Pinsonneault, 2005, 2010; Stein et al., 2015). The findings also uncovered that employees experienced distinctive emotions of connection that stemmed from the form and functional design of the AI chatbot. In that regard, users interpreted the functional characteristics of machine learning as human-like learning and felt responsible for educating the chatbot. The findings highlight that the existence of multiple appraisals and, hence, multiple emotions, where emotions offset or enhance each other, propels users to continue using the chatbot. Excitement, hope and playfulness, in addition to the connection emotion of empathy, led to tolerance and the continued use of the chatbot in a forgiving way, in spite of the frustration experienced when the chatbot provided wrong results. It was only when users experienced solely strong negative emotions that they stopped using the chatbot. The study contributes to the literature in several ways. First, it extends the IS literature on emotions to consider the unique characteristics of an AI chatbot and its organisational and social context.
Second, the study contributes to the nascent understanding of AI technology in organisations by identifying four categories of emotions experienced by users, including connection emotions as a new category that is distinctly related to AI chatbots. Third, it extends the study of emotions as lived experience by providing an in-depth qualitative interpretive understanding of emotions experienced by users in context. Fourth, the study goes beyond the dominant focus on intention to use to examine the actual use of an AI chatbot and, in particular, its continued use. In doing so, it adds to the nascent literature on the actual use of AI applications in the workplace.
Following this introduction, Section 2 of this paper discusses the distinct characteristics of AI chatbots. Section 3 reviews the information systems literature on emotions, and Section 4 demonstrates the research methodology. Section 5 presents the research findings. The final section discusses the findings and their theoretical and practical contributions and concludes the study by highlighting its limitations and opportunities for further research.

Chatbots: a new class of intelligent systems in the workplace
To be specific about the technology referred to in this paper, this section explains the technical capabilities and features of AI chatbots, how they differ from other organisational IS, and why examining emotions relating to the use of AI chatbots is integral to their development and continued use.

In this paper, we use the term AI chatbot to refer to any conversational agent or software application that engages with the user in spoken and/or written dialogue through AI technology, including natural language processing, machine learning and algorithmic decision making (Dale, 2019). This definition excludes dialogue systems known as interactive voice response or voice user interfaces because their functions are limited to users selecting from a fixed, pre-programmed menu of options, and they do not employ AI or algorithms for machine learning or natural language processing (McTear, 2017; Radziwill and Benton, 2017; Smutny and Schreiberova, 2020). Our definition also excludes AI chatbots embodied in physical robots connected to the Internet of Things (IoT). This study focuses on text-based AI chatbots because these currently represent the most commonly implemented chatbot technology in organisations.
AI chatbots can be grouped into three categories based on the user input: button/menu, keyword recognition and contextual/intention recognition. Button/menu chatbots profile users according to their selections and channel options that are likely to meet a user's needs or fit their previous choices; keyword-recognition chatbots recognise specific keywords and determine an appropriate response; and contextual/intention chatbots interpret the user's intention and infer the most appropriate answer. As they encounter different types of use over time, AI chatbots in all these categories learn and develop by improving the selections or menus offered to the user, recognising more keywords and their combinations or improving their understanding of language and expressions (Gupta et al., 2020; Sheth et al., 2019; Smutny and Schreiberova, 2020).
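The simplest of these categories, keyword recognition, can be illustrated with a minimal sketch. The keywords and canned responses below are hypothetical, not taken from the study; real chatbots combine such rules with learned language models.

```python
# Minimal keyword-recognition chatbot sketch (illustrative assumptions only).
RESPONSES = {
    "password": "To reset your password, visit the self-service portal.",
    "vpn": "VPN setup instructions are on the intranet IT page.",
}

FALLBACK = "Sorry, I did not understand that. I will open a ticket for you."

def reply(message: str) -> str:
    """Return the response for the first recognised keyword, else fall back."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK

print(reply("How do I change my password?"))  # matches the "password" keyword
```

A keyword table like this can grow as the bot is used, which is the simplest form of the learning-through-use described above.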
From a design perspective, AI chatbots, in particular the keyword-recognition and contextual/intention types, share some dimensions of form and function that together distinguish them from other types of technology and organisational systems. Form and function are two key dimensions of technology that are commonly used in design theory and in the conceptualisation of IT (Botzenhardt et al., 2016; Townsend et al., 2011). Form refers to certain user interfaces, hedonic features and aesthetics, and the symbolic meaning through which functions are delivered; function refers to the utility, functional characteristics, specifications and standard architecture (Rietz et al., 2019; Townsend et al., 2011). To provide a consistent conceptual understanding of AI chatbot technology, we use the key design components of form and function when discussing the characteristics of AI chatbots. This is in line with our aim of taking technology seriously when understanding users' emotional experiences of using it in context.
In this regard, the functions of AI chatbots include the use of natural language processing and machine learning to answer users' queries on demand, while the form includes the chatbot's conversational and interactive interface, social presence and embodiment. Table 1 summarises the characteristics of AI chatbots.
Regarding natural language processing, AI chatbots simulate human behaviour by allowing users to engage with these dialogue systems in a flexible and unstructured way, which is different from the structured and pre-specified way traditional information systems work (Ahmad et al., 2018). In this regard, chatbots generally consist of three components: natural language understanding, used to categorise the user's intent; dialogue management, used to determine the user's intent; and natural language generation, used to generate a response in natural language (Suta et al., 2020).
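The three components can be sketched as a simple pipeline. The intent labels, state handling and response templates below are illustrative assumptions, far simpler than the production natural language understanding and generation the cited literature describes.

```python
# Illustrative three-stage chatbot pipeline: understanding -> dialogue
# management -> generation. All labels and templates are hypothetical.

def understand(message: str) -> str:
    """Natural language understanding: classify the user's text into an intent."""
    return "reset_password" if "password" in message.lower() else "unknown"

def manage_dialogue(intent: str, state: dict) -> str:
    """Dialogue management: choose the next action from the intent and state."""
    state["last_intent"] = intent  # the evolving conversation state
    return "explain_reset" if intent == "reset_password" else "escalate"

def generate(action: str) -> str:
    """Natural language generation: render the chosen action as a reply."""
    templates = {
        "explain_reset": "You can reset your password via the self-service portal.",
        "escalate": "Let me connect you with a human colleague.",
    }
    return templates[action]

state: dict = {}
print(generate(manage_dialogue(understand("I forgot my password"), state)))
```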
Regarding machine learning, AI chatbots learn, adapt and evolve as they are used, continuously improving their functions (Mendling et al., 2018; Suta et al., 2020). An AI chatbot's ability to learn is facilitated by user interactions, because using the chatbot builds the data required to further train the chatbot's algorithms, improve its responses and debug its models. Therefore, unlike traditional enterprise systems, AI chatbots need to be used so they can continue to develop and evolve their functionality in the organisation.
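The way use itself builds training data can be sketched as follows; the names and data structures are hypothetical, not details from the study, but they show why an unused chatbot cannot improve.

```python
# Sketch: queries the bot cannot answer are logged as candidate training
# examples, so interaction itself produces the data for retraining.

def handle(message: str, known_answers: dict, training_log: list) -> str:
    """Answer if known; otherwise log the query for future (re)training."""
    answer = known_answers.get(message.lower().strip())
    if answer is None:
        training_log.append({"text": message, "label": None})  # awaiting labelling
        return "I don't know that yet, but I'm learning."
    return answer

known_answers = {"reset password": "Use the self-service portal."}
training_log = []
print(handle("Reset password", known_answers, training_log))
print(handle("unlock account", known_answers, training_log))
print(len(training_log))  # one unanswered query captured as training data
```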
Machine learning distinguishes AI chatbots from other enterprise systems, as it puts use at the centre of AI functioning and the continuous improvement of its operation. In addition, their form further distinguishes them from other (traditional and AI) systems. Their interactive communication and social capabilities incorporate human characteristics and imitate habitual human-to-human communication (Chaves and Gerosa, 2021; Gnewuch et al., 2017). This engagement in conversation with users, with its demonstration of the topic discussed and of the evolving context and flow, could be reactive, proactive or autonomous, depending on user inputs and changes in the environment (Meyer von Wolff et al., 2019).
AI chatbots also exhibit social presence, aiming to create a feeling of togetherness and of living in the same world as their users, which may influence users' behaviour and trigger their responses (Toader et al., 2020). Social presence includes human-like appearance, language style, personality, interactivity and assumed agency (Gnewuch et al., 2017; Nass and Moon, 2000). Moreover, an AI chatbot could be physically embodied in a robot or virtually embodied as a face, icon or facial expressions, with or without animation (Araujo, 2018; Sheehan et al., 2020). A chatbot's social presence and virtual embodiment produce human likenesses, known as anthropomorphic features. It is argued that anthropomorphic features provide experiential value for users (Hoyer et al., 2020), which may lead to users developing social and emotional bonds with the AI technology (Derrick et al., 2013; Rietz et al., 2019).

Emotions in information systems research
Scholars agree that people experience emotions and make judgements about any technology, and that these are essential factors for the adoption and consequent use of new technologies (Perlusz, 2004; Straub, 2009). Stam and Stanton (2010), for example, found that employees' responses to new technology are derived from their emotional experiences with the technology. Emotions are mental and experiential states directed towards an object (e.g. a technology, a person or an event). Some scholars advocate an inner view of emotions, where individuals develop emotions towards an object independently of other influences; others argue that social, political and cultural environments play a role in the constitution of emotions (Steinert and Roeser, 2020). They maintain that emotions cannot be fully understood in isolation from their social context (Steinert and Roeser, 2020, p. 299). In this regard, Malin (2014) argues that "neither technology nor emotions exist in a vacuum" and that emotions relating to technology are shaped through public and media discourse and the celebrations or denunciations that surround the technology at the time (Malin, 2014, p. 11).
Emotions have evaluative, cognitive and motivational components (Mulligan and Scherer, 2012). Regarding the evaluative component, emotions involve, and are triggered by, at least one appraisal of the event, person or technology that they are directed towards. This appraisal links emotions to what people care about and value. Appraisals may differ from one person to another and from one technology to another. In IS, studies suggest that there are two types of appraisal: goal achievement and control over the expected outcomes (Beaudry and Pinsonneault, 2005, 2010). The cognitive component of an emotion refers to the object of the emotion (given that emotions are directed towards something), while the motivational component identifies the action and consequence.
In addition, scholars recognise that emotions can be positive (e.g. contentment or happiness) or negative (e.g. anger or fear) (Laros and Steenkamp, 2005). They propose that positive emotions are as important as negative emotions in determining behaviour and that, even though negative emotions can pose problems for the adoption of new technologies, positive emotions may counterbalance the undesirable consequences of negative effects, including stress, technology anxiety or depression (Perlusz, 2004). In the domains of music, advertising and film, scholars identify that positive and negative emotions can be experienced simultaneously (Andrade and Cohen, 2007; Hunter et al., 2008; Steinert and Roeser, 2020). Research suggests that goal conflict and multiple values and appraisals are reasons for experiencing mixed emotions (Berrios et al., 2015). Scholars also argue that mixed emotions could be experienced by users in situations that are highly ambiguous in terms of the outcome, but as an event progresses and the ambiguity reduces, emotions become clarified on either the positive or the negative side (Folkman and Lazarus, 1985).
While the qualitative understanding of emotions is key to unravelling their sources, dynamics and outcomes, our scoping and critical review of the IS literature (Paré et al., 2015; Paré and Kitsiou, 2017; Tate et al., 2015), as detailed in Appendix 1, reveals that IS research disproportionately focuses on examining negative emotions (Agogo and Hess, 2018). For example, a significant number of studies examine anxiety (D'Arcy et al., 2014; Srivastava et al., 2015; Venkatesh, 2000; Wang et al., 2017), fear (Boss et al., 2015; Johnston and Warkentin, 2010), stress (Agogo and Hess, 2018; Galluch et al., 2015; Pirkkalainen et al., 2019; Stich et al., 2019; Tarafdar et al., 2020), worry (Turel, 2016) and regret (Keil et al., 2018). While valuable, this bias towards negative emotions misses the opportunity to recognise the possibility of users experiencing positive emotions that drive their continuous use of technology (Qahri-Saremi and Turel, 2020; Stein et al., 2015). It also undermines the multifaceted aspects of emotions (Giaever, 2009) and the possibility of users experiencing mixed negative and positive emotions either simultaneously or consecutively.
A few IS studies maintain a balanced perspective that considers both negative and positive emotions. In this regard, Beaudry and Pinsonneault (2010) suggest that there are two types of appraisals: goal achievement and control over the expected outcomes (Beaudry and Pinsonneault, 2005, 2010). Accordingly, they develop a framework to propose that one of four distinct types of emotions may be triggered when using a new technology: achievement, challenge, loss or deterrence. Achievement emotions are triggered by appraisals of opportunity and low control; challenge emotions are triggered by appraisals of opportunity and high control; loss emotions are triggered by appraisals of threat and low control; and deterrence emotions are activated by appraisals of threat and high control. This framework was developed and tested quantitatively, with only one emotion selected to represent each category of emotions, and positive and negative emotions considered as opposite ends of a bipolar continuum of emotional experience. Although Beaudry and Pinsonneault's (2010) framework is mainly used in quantitative IS research, Stein et al. (2015) use it in a qualitative study. Their findings show that, based on the two appraisals identified by Beaudry and Pinsonneault (2010), users experience mixed or ambivalent emotions, where they feel more than one emotion simultaneously.
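The four categories in Beaudry and Pinsonneault's (2010) framework can be restated as a lookup from the two appraisal dimensions to an emotion category; this is simply a tabular rendering of the sentences above, not code from any of the cited studies.

```python
# Beaudry and Pinsonneault's (2010) 2x2 framework as a lookup table:
# (appraisal, perceived control) -> emotion category.
EMOTION_CATEGORY = {
    ("opportunity", "low"):  "achievement",
    ("opportunity", "high"): "challenge",
    ("threat", "low"):       "loss",
    ("threat", "high"):      "deterrence",
}

print(EMOTION_CATEGORY[("threat", "high")])  # deterrence
```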
Given that IS represents a broad category, and emotions are directed towards a particular subject or object, it is important to identify and qualify the technology under study. In this regard, Steinert and Roeser (2020, p. 303) stress that when studying emotions associated with a technology, "a distinction needs to be made between (1) characteristics of a technology as such, (2) the use of the technology, and (3) the characteristics of how the technology is implemented, like the decision-making process." Nevertheless, most of the IS research on emotions, as detailed in Appendix 1, treats the technology under study as either a token or a black box. In the token treatment, research considers emotions towards organisational use of IT in general, without specifying the type of technology. For example, Venkatesh (2000) examines the general use of computers at work, and Beaudry and Pinsonneault (2010) examine emotions relating to the use of IT in general in organisations, without considering the specifics of a technology or system. When a particular technology is examined, it is treated as a black box, where little attention is paid to the characteristics and features of the technology or its organisational context. For example, Stich et al. (2019) examine emotions relating to the use of email at work but do not specify the characteristics or features of the email system or the organisational context. Similarly, Chen et al. (2020) examine the impact of a customer relationship management system on employees' adoptive behaviour without addressing the specifics of the technology at hand or the emotions involved. By generalising IT use or treating it as a black box, opportunities to learn about new classes of technology in the workplace are missed and the role played by particular characteristics of technology is ignored.
As IS research on emotions has maintained a strong bias towards quantitative methods (Appendix 1, Figure A1), it mostly uses the technology acceptance model and its variations and hence limits its focus to the examination of intentions to use a particular technology. In this regard, Anderson and Agarwal (2011) quantitatively explore the impact of emotions on the individual's willingness to disclose their health information, and Johnston and Warkentin (2010) and Boss et al. (2015) examine the impact of fear appeals on users' intended information security behaviour. Furthermore, Wang et al. (2014) study the effect of emoticons on the acceptance of feedback in computer-mediated communications, and Keil et al. (2018) study the impact of regret on whistle-blowing intentions regarding the privacy of health information.
Although the quantitative measurement of causes and consequences is valuable, it has been criticised for its inability to uncover the complexity and dynamism of emotions at work (Giaever and Smollan, 2015) and for ignoring the contextual and organisational aspects of emotions as an experience (Fineman, 2005; Giaever, 2009). In their critique of this approach, De Guinea and Markus (2009) urge researchers to go beyond technology acceptance models and intentions of using technology to study actual and continuous use and, in particular, the impact of emotions on that use. A qualitative perspective could allow for an understanding of users' emotional experiences and consider the context of use. This is especially important in the case of AI chatbots: as explained in Section 2, this technology has distinct characteristics that cannot be ignored in an examination of users' emotions. Indeed, the personalised, intelligent and human-like behaviour of AI chatbots calls for an in-depth understanding of users' emotions (Moussawi et al., 2021). A qualitative approach may also reveal aspects of the context of a chatbot's configuration and use that could be missed by other approaches. Therefore, our research adopts a qualitative inductive approach to examining emotions related to AI chatbots in the workplace in order to keep an open and exploratory perspective that accounts for the technology and its context of use.

Methodology

Research site

Omilia (a pseudonym) is a large international organisation operating in the financial sector. It has developed an internal AI chatbot for its employees to provide services similar to an IT helpdesk. The chatbot provides a text-based conversational interface that allows employees to interact with (enquire about, search and use) a wide range of IT-related information and support that would ordinarily be provided by a traditional IT helpdesk.
These services include, among others, changing a password, dealing with technical queries and searching for organisational forms, policies, processes, contact information and links to training. The chatbot was implemented with the objective of enabling users to be more self-sufficient and, in turn, improving the efficiency and cost-effectiveness of the IT helpdesk while providing a seamless experience for employees. The functionality of the AI chatbot was further extended to translation, reporting and other services. The chatbot was developed using the Microsoft Bot Framework and Microsoft Azure Cognitive Services. Implementing the cognitive services was initially challenging for the project team; however, they soon became adept at using them to expand the functionalities of the AI chatbot. In addition to using the cognitive services, which helped the chatbot to learn continuously from users' input, the team implemented supervised learning so they could review and approve the suggestions they received from the cognitive services. The process design of the AI chatbot allowed the conversation to flow and the bot to refer users to a human operator (through the automatic opening of a request ticket) when the conversation failed.
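The review-and-approve step described above can be sketched as a human-in-the-loop filter: automatically suggested question-answer pairs enter the chatbot's knowledge base only once a reviewer approves them. The data structures below are illustrative assumptions, not details of Omilia's implementation.

```python
# Human-in-the-loop supervised-learning sketch (hypothetical data).
knowledge_base = {"reset password": "Use the self-service portal."}

# Suggestions produced by the learning service, each flagged by a reviewer.
suggestions = [
    {"question": "unlock account", "answer": "Call the IT helpdesk.", "approved": True},
    {"question": "order pizza", "answer": "Check the canteen menu.", "approved": False},
]

for s in suggestions:
    if s["approved"]:  # only reviewer-approved pairs are learned
        knowledge_base[s["question"]] = s["answer"]

print(sorted(knowledge_base))  # the rejected suggestion is not added
```

This design choice trades learning speed for control: the chatbot still improves from use, but no unreviewed content reaches employees.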

Methods and data collection
The study adopted an interpretive approach. This approach provided an in-depth understanding of participants' experiences by engaging with them in a natural setting (Creswell and Creswell, 2018). It was suitable for the exploratory nature of this research and the aim of examining employees' lived experience and gaining a deeper understanding of their emotions when using an AI chatbot in the workplace. Interpretivism is "based on a life-world ontology which argues that all observation is theory- and value-laden and that investigation of the social world is not, and cannot be, the pursuit of detached objective truth" (Leitch et al., 2010, p. 69). The adoption of an interpretive perspective to the study of emotion has long been advocated for gaining an in-depth understanding and in situ accounts of emotions "embedded in 'natural', or everyday, occurrences and events, expressed through participants' interactions, words, recollections . . . or other symbols of feeling or emotion" (Denzin, 2017; Fineman, 2005, p. 8; Kagan, 2007). In his detailed comparisons between the application of essentialist and interpretive approaches to studying emotions in the workplace, Fineman (2005, p. 14) concludes that interpretive approaches are "more likely to do justice to the nature of subjectivity [which is] a fundamental plank in any robust conceptualization of emotion". He also argues that the experiential realities of feeling and emotion may sometimes be difficult to express in a survey; hence, interpretive approaches "provide greater bandwidth for expressing emotion's ambiguities and forms" (Fineman, 2005, p. 14). This view resonates with Denzin's (1990, 2017) argument that it is impossible to study emotions without having qualitative accounts expressed by the participants themselves.
Denzin (1990) makes the strong statement that "Emotions must be studied as lived experience" and argues that "any interpretation of emotion must be judged by its ability to bring emotional experiences alive, and its ability to produce understanding of the experience that have been described" (Denzin, 1985, 1990). We began collecting data for this research in December 2019, after gaining access to the development team and the chatbot users, and continued until April 2021. We collected the data through go-along interviews and document reviews. The go-along interview method combines interviews with observations, which allows a researcher to "walk through" an interviewee's experience and, in doing so, assess aspects of their lived experience in situ (Carpiano, 2009; Kusenbach, 2003). This was consistent with the phenomenological approach that we adopted to understand employees' emotional experiences in relation to using the chatbot (Denzin, 2017). Hence, we encouraged users to share their screens and exchange screenshots while conversing with us. The documents we reviewed included project documents, internal corporate newsletters, organisational announcements and emails, and internal links. We were granted access to these documents on the basis of confidentiality. These documents helped us understand the context of the chatbot implementation, its introduction and adoption and its technical features and development.
Given that the purpose of our study was not to test hypotheses, we inductively explored the emotions associated with the use of the chatbot in the workplace over three rounds of data collection and analysis, as explained in Section 4.3 and as shown in Table 2. We conducted 41 interviews (39 by online call and two by email) with unique participants over the three rounds. When we needed participants to clarify or expand on points made during the interviews, we followed up with emails and further communications. Due to COVID-19 restrictions, we used online meeting platforms. In the first round of interviews, we interviewed the project manager, the product manager and two software developers who were involved in the implementation of the chatbot. These interviews gave us an understanding of the chatbot's features, implementation decisions, technical aspects, organisational objectives, project plan and future vision. This was followed by two rounds of interviews with chatbot users to understand their lived experiences, how they used the chatbot in their day-to-day activities, how they experienced emotions and how their experiences impacted their continuous use. The users were randomly selected from different teams, and all agreed to participate in the study. Their ages ranged from the mid-twenties to the fifties, and they all had a university degree (see Appendix 2 for the participant demographics). The second round of interviews took place between July 2020 and September 2020, and the third round took place between March 2021 and April 2021.
The interviews were semi-structured and conversational in nature. The online interviews lasted from 20 minutes to 1 hour. We were familiar with the organisation and its employees through long-term engagement; hence, we were not complete strangers to the interviewees. As recommended by Myers and Newman (2007), the interviews were largely conversational, which helped the interviewees feel sufficiently at ease to express their feelings. Consistent with other researchers, we found that the online interview format reduced lengthy introductions and closures of the meeting and avoided typical office interruptions (Gray et al., 2020; Salmons, 2014). The personal connection over an online meeting platform provided an intimate space for interviewer-interviewee interaction, focused the attention of interviewees on the subject and allowed participants to openly share their experiences with us as researchers. Each individual interview lasted until a saturation point was reached, which avoided online meeting fatigue (Epstein, 2020; Toney et al., 2021).

Data analysis
This study is part of a wider research programme on the use of AI chatbots in organisations. All the interviews were audio-recorded and transcribed verbatim. To maintain the confidentiality of the organisation and the participants, we anonymised all the data. We conducted the data analysis and data collection in tandem, as widely recommended by inductive researchers, in order to benefit from the recursive iteration between empirical data and theoretical understanding (Gehman et al., 2018; Gioia et al., 2013; Urquhart et al., 2010). This reflects the hermeneutical circle, which is a foundational principle of interpretive research (Klein and Myers, 1999). We followed Grodal et al.'s (2020) recommendations for data collection and analysis. Table 2 shows the contribution of each round of data collection to the progress of the analysis and the development of analytical categories. The first round of interviews and document reviews oriented us to the chatbot's conception and purpose, organisational context and technical choices. In the second round of interviews and analysis, we generated initial categories of emotions. In the third round, we collected more data and conducted further analysis to expand, refine and add new categories and also link the categories to their underlying appraisals. Finally, we re-analysed the data to stabilise the categories and achieve theoretical integration. Regarding the inductive analysis, we followed Gioia et al.'s (2013) recommendations, as shown in the data structure in Figure 1. In the first order of the analysis, excerpts from the interviews served as a basis for identifying the emerging descriptive codes. In the second order of the analysis, we grouped the emotions experienced into 20 types of emotions. In the third order, we aggregated the types into four categories of emotions.
We further examined the interactions between the categories and the underlying appraisals and identified mixed emotions as a driver for continuous use. As we remained sensitive and open to the data and possible new categories, we unravelled emotional connection as a new category and uncovered a wider repertoire of appraisals that influenced emotions beyond the two previously found in IS research. In all the categories, we identified the technological and organisational elements involved, as recommended by IS scholars (De Guinea and Markus, 2009; Orlikowski and Iacono, 2001; Sarker et al., 2019). To verify the reliability of the coding, qualitative inter-rater assessments were conducted at all stages of the coding (Campbell et al., 2013).
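Inter-rater agreement of the kind mentioned above can be quantified in several ways; for two coders, a common statistic is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is a generic illustration of that statistic, not the study's actual procedure, and the example labels are invented.

```python
# Generic illustration of Cohen's kappa for two coders' label sequences.
# The labels used in the tests are invented examples, not the study's codes.
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two equal-length sequences of labels."""
    assert len(rater_a) == len(rater_b), "raters must code the same items"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 1 indicates perfect agreement, while 0 indicates agreement no better than chance; disagreements are typically discussed and reconciled before the next coding round.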

Research findings
This section focuses on the emotions experienced by users in relation to their chatbot use. It shows how these emotions were influenced by the AI chatbot's function and form characteristics and the users' organisational context. Although the users experienced several different emotions or mixed emotions while using the chatbot, for the clarity of the analysis, we present them separately in the sub-sections that follow.

Connection emotions
Employees expressed emotions of empathy, forgiveness, compassion, fairness and kindness towards the chatbot, which we labelled connection emotions. These connection emotions surfaced when users appraised the AI chatbot on the basis of its social presence and anthropomorphic features. They experienced empathy when they observed the chatbot "trying to help" them find answers and solutions to their queries and assist them to "the best of its ability". Users noticed the little icon of the chatbot working and expressed "feeling sorry" for the chatbot when it did not return the right answers.
I'm not mad at the bot, I just feel sorry for the bot. It keeps trying. Interviewee 5 Users paid attention to the chatbot's anthropomorphic interface and the different messages reminding users that the chatbot is learning and asking them to teach it. They felt they were part of the interaction and should therefore share responsibility for the incorrect answers. The following quotes from users encapsulate this view: I mean, I have no frustration, but I thought maybe I was not typing the right way that it could give the information. Interviewee 8 Users were also considerate in their interactions with the chatbot and made efforts not to confuse it. They tried to interact with it in a simple way by typing more direct questions and avoiding long and complicated sentences. One user expressed this as follows: I'm convinced that the more you write, the worse it is. I try to be concise, you know, not to confuse the bot. Interviewee 36 Furthermore, users appraised the organisational AI chatbot in comparison with similar commercial technology. In this regard, they observed the chatbot's functional limitations but tried to be "fair" and "kind" to it, given the limitations associated with the organisational context. The following quote demonstrates this: I don't expect it's going to come up to Alexa level. I know what she can do, and I know plenty of people who use Alexa a lot and the chatbot just seems very basic. Basic! That's a good word for what I want to say. I was gonna say brittle but that seems unkind and possibly unfair to the bot.
Interviewee 35 In addition, employees expressed feelings of forgiveness and avoided blaming the chatbot for providing the wrong answers. They were tolerant of errors, which they felt were part of the interaction they had entered into "with the chatbot", and instead blamed their own behaviour.
The following quote provides an example of this view: I think because I go somewhere else to look for the information, I didn't want to let's say 'no it wasn't helpful' because maybe it was me not trying again and trying to rephrase it rather than 'OK, I'm gonna check this somewhere else or ask someone else'. [. . .] I think it could be a combination of both, it could be me that I didn't raise it correctly or the chatbot that wasn't useful. Interviewee 24 As users observed and appraised the chatbot's anthropomorphic features and social characteristics, they began to view it as an embodied entity. They found abandoning it inappropriate.
I think it's cute, the little thing, I think it's kind of cute. So no, I would definitely not do that [abandon using it], and you know it's a sort of simplistic little thing, but I think it's cute, the little eyes, so no issue with that. Interviewee 35 Users also paid attention to the chatbot interface representation and expressed preferences to change its name and icon for a more personalised experience. This is illustrated by the following quotes, in which the users discussed the appearance of the chatbot icon on their screen: Would like to have a background with flowers, change the face of the robot to whatever. [. . .] When you go to the online version from Outlook, you can personalise, you can have flowers in the background or you can have some Star Wars in the background . . . it is kind of also say 'OK, this is now My robot'. Interviewee 37

Contentment emotions
Users experienced satisfaction, happiness, pleasure and convenience when using the chatbot. These contentment emotions were experienced on the basis of various appraisals. One appraisal was evaluating the final outcome of the required task. However, these positive emotions were replaced by negative emotions of frustration when a task was not achieved. Another appraisal was based on the process of finding answers and getting closer to achieving a task. In this appraisal, users were satisfied when they found the chatbot helping them to achieve their task or progressing towards achieving it. Users also appraised the chatbot based on convenience and in relation to alternative ways of finding answers to their query. In this regard, users compared it to searching documents themselves or to finding time to speak to a person in the company. The following quote is an example of this appraisal.
I think it's just convenient to talk to a bot where you can just Skype [connect online], whereas when talking to a person you have to find time and then call the person and explain. I would still feel that explaining to a person is easier than to a chatbot, but . . . convenience is better with the bot. Interviewee 9 Users continued to experience these positive emotions even when the chatbot did not provide a direct answer to their query but only offered a set of options. In this regard, users appraised the functional design feature that gave them the opportunity to select from a range of options suggested by the chatbot as helpful for making progress towards achieving their task. In addition, the design feature that allowed the chatbot to open a ticket referring users to the IT helpdesk was evaluated as positive by most users; it was considered to bring them closer to achieving their goal, giving them a logical form of assistance in the process of finding information. One user expressed this view as follows: It didn't give me the information I wanted, but what I did like when it got to the end and it couldn't help me, it said 'do you want to open a ticket?' That aspect was very useful. Interviewee 12 Users' satisfaction was also based on their appraisal of the chatbot as a way to avoid the negative feelings arising from alternatives such as asking a colleague. They found the chatbot to be a discreet option for finding information on their own, avoiding the negative emotion of "embarrassment" that they experienced when they had to ask colleagues in the office. In this regard, users were also influenced by the chatbot's social presence, as many of them referred to the chatbot as "he". An example of this view is provided in the interview extract below: For sure he helped me to get access to a few of the tools, I found quite nice instructions for it. [. . .] For sure it is easier and clearer for me to find the things and not to ask people.
Interviewee 39 New joiners were also "pleased" to have a way of obtaining information and finding answers to their questions without the "hassle" and "shame" of asking colleagues too many questions.
The following quotation eloquently expresses this view: It was smart enough I would say [. . .] I would say very useful tool, especially for new joiners. Instead of asking questions around, just go there and see if it helps or not. Interviewee 40

Amusement emotions
Users experienced excitement, curiosity, hope, anticipation, escapism and playfulness, which we categorised as amusement emotions. These emotions were experienced based on several appraisals of the AI chatbot, including the novelty of the technology, the organisational and societal rhetoric of its progressiveness and revolutionary nature, and its ability to learn, stemming from its machine-learning function. These emotions were also experienced as users appraised the AI chatbot on the basis of its entertainment and escapism value, which stemmed from its social presence and conversational interaction. The organisational and societal rhetoric and the general hype about the revolutionary and progressive nature of AI had created positive symbolic associations that influenced users' appraisals of the AI chatbot. For users, the AI chatbot symbolised advancement and a new generation of workplaces and employees (Rafaeli and Vilnai-Yavetz, 2004). Hence, they appraised it as part of the modern workplace, the digital revolution and the office of the future, which they were enthusiastic about being involved in and keen to be part of. The following quotes summarise these views: I just knew that it was kind of AI and because it was an AI, I just wanted to try it out, because we didn't have anything like that before and that was what actually drove me to just use the chatbot. Interviewee 5 I am excited. I would recommend, definitely. Interviewee 34 The conversational and interactive features of the AI chatbot were a source of novelty and excitement. Users' feelings of the AI chatbot's social presence and closeness were repeatedly expressed as a source of excitement and curiosity: I was more like curious from my side and the fact that I'm a little bit interested in AI [. . .] It's more exciting to try discussing it with the chatbot.
Interviewee 39 Although users sometimes experienced problems in using the chatbot, in terms of delayed or incorrect answers, their appraisal of the chatbot as a learning agent led to a commitment to continue using it in the hope of making it better for themselves in the future. Indeed, the chatbot's unique design characteristics and its clear description on the interface as a learning agent, with messages such as "I'm learning", "can you teach me", "train me to help you", triggered users' anticipation that the chatbot would learn from being used, and that this would lead to improvements in future. The following quotes reveal the anticipation and hope that drove this continued use: Well, I think it's fine. I mean, it's, I think, just a matter of time before it gets smarter and better. I mean the more training it has, it would definitely get better. Interviewee 10 I just think at least that one is getting better. It looks like he's learning with the user interaction. What I think I see now is that the context switching is working better. [. . .] then I saw that's working better, it's nice so there is definitely an improvement now. Interviewee 37 Other users appraised the chatbot's interactive characteristics and saw it as an opportunity for gaming and humour. The following quotes are from users who continuously used the chatbot, similar to a computer game, in an attempt to outperform it or play with it.
I was trying to minimise it and I switch to another page to see what will happen and such stuff, because I was a test engineer before for applications. So, I always try to crash it somehow. Interviewee 18 I gave it a try as soon as it was launched because I thought it was interesting. But yeah, it was just a funny thing to try, actually it gave me the answer. Interviewee 36 Users also appraised the chatbot as a tool for escapism and entertainment at work. They were anticipating additional customisation that would allow them to change its icon and instead use avatars and images that suited their mood on the day, and also to change its name to personalise it, so it felt like part of their personal belongings at work. The following quote is an example of users' hope for close personalisation of name and icon that would increase the chatbot's closeness and its value for escapism and entertainment.
I would prefer to make it personal. For example, I would give it my name not the name for everyone else. So, mine would be for example Eva or someones will be Matthew or something like that. [. . .] Exactly, as much customisation is there then it's better. For example, one day I would like to see an orc talking to me, next one the cyberwoman from the star Galactica or something like that [. . .] It would help me more for sure. It is something that is just fun and everyone from time to time has the time that he wants to spend on something not directly connected to the world. And for example, this is something nice. Interviewee 39

Frustration emotions
Users also experienced dissatisfaction, annoyance and frustration when using the AI chatbot. We categorised these emotions as frustration emotions. These emotions were experienced based on many appraisals, including accomplishment of a task, alignment with the task, the chatbot's intelligence and its speed of learning. Users experienced these emotions when the chatbot delayed the progression of a task; for example, when the suggested alternatives were not aligned with the task. While experiencing frustration and annoyance, users felt these emotions mixed with the connection and amusement emotions. Hence, users simultaneously appraised the AI chatbot as a creature that "is trying to help" and forgave its incorrect answers and even continued to use it in other searches and tasks. The connection and amusement emotions in these situations stemmed from the interface design messages that had phrases such as "I am smart", "help me to help you", "train me to help you" and "I am learning, do you want to teach me?" This behaviour is different from the negative emotions experienced in the use of traditional systems, where users feel anger and anxiety, so they either distance themselves from using the system, seek social support or vent by confrontation, complaining or expressing negative feelings to others (Beaudry and Pinsonneault, 2010). The following quote represents the mixed emotions of frustration and connection: Not upset, maybe just a little bit frustrated, because you are already facing a problem and you want this to work and then if it doesn't work then again to have to go and raise a ticket, filling in, so it also takes up a lot of your time. So, it's a little frustrating but in the end, it's more frustrating because it takes up a lot of your time in trying to find the answer and then it doesn't.
I mean if it could tell you from before that 'no, this is something that I can't answer', then it's much easier but of course it's robot so it tries to help you so it's not its fault, but yeah. It's not upsetting, it's just a little bit annoying sometimes. Interviewee 5 Users also felt disappointed when they appraised the AI chatbot against their expectation of AI technology and against their expectation of its learning process. Viewing it as an "intelligent agent", they wanted the chatbot to learn fast and progress to provide individualised personal assistance, where it could profile an individual's habits and communication style. This disappointment and annoyance were also mixed with faith in the technology, stemming from the positive hype surrounding it in the organisation and society, and hope that the AI chatbot would improve and was capable of providing some useful assistance at work. One user eloquently expressed this disappointment as follows: But when I go back and ask that same question again in a kind of generic way, it still doesn't know what I'm after and I think by now if it was learning from what people were asking it would be able to answer the question because I'm asking something which to me is quite clear, How do I change my Skype profile and it does give me a general article or topic about entitlements for Skype which is in the right way, but I've said something specific. I want to change my entitlements, you know, and I think it could, I shouldn't have to drill in so much. Maybe it'll get me there in the end, I haven't tried while we're talking, but I think it should be able to do some of the work for me. Interviewee 35 Users' appraisals of the chatbot were also influenced by its social presence, underlying machine-learning technology and the message on the design interface that invited users to participate in training the chatbot. Accordingly, users felt they had to play an active role in training the AI chatbot to enable it to function.
Specifically, they tried using different words and conversation styles to train the chatbot and improve it. The following quotes present examples of users' efforts to improve the chatbot's performance through continuous use: When I'm with a chatbot, I know this is a machine and the more I write, the more used it gets [the better it learns]. Interviewee 17 My expectations were not very high, honestly speaking. So, but yeah, somehow, I also wanted to use it to also help the people who implemented it, because without usage it cannot learn more. Interviewee 36 Users' feelings of disappointment were also mixed with the contentment feeling of confidence, stemming from the organisational arrangement of continuing with the staffed IT helpdesk and the AI chatbot's process design that referred users to the helpdesk when conversations failed. This gave them sufficient reassurance that they could continue using the chatbot in the confidence that difficult tasks would be escalated and transferred to a helpdesk attendant. It is interesting that users spontaneously appraised the helpdesk staff as "human" alternatives to the chatbot and did not refer to them as colleagues, as employees or by name. The following quotes provide examples of this mixed emotion and tendency: So, that way [AI chatbot auto referral of users to a manned helpdesk], it is good and the only problem for me is that, you know, sometimes, it doesn't understand you and then it becomes frustrating. [. . .] Yes, when it's a complicated issue, maybe it's better, because the chatbot doesn't solve every issue so it's good to have the human aspect as well. Interviewee 9 As the disappointment emotions were mixed with connection, amusement and, paradoxically, contentment emotions, some users started to make judgements about when to use the chatbot. The following quote from a user summarises this view.
I would say for the more common issues it's helpful but if the issue is very specific or not very common then you definitely need human help. Interviewee 10 While this behaviour allowed for continued use, it might restrict the AI chatbot's functionality in the long run. A few users experienced strong negative emotions which deterred them from continuing to use the AI chatbot. These users focussed on a single appraisal of the chatbot; however, the basis of this appraisal varied. They either appraised the AI chatbot against their high expectations of the technology, against its direct achievement of the end task, or against their objection to what they thought was the organisation's policy for its use. In these cases, users terminated their use of the chatbot entirely, not just for one task or one session but for all subsequent tasks. For example, the following employee stopped using the chatbot after his/her first attempt: Once I realised it didn't answer my question as I expected, I stopped using it. Because one negative experience, I think for such cases it prevents you from using it further. Interviewee 14 Another user expressed discontent in relation to the chatbot's logic and processes. They were frustrated at not being able to understand how it works and how it reaches results. This resulted in the user giving up on using the chatbot.
The chatbot was first saying 'Is it maybe one of these issues'? I said 'No'. Next, 'Is it maybe one of these issues'? 'No.' 'OK, let's create a ticket together.' And then you create a ticket and then the bot was also asking additional questions and then I didn't understand any more, I don't know, it's like, fed up with. Interviewee 11 These users also felt resentment towards what they thought was the organisation's policy of compulsory use of the AI chatbot. They objected to its use as a gateway to access the IT helpdesk and found it frustrating to be compelled to do so.

Discussion and contribution
This study focused on the use of AI chatbots in a digitally enhanced workplace. It aimed to answer the questions of how and why employees experience emotions when using an AI chatbot and how these emotions impact its use, through qualitative data collection and an inductive exploratory analysis. The findings show that employees had mixed emotional responses to the AI chatbot and that their emotions were underpinned by a range of appraisals. The mixed emotions included different types of connection, amusement, contentment and frustration emotions. The findings reveal that users' appraisals of the AI chatbot varied on the basis of its form and function, in addition to corporate policy, the organisational and societal rhetoric surrounding AI technology in general, and users' familiarity with similar commercial technology. The multiple appraisals led to multiple emotions arising (Berrios et al., 2015). These emotions were mixed, reinforcing and/or offsetting each other, which allowed for continued use. When the appraisal was fixated on one aspect that triggered negative emotions, whether disappointment with the performance of the AI chatbot or its process design, or resentment of corporate policy, employees terminated their use of the AI chatbot. The study unravels the emergence of a distinctive range of connection emotions that arise in employees' use of AI chatbots.
These findings expand research on emotions in technology use, as explained in the following sections.

AI chatbots: a repertoire of appraisals and mixed emotions
Unlike traditional organisational IS, which users have been reported to appraise on the basis of task completion and control over results (Beaudry and Pinsonneault, 2005, 2010; Stein et al., 2015), the AI chatbot in our research was appraised in relation to its design features, including its underlying machine-learning technology, its social presence and its anthropomorphic features. The functional characteristic of machine learning was also translated through the interface design into an anthropomorphic feature, with users perceiving the chatbot as a learning creature and feeling responsible for teaching it. The social presence of the chatbot, including its name, its icon and the interface notes that emphasised these features, also influenced users' appraisals. Users' appraisals of the chatbot were also influenced by the novelty of the technology and the surrounding organisational and societal rhetoric and hype. In addition, employees appraised the chatbot by comparing it with similar commercial systems and alternative options and the negative feelings these systems generated. Even when appraising the technology on the basis of how it dealt with a task, users considered not only the final accomplishment of the task but also the progression towards the goal of completing the task. This wide repertoire of appraisals gave rise to a range of mixed emotions, where emotions of connection, amusement, contentment and frustration were felt simultaneously, offsetting and/or reinforcing each other and encouraging users to continue using the chatbot, tolerating its mistakes in the hope that it would improve. This finding confirms Steinert and Roeser's (2020) assertion that it is important to consider multiple emotional experiences in the use of technology. Indeed, in our study, only a few users appraised the chatbot solely on the basis of the final accomplishment of a task.
While defining four categories of emotions arising from the use of an AI chatbot, the study also reveals the multiple emotions in each category. This extends previous research (e.g. Beaudry and Pinsonneault, 2005, 2010), which tended to examine only one emotion per category.
Revealing the multiple emotions arising in each category paves the way to further research to explore and measure the strength and dynamics of emotions within a category.
Our study shows that users experienced the contentment emotions of satisfaction, happiness, pleasure and convenience. However, users continued to experience these positive emotions even when the chatbot did not give them a direct answer in response to their query. Its process design allowed it to satisfy the users' goal by automatically opening a ticket to refer users to the IT helpdesk, which made users feel supported and encouraged them to continue using the chatbot. Users also experienced amusement emotions, including excitement, curiosity, hope, anticipation and playfulness. The experiences of excitement and curiosity were influenced by the interactive conversational form of the technology, its novelty, and the progressive organisational and societal rhetoric surrounding it.
Although machine learning is essentially a functional characteristic of the AI chatbot, employees perceived it as a human-like form of learning. The emotions felt by employees towards the AI chatbot as a learning agent mixed and blurred what to designers are straightforward, separate features of form and function. Hence, in many cases they felt an obligation to teach the chatbot. They also felt hopeful that the more they used it, the more the AI chatbot would learn and improve. These mixed emotions propelled users to continue their use, despite their frustration. Some users felt playful when using the AI chatbot and liked to test it and outperform it on occasions. Therefore, interestingly, the users' playfulness provided not only continued use but also multiple cases for the AI chatbot to learn from. In addition, the organisational symbolic narrative of AI as bringing about an advanced next-generation workplace infused curiosity and positive emotions of progress and achievement.
A few users experienced solely negative emotions that compelled them to stop using the AI chatbot after the first trial. These users appraised the chatbot against their high expectations of AI technology and considered it just another enterprise tool that hindered their task achievement. They were influenced by a popular rhetoric that AI technology can fully automate tasks and completely replace human intervention. They also strongly opposed what they took to be a company policy of compulsory use of the AI chatbot as a first source and a bridge to contacting a human operator, finding such a policy counterproductive and unnecessarily lengthy for their needs. Experiencing only the emotion of frustration led these users to abandon the chatbot after the first use and not to try it again. This is in line with the findings of previous studies that people tend to avoid stressors as part of their emotion regulation (De Castella et al., 2013; Folkman and Lazarus, 1985).
These findings show the value of understanding employees' emotional experiences when they are actually using technology, while also considering the specific type of technology and its context (De Guinea and Markus, 2009; Sarker et al., 2019).

AI chatbots and connection emotions
Our research identified connection emotions as a new class of emotions experienced when using an AI chatbot. These emotions connected users to the AI chatbot and have empathy in common, and they are related to the distinctive characteristics of AI chatbots. The findings show that the AI chatbot's unique functional and form characteristics influenced employees' experiences of connection emotions. The conversational characteristic and its virtual embodiment instilled a feeling of flow, where users enjoyed interacting with it (Poser and Bittner, 2020). The social presence of the chatbot created a feeling of human-like interaction that helped users to bypass negative emotions. Combined with the underlying machine-learning characteristic, employees felt that they played an active role in teaching the chatbot and improving its future functionality. They considered themselves to be partly responsible for poor results and hence maintained a tolerant, forgiving attitude towards the chatbot, which made it possible for them to continue using it. These findings show that AI technology can give rise to categories of emotions that differ from those brought about by traditional organisational IS.

Implications, limitations and further research
This research contributes to the IS literature in the following ways. First, unlike most of the extant IS research on emotions, which does not focus on a particular system or technology, this study attaches importance to the type of technology and focuses specifically on an AI chatbot. It takes into account its unique characteristics and considers users' emotional experiences in their use of it in their workplace context. Therefore, our findings extend the literature on emotions in IS use by opening up the black box of the technology being studied and relating the emotions experienced to its form and function characteristics and the organisational and social context. In doing so, it responds to scholarly calls to bridge the chasm between emotions and use and between organisational context and technological features (Leonardi, 2009). Second, the study contributes to the nascent understanding of AI technology in organisations (Gkinko and Elbanna, 2020b). It identifies four categories of emotions experienced in the use of the AI chatbot, including connection emotions as a new category that is distinctly related to AI chatbots. Connection emotions include empathy, forgiveness, tolerance, fairness and closeness, and are triggered by the social presence of the chatbot and its anthropomorphic characteristics. This finding shows that it is fruitful to identify emotions inductively, rather than to rely on pre-identified emotions, and to closely consider the technology under study rather than treat it as a black box (Sarker et al., 2019). This expands IS research on emotions beyond the current focus on negative emotions to consider different types of emotions associated with specific features of technology, the organisational context and involved tasks. Third, the study provides a qualitative interpretive understanding of the emotions experienced by employees.
In doing so, it contributes to overcoming the dominance of essentialist assumptions and positivist research, as explained in Section 2 and presented in Figure A1 in Appendix 1 (McGrath, 2006). Fourth, the study expands IS research on emotions by going beyond the dominant focus on intention to use and examining the actual use of a specific technology in its organisational setting. Consequently, it responds to calls to focus on the actual use of IS in their organisational setting (De Guinea and Markus, 2009). The study also contributes to practice. It informs managers and designers by providing useful direction on users' emotions in relation to the design characteristics of the AI chatbot, including process design, interface design, social presence and anthropomorphic features, and also in relation to the organisational context and other technologies. It shows that the social presence and anthropomorphic features of an AI chatbot can engage employees emotionally, offsetting the negative emotions of frustration and allowing users to be more tolerant of mistakes and willing to continue using it. Therefore, when designing and implementing an AI chatbot in the workplace, it is fruitful to pay more attention to form (the design features, such as social presence) because users create an emotional bond with these. Encouraging amusement emotions of excitement, playfulness, curiosity, hope and escapism can also reinforce contentment and connection emotions and reduce the impact of the emotions of frustration on use. This is different from the case of traditional IS, where function receives most of the organisational attention. Regarding the machine-learning component of AI technology, although this is related to function, in the case of an AI chatbot, employees perceive machine learning as human-like learning; hence, they demonstrate a commitment to using the system to help it learn and improve.
Practitioners could tap into this commitment in their promotion of AI chatbot use in their organisation. Our study highlights the importance of empathy, which stems from the chatbot's form, and its impact on continued use. Therefore, practitioners could support the infusion of empathy emotions to encourage users to be more tolerant of a chatbot's functional limitations and overcome the negative emotions that arise from these limitations.
In conclusion, the study shows that it is fruitful to consider the form and function of a technology and its organisational context when studying emotions in the use of IS. Because this study follows the interpretive research tradition, the findings cannot be generalised at the population level, but they can be generalised to theory (Lee and Baskerville, 2003, 2012). This theoretical generalisation is important for advancing our understanding, and it is also important for practice, as discussed above. Theoretical generalisability to emotions in the use of this class of intelligent conversational technology in the workplace could open up opportunities for further research in this area. For statistical generalisability, future research can adopt the findings to formulate and quantitatively test the propositions in this study. This study focussed on text-based AI chatbots; future research can consider emotions in relation to voice-based AI chatbots and embodied robotics. In terms of limitations, the study adopts a solely qualitative approach to understanding employees' emotional experience in using an AI chatbot. We encourage future research to adopt a mixed-methods approach to provide a statistically viable understanding of the phenomenon without losing the value of the richness of qualitative approaches. We hope this study paves the way for more research on the use of AI applications in the workplace.

Scoping Literature Review

The aims of our literature review were to uncover how Information Systems (IS) researchers account for emotions and to identify the theoretical and methodological issues that require research attention. To this end, we adopted a scoping and critical approach to the literature review (Paré et al., 2015; Paré and Kitsiou, 2017; Tate et al., 2015). Hence, we focused on reviewing articles that represent the IS field in nine leading journals.
We selected the Journal of Information Technology and People because its published mission is to understand the implications of IT for people in society and in their daily work in organisations (Information Technology and People, 2021). We then selected eight journals identified by the Association for Information Systems as representative of the IS field (Fitzgerald et al., 2019; Sarker et al., 2019). These were the European Journal of Information Systems, the Information Systems Journal, Information Systems Research, the Journal of the Association for Information Systems, the Journal of Information Technology, the Journal of Management Information Systems, the Journal of Strategic Information Systems and MIS Quarterly. Our search covered publications from 1982 to 2021 and returned 134 articles. Following the recommendations made by Paré et al. (2015), we set our inclusion criteria as: (1) include articles that report on an empirical study; and (2) include articles that examine an organisational context. The first criterion excluded research commentaries, literature reviews and agenda-setting papers; the second criterion excluded research on public and consumer use of Twitter, Facebook, blogs, wikis and other social media, in addition to excluding research on consumer use of websites and e-commerce services. This resulted in 43 papers that were both empirical and organisational (i.e. examining emotions related to IS in organisations) being selected for analysis. Our analysis of these articles covered the research method, the study of IS/IT, the identification of a specific IS, the nature of IS use and the types of emotions studied. The review showed that the IS literature on emotions in organisations has maintained a strong bias towards quantitative research, negative emotions and intentions to use, and that the majority of studies treated IS and IT as either a token or a black box, as depicted in the figures below.