Robots or frontline employees? Exploring customers’ attributions of responsibility and stability after service failure or success

Daniel Belanche (Department of Marketing and Market Research, University of Zaragoza, Zaragoza, Spain)
Luis V. Casaló (Department of Marketing and Market Research, University of Zaragoza, Zaragoza, Spain)
Carlos Flavián (Department of Marketing and Market Research, University of Zaragoza, Zaragoza, Spain)
Jeroen Schepers (Eindhoven University of Technology, Eindhoven, The Netherlands)

Journal of Service Management

ISSN: 1757-5818

Article publication date: 12 August 2020

Issue publication date: 24 September 2020

Abstract

Purpose

Service robots are taking over the organizational frontline. Despite a recent surge in studies on this topic, extant works are predominantly conceptual in nature. The purpose of this paper is to provide valuable empirical insights by building on attribution theory.

Design/methodology/approach

Two vignette-based experimental studies were employed. Data were collected from US respondents who were randomly assigned to scenarios focusing on a hotel’s reception service or a restaurant’s waiter service.

Findings

Results indicate that respondents make stronger attributions of responsibility for the service performance toward humans than toward robots, especially when a service failure occurs. Customers thus attribute responsibility to the firm rather than the frontline robot. Interestingly, the perceived stability of the performance is greater when the service is conducted by a robot than by an employee. This implies that customers expect employees to shape up after a poor service encounter but expect little improvement in robots’ performance over time.

Practical implications

Robots are perceived to be more representative of a firm than employees. To avoid harmful customer attributions, service providers should clearly communicate to customers that frontline robots pack sophisticated analytical, rather than simple mechanical, artificial intelligence technology that explicitly learns from service failures.

Originality/value

Customer responses to frontline robots have remained largely unexplored. This paper is the first to explore the attributions that customers make when they experience robots in the frontline.

Citation

Belanche, D., Casaló, L.V., Flavián, C. and Schepers, J. (2020), "Robots or frontline employees? Exploring customers’ attributions of responsibility and stability after service failure or success", Journal of Service Management, Vol. 31 No. 2, pp. 267-289. https://doi.org/10.1108/JOSM-05-2019-0156

Publisher

Emerald Publishing Limited

Copyright © 2020, Daniel Belanche, Luis V. Casaló, Carlos Flavián and Jeroen Schepers

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.


Introduction

Robots are replacing humans in more and more jobs. For example, more than 100,000 automated agents in the “robot army” are preparing packages for service delivery in Amazon’s warehouses around the world (Heater, 2019), and in the financial sector, robo-advisors manage investments worth over $860 billion (Statista, 2019). The deployment of robots is currently most intense for mechanical and analytical service jobs but is expected to spread to jobs requiring intuitive and even empathetic skills in the coming years (Huang and Rust, 2018). Leading this development are innovative organizations that are starting to use robots rather than employees in frontline services. For instance, LoweBot guides customers through Lowe’s stores (Rafaeli et al., 2017), the Nao robot assists clients of the Bank of Tokyo with issues ranging from opening a bank account to credit card loss (Byford, 2015), and robot waiters seem to have taken off at some restaurants in China (Nguyen, 2016).

These developments have also sparked considerable academic interest. Several scholars have provided highly cited frameworks and typologies of technology in the frontline and outlined future research priorities (Bolton et al., 2018; De Keyser et al., 2019; Larivière et al., 2017; Marinova et al., 2017; Rafaeli et al., 2017). Others have focused on the role of frontline robots and their artificial intelligence (AI) and machine learning capabilities (Belanche et al., 2020; Huang and Rust, 2018; Van Doorn et al., 2017; Wirtz et al., 2018). Frontline robots are defined as “autonomous and adaptable interfaces that interact, communicate and deliver service to an organization's customer” (Wirtz et al., 2018, p. 909). Some of them are capable of autonomous decision-making, adapting to situations, and learning from previous service encounters, and they can make customers feel that they are in the presence of another social entity (Van Doorn et al., 2017).

In spite of the recent interest in frontline robots, much remains to be explored. First, although many theoretical predictions exist about how replacing frontline employees with service robots may affect customers' experiences, empirical evidence is virtually non-existent (see Mende et al., 2019, for an exception). Second, emerging research in service journals merely acknowledges the role of technological advances in general instead of studying a specific technology in an applied setting; this has limited the identification of concrete implications for service management (Kunz et al., 2019). Third, “it is assumed that service robots will perform well […] and, therefore, will not hinder adoption” (Wirtz et al., 2018, p. 915). Although the adoption of frontline robots may be smooth because of intuitive and adaptive user interfaces, flawless operation (i.e., error-free service delivery) may prove utopian. Service failure has been a topic of particular interest in previous work on service technology (e.g., Dabholkar and Spaid, 2012) but is yet to be studied in a frontline robot context. Finally, not only from a marketing perspective (De Keyser et al., 2019) but also from an ethical (Johnson, 2015; Matthias, 2004) and legal point of view (Ji, 2017), it is important to know how responsibility can be ascribed to the actions of autonomously operating and learning frontline robots. Customers can distinguish between employee and firm when attributing credit and blame for service actions (Hess et al., 2007), but to what extent do customers hold a frontline robot accountable for its performance?

This article aims to better understand attributions that customers make following a successful or unsuccessful service encounter with a frontline robot. Attribution is the result of a process in which customers seek causal explanations for a service performance; they want to understand why things happened to control and predict their environment (Weiner, 2000). Following attribution theory (Tsiros et al., 2004; Weiner, 2000), a distinction is made between attributions of responsibility (i.e., perceptions about who or what caused the outcome) and attributions of outcome stability (i.e., perceptions about the degree of permanence of the outcome; Iglesias, 2009). This paper sets out to compare customer attributions in service encounters where the agent is a human (employee) to those in encounters where the agent is a frontline robot. This is especially important in service pseudorelationships, in which a customer typically interacts with different frontline agents across encounters, such as in airlines, restaurants, or hotels (Hess et al., 2007). The impersonality of alternating contacts challenges customers to infer whether a service outcome should be attributed to an individual agent or an entire organization. The customer-firm relationship is fortified when customers attribute successful service outcomes (i.e., error-free, reliable, and need-fulfilling service encounters) to the firm’s responsibility and infer that these outcomes are stable. In turn, unsuccessful (or failure) service outcomes (i.e., service encounters with glitches, unexpected negative events, or not living up to promises) are preferably attributed to the agent and deemed unstable (cf., Casado and Más, 2002; Hess et al., 2007; Swanson and Davis, 2003).

This research employs two experimental studies that represent an exploratory investigation into such attributions. The results of Study 1 show that, compared to frontline employees, frontline robots increase attributions of firm responsibility and outcome stability. Unfortunately for managers, these attributions are stronger in the case of service failure than in the case of service success. Study 2 further details these findings. Building on Huang and Rust's (2018) categorization of AI skills, the results suggest that firms can alleviate the unfavorable stability attribution in failed service encounters by explaining to customers the nature of AI technology (i.e., mechanical or analytical) in frontline robots. Overall, this article suggests that a customer-robot service encounter represents an even more important “moment-of-truth” than the traditional customer-employee service encounter. Our research contributes to a better understanding of how AI will reshape service encounters, affect customer experiences, and provide a challenge to human resources management and marketing.

Next, the literature review section provides detailed background about customers’ attributions in service encounters, extending the current knowledge to the new automated agents. Following the development of hypotheses, the methodology and the results derived from the experimental studies are explained. The manuscript closes by discussing the main implications for scholars and practitioners and outlining some possibilities for future research.

Literature review

Customers’ attributions

Attribution theory is rooted in social psychology works by Heider (1958) and Kelley (1973) about how people make causal explanations related to the question “why did this occur?” The inferred explanations are based on the evaluation of the event and the conditions around it, together with the available information, beliefs, and motivations to judge it (Agapi, 2017). By means of attributions, individuals are able to better understand the surrounding world and to predict and control future events. Accordingly, previous literature has demonstrated the impact of customer attributions on satisfaction (Oliver and DeSarbo, 1988; Weiner, 2000) as well as on loyalty and positive word-of-mouth (Weiner, 2000; Swanson and Davis, 2003). Such outcomes are especially salient in the case of service failures (Iglesias, 2009; Dabholkar and Spaid, 2012), which is why the majority of work has focused on attributions following such unsuccessful service encounters (for a review, see Van Vaerenbergh et al., 2014).

Weiner (1979, 1986) developed a formal structure of attribution theory, which describes three dimensions that individuals consider when making attributions: locus of causality, controllability, and stability. Locus of causality refers to who or what is the main cause of success or failure. Controllability indicates whether or not the turn of events could have been modified by the liable subject (Gernigon and Delloye, 2003). Given the high correlation between locus of causality and controllability, more recent studies merge these two dimensions into a single one: responsibility (e.g., Tsiros et al., 2004). Stability refers to the degree of permanence that is attributed to the perceived cause of the failure (Iglesias, 2009). When the likelihood of a cause is perceived as fixed or stable (Swanson and Kelley, 2001), the customer predicts that future outcomes will be similar to the previous one (Gernigon and Delloye, 2003). In other words, stability indicates the extent to which an outcome is likely to happen again.

Customers’ attributions consider not only the firm as the responsible entity but also the frontline employees as the agents providing the service (Bitner, 1990; Hess et al., 2007). Customers spontaneously and explicitly judge employees’ effort and abilities during service provision, both for positive and negative outcomes (Specht et al., 2007). In recent years, the field of customer attributions in service management has been broadened to include attributions toward technology (Meuter et al., 2000; Bitner et al., 2000), and particularly toward self-service technology (SST; Zhu et al., 2013). However, findings in SST settings may not fully generalize to frontline robots.

Robots versus SST

It is important to note the conceptual distinctions between SST and robot service. SST entails “technological interfaces that enable customers to produce a service independent of direct service employee involvement,” such as ATMs or automated hotel checkout (Meuter et al., 2000, p. 50). However, SST still requires user input or instructions. Compared to SST, a service robot can perform tasks autonomously (as an employee does) without direct human instruction (Colby and Parasuraman, 2016). The customer thus plays a more passive role (Tussyadiah and Park, 2018). Another key distinction between SST and robots is that the former is not usually involved in physical tasks (Broadbent et al., 2009). Robots can perform physical tasks (e.g., driving, housekeeping, serving in a restaurant) and, increasingly, intellectual tasks (e.g., financial investment advice). Their AI sophistication enables capabilities such as basic contextual interaction or logical thinking (Huang and Rust, 2018). From a customer point of view, the most relevant distinction between frontline technologies is that robots engage with customers on a social level, whereas previous technologies lack this capacity (Van Doorn et al., 2017). As a result, customers perceive that robots act as new agents performing tasks in the service industry (Castelo, 2019).

Robots versus human employees

According to Krämer et al. (2012), human-human interaction differs from human-robot interaction because the former entails (1) social perspective-taking (i.e., understanding the feelings, thoughts and motivations of others), (2) common ground (i.e., the sum of mutual, common, or joint knowledge, beliefs, and suppositions, Clark, 1992, p. 93), (3) exchanging one’s knowledge with others, and (4) assuming the other’s mind (i.e., ability to see other entities as intentional agents, whose behavior is influenced by states, beliefs, and desires; Carruthers and Smith, 1996). Although human-robot interaction cannot be categorized as having a social nature, robotic agents try to mimic social characteristics (e.g., talking instead of beeping) to facilitate parasocial interaction (Krämer et al., 2012).

Another distinctive feature is feelings of empathy, a complex phenomenon requiring several basic human abilities (e.g., affect, social perspective-taking) and assumptions (e.g., understanding of others’ motivations and goals). Empathy works bidirectionally; customers are also able to be empathic with employees and understand their specific situation (e.g., an employee’s personal situation of grief or misfortune) or even feel compassion after a worker transgression (Tsarenko et al., 2019). This may occur regardless of employees’ skill levels. In contrast, robots are introduced to increase standardization and maintain a constant service experience for the customer, but they lack empathy (Kumar et al., 2019; Wirtz et al., 2018). This makes it harder for customers to sympathize with poor performance.

Hypotheses development

Attribution of responsibility

Bitner et al. (1990) found that a large proportion of (dis)satisfactory encounters with service personnel was attributed to unprompted and unsolicited employee actions. Frontline employees tend to adjust their effort toward each customer individually, such as by spending more or less time with a customer, which induces heterogeneity or nonstandardization in service encounters (Lovelock and Gummesson, 2004). Although customers are aware that service firms have the means in place to regulate employee behavior (Hess et al., 2007), an employee’s mood or attitude largely determines customers’ perceptions of a service encounter (e.g., Hennig-Thurau, 2004; Wilder et al., 2014). Such variation across employees may even generate loyalty to specific frontline employees, rather than to the firm they represent (Palmatier et al., 2007). Customers also believe that employees sometimes offer white lies or excuses to create a favorable impression (Weiner, 2000). Thus, in a service encounter between an employee and a customer, the employee is usually perceived as the principal entity responsible for the favorable or unfavorable performance (Swanson and Davis, 2003).

In contrast to the variable performance of frontline employees, robots are free from human fatigue, moods, and short-lived attitudes (Huang and Rust, 2018). As such, frontline robots behave identically across a service delivery system, providing predictable and homogeneous service interactions and solutions (Wirtz et al., 2018). Customers may therefore realize that the behavior of a robot is, to a great extent, determined before it enters the frontline through design and programming activities. Indeed, at the present stage of development, robots can hardly be ascribed any significant degree of moral autonomy (Huang and Rust, 2018). Also, findings in other domains suggest that customers often do not regard technology or mechanical aspects of a service as a primary cause of an outcome. For instance, when evaluating a car repair service, customers assume that quality varies because of the workmanship involved (e.g., experience, time, dedication) as opposed to the unvarying mechanical input (Weiner, 2000). Along this line, frontline robots may be perceived as tangible aspects of service delivery, which shifts the responsibility to the firm or its management (Weiner, 2000).

In sum, it is expected that customers attribute responsibility for a service outcome differently depending on whether the agent is a human being or a frontline robot. Note that attributions to the firm and the agent are not mutually exclusive because customers can attribute responsibility to both entities at the same time. Attributing more responsibility to the agent does not have to limit the attribution of responsibility to the firm. Therefore, the first two hypotheses are:

H1.

Customers attribute less responsibility for a service outcome to the agent when the agent is a frontline robot than when the agent is a frontline employee.

H2.

Customers attribute more responsibility for a service outcome to the firm when the agent is a frontline robot than when the agent is a frontline employee.

Attribution of stability

The literature suggests that customers make stronger attributions of stability to technology-related causes than to employee-related causes (Iglesias, 2009). For instance, Casado and Más (2002) found that passengers consider mechanical problems (i.e., technology-related) a more permanent cause of a flight delay than mistakes by airline personnel. Customers perceive technology infused in services to be the result of standardization and therefore evaluate its performance as they would that of mass-produced products (cf. Weiner, 2000). Indeed, the functionality of frontline robots results from a long development process in which requirements have been specified, programming has been conducted, pilot tests have been executed, and field deployment and optimization have taken place (Joyeux and Albiez, 2011). Changing functionalities or solving failures in service technology may thus take a substantial amount of time.

In contrast, the quality of interpersonal contact in service encounters may change depending on the employee’s characteristics (i.e., effort, empathy, attitude) in each encounter. Especially in pseudorelationships, attitudes and behaviors may differ not only within one employee across encounters but also between employees (Hess et al., 2007). Customers thus account for the fact that interactions with employees (rather than with technology) may differ from encounter to encounter. Moreover, customers typically underestimate the potential impact of environmental factors on an employee’s performance and assume that an employee could easily have behaved differently than he/she did in the last service encounter (Albrecht et al., 2016; Choi and Mattila, 2008).

In sum, customers are likely to perceive a service outcome resulting from an interaction with a frontline employee as less likely to recur than a service outcome resulting from an interaction with a frontline robot. Formally:

H3.

Customers attribute more stability for a service outcome when the agent is a frontline robot than when the agent is a frontline employee.

Moderating effect of service outcome

Customers’ attributional search may occur following all kinds of events but is especially strong following negative events (Van Vaerenbergh et al., 2014). Individuals are more sensitive to failed than to successful frontline performances because of the innate differential affective responses to losses compared to gains (Smith et al., 1999). Customers engage in causal thinking when failures occur to prevent themselves from experiencing such an uncomfortable event again (Dabholkar and Spaid, 2012). The ability to assign the blame to an actor or entity serves as a coping mechanism for customers to deal with frustration, stress, and even anger following a service failure (Gelbrich, 2010).

In contrast, in the case of successful outcomes, customers likely feel that the robot has fulfilled its role in the service co-creation well. They feel little anxiety and have fewer reasons to engage in an extensive search for the cause of the service outcome. As a result, the relationship between outcomes and attributions of responsibility is likely stronger following less successful than following more successful outcomes (Coffee and Rees, 2008).

It is, therefore, hypothesized that the differences in customers’ attributions of responsibility as posited in H1 and H2 would be stronger in the case of service failure than in the case of service success. Specifically:

H4a.

The difference in customers’ attribution of responsibility for a service outcome to the agent (as proposed in H1) is greater in case of service failure than in case of service success.

H4b.

The difference in customers’ attribution of responsibility for a service outcome to the firm (as proposed in H2) is greater in case of service failure than in case of service success.

Kaipainen et al. (2018) find that customer responses to a robot are typically a mix of excitement (e.g., people experience delight, wonder, and curiosity) and disappointment (e.g., people experience a low level of control and limited robot abilities). In other words, customers realize that at the current stage of technological development, the outcome of service encounters with frontline robots is not yet predictable or stable. However, because of inertia toward radical innovations, customers may think that unsuccessful outcomes of a frontline robot are more likely to reoccur than successful ones. Especially following a service failure, individuals’ negative emotions facilitate a dramatization of events (Gelbrich, 2010). As such, customers may exaggerate the stability of the cause of a failure, thinking that their initial skepticism toward the robot was right (cf. De Keyser et al., 2019) and that much development is still needed to ensure failure-free, human-like service encounters (Huang and Rust, 2018). In other words, stability attributions toward frontline robots are likely to be higher in the case of a service failure than in the case of a successful service encounter.

In contrast, customers expect firms to have recruited employees through quality procedures, trained them properly, and provided them with service guidelines and regulations (Parasuraman et al., 1985). A customer’s one-time experience of a service encounter is, therefore, indicative of the quality of the service that the firm can deliver and that the customer may expect in the future (Hess et al., 2007). Thus, stability attributions toward frontline employees are likely to be more invariant between failed and successful service encounters. It is, therefore, proposed that the difference between stability attributions in successful and unsuccessful encounters is more pronounced for frontline robots than for frontline employees. Formally:

H4c.

The difference in customers’ attribution of stability for a service outcome when the agent is a frontline robot compared to a frontline employee (as proposed in H3) is greater in case of service failure than in case of service success.

For the sake of completeness, the direct effects of service outcome (failure vs. success) on the three customer attributions are also considered. Figure 1 depicts the research model.

Study 1

Method

To test the research hypotheses, a 2 (frontline agent: human employee vs. robot) × 2 (service outcome: failure vs. success) between-subjects experimental design was developed. The hypotheses were tested with data collected from 331 US participants who were recruited through a market research agency and randomly assigned to one of the four scenarios. Of all respondents, 36.9% were younger than 35 years of age, 49.4% were 35–54 years, 13.7% were older than 54 years, and 28.4% were male. To ensure that respondents had some affinity with frontline robots, the scale developed by Parasuraman and Colby (2015) [1] was used to assess respondents’ level of technology innovativeness (α = 0.91). Participants reported an intermediate level of technological innovativeness, with a mean of 4.15 (SD = 1.57) on a 7-point scale. The distribution of responses across scenarios is well balanced (n > 80 in each scenario), exceeding in all cases the minimum of 25 observations per cell (e.g., Seltman, 2012).

The research scenarios focused on the hospitality sector. Previous research has considered the hospitality sector a prototypical example of employee-based frontline service (Chan and Tung, 2019; Kuo et al., 2017), while robots are starting to be deployed to serve customers at several hotels worldwide. For example, Connie is a robot concierge introduced by Hilton that can inform guests about nearby places of interest or provide information about the hotel.

First, all participants were asked to read a general description of the scenario accompanied by a picture of the agent in the hotel setting. The picture of the agent was either a human employee or a robot (see Appendix 1 for a complete description of the scenarios). The Pepper robot was employed because it is the most common standard humanoid and has already been used in previous research in this sector (Tussyadiah et al., 2019). Each participant was then presented with a vignette describing a situation in which the interaction with the agent ends in either a success or a failure. Success and failure were manipulated following Smith et al. (1999) and Hess et al. (2007).

A pre-test with 104 participants was conducted to evaluate the appropriateness of the scenarios, as well as their realism; the scale from Bagozzi et al. (2016) was employed, which consists of two seven-point items (“The scenario is realistic,” “The scenario is believable”). A third question was added: “How likely would you be to encounter a situation similar to the one described in the scenario?” (from 1 = very unlikely to 7 = very likely). The results confirmed the suitability of the scenarios, since the scale (α = 0.78) provided a mean of 5.33 (SD = 1.25), indicating that participants perceived the scenarios as realistic and believable (Bagozzi et al., 2016).

Another group of 154 participants specifically evaluated the realism of robots being introduced in hotels using two new scales. With items similar to those in the pre-test, the results replicated the previous findings for the realism of the robot scenarios, now yielding a mean of 5.27 (SD = 1.29). Another three questions checked participants’ perceptions of the realism of the tasks performed by the hotel service robots: “The tasks described in the information are easily performed by a robot nowadays,” “Robots are able to perform the receptionist's tasks mentioned in the text,” and “How likely is it that a robot like the one described in the text really exists nowadays?” (from 1 = very unlikely to 7 = very likely). Again, the results confirmed the suitability of the scenario, since the scale (α = 0.85) provided a mean of 5.08 (SD = 1.37).

Measurement scales and validation

Participants were first asked to assess the responsibility of both the hotel and the frontline agent (the employee or the robot) for the failure or success of the service. Specifically, using scales from previous literature (e.g., Kim and Smith, 2005; Russell, 1982), participants evaluated on 5-point semantic differential scales which entity was responsible for the service performance. The two items were anchored by “the hotel” (1) … “other causes” (5) and “something about the hotel” (1) … “something about other causes” (5). Using the same scale, participants assessed the extent to which they attributed the responsibility to the agent (robot or employee) or to other causes [2]. In addition, two items (“It is likely that a similar situation could occur again at this hotel,” “It is likely to experience this type of service performance in the future at this hotel”) borrowed from Kim and Smith (2005) were used to measure attributions of stability. These items used 7-point Likert scales (1 = strongly disagree, 7 = strongly agree).

Satisfactory levels of reliability were obtained in all cases: attribution to the hotel (α = 0.76), attribution to the agent (α = 0.71), and perceived stability (α = 0.93).
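For readers who want to verify such reliabilities, Cronbach’s α can be computed directly from the item-response matrix. The following Python sketch is illustrative only: the simulated responses and variable names are placeholders, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical example: two correlated 7-point stability items, 331 respondents
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=331)
noise = rng.integers(-1, 2, size=331)
stability = np.column_stack([base, np.clip(base + noise, 1, 7)])
print(f"alpha = {cronbach_alpha(stability):.2f}")
```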

Manipulation checks

Participants’ perceptions regarding the service outcome (success or failure) were measured using the item “How would you describe the performance of the service encounter?” Respondents answered on a 5-point semantic differential scale ranging from “Service failure” (1) to “Service success” (5). An independent samples t-test confirmed that performance was perceived to be significantly better in the success condition than in the failure condition (MFAILURE = 1.30 [S.D. = 0.63], MSUCCESS = 4.77 [S.D. = 0.59], t = −52.12, p < 0.01). In addition, to check the appropriateness of the manipulation regarding the frontline agent, respondents were asked to indicate who provided the service, a human or a robot; all participants identified the agent in their scenario correctly.
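This manipulation check reduces to an independent samples t-test on the two outcome conditions. A minimal sketch follows, with data simulated to roughly match the reported cell means and standard deviations; the cell sizes are assumptions, not the reported design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 5-point performance ratings (cell sizes of 165/166 are assumed)
failure = np.clip(rng.normal(1.30, 0.63, 165), 1, 5)
success = np.clip(rng.normal(4.77, 0.59, 166), 1, 5)

t, p = stats.ttest_ind(failure, success)  # independent samples t-test
print(f"t = {t:.2f}, p = {p:.3g}")        # paper reports t = -52.12, p < 0.01
```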

Results

A multivariate analysis of variance (MANOVA) was employed to evaluate the effect of each independent variable (agent and service outcome) on all dependent variables (attributions of responsibility and stability). The results of the MANOVA revealed significant effects for human vs. robot (Wilks’ λ = 0.89; F (3, 324) = 12.94, p < 0.01), failure vs. success (Wilks’ λ = 0.93; F (3, 324) = 7.77, p < 0.01), and their interaction (Wilks’ λ = 0.96; F (3, 324) = 4.80, p < 0.01). Next, separate univariate ANOVAs were conducted to test the specific effects of the independent variables on each dependent variable.
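A sketch of this two-step procedure (MANOVA followed by univariate ANOVAs) in Python’s statsmodels is shown below. The data file and column names (agent, outcome, resp_agent, resp_firm, stability) are assumptions for illustration, not the authors’ materials.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

# Hypothetical layout: one row per respondent with the two manipulated
# factors and the three attribution scores (column names are assumed).
df = pd.read_csv("study1.csv")

# MANOVA on all three dependent variables; mv_test() reports Wilks' lambda
mv = MANOVA.from_formula(
    "resp_agent + resp_firm + stability ~ agent * outcome", data=df)
print(mv.mv_test())

# Follow-up univariate two-way ANOVAs, one per dependent variable
for dv in ["resp_agent", "resp_firm", "stability"]:
    model = ols(f"{dv} ~ agent * outcome", data=df).fit()
    print(dv)
    print(sm.stats.anova_lm(model, typ=2))
```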

Regarding the customer’s attribution of the agent’s responsibility, this attribution is greater when the service is conducted by an employee (M = 4.01; S.D. = 1.10) than by a robot (M = 3.47; S.D. = 1.38). This difference is significant (F = 16.33; p < 0.01), which supports H1. Similarly, the customer’s attribution of responsibility to the agent providing the service is greater for participants exposed to a success (M = 3.95; S.D. = 1.14) than for those exposed to a failure (M = 3.53; S.D. = 1.36). This difference is again significant (F = 10.46; p < 0.01). In addition, Table 1 and Figure 2A indicate an interaction effect between both variables. Specifically, the difference in the customer’s attribution of the agent’s responsibility between the human and robot conditions is greater when there is a service failure. Hence, H4a is also supported.

Turning to the customer’s attribution of the hotel’s responsibility, this attribution is greater when the service is conducted by a robot (M = 4.35; S.D. = 0.90) than by an employee (M = 4.12; S.D. = 1.03). The difference is significant (F = 4.68; p < 0.05), hence supporting H2. However, there is no significant difference (F = 0.34; p > 0.1) between the service failure (M = 4.26; S.D. = 0.98) and success (M = 4.20; S.D. = 0.97) conditions. Similarly, Table 1 and Figure 2B show that the interaction effect between agent and service outcome on the attribution of responsibility to the firm is not significant, so H4b is not supported.

Finally, regarding the customer’s attribution of the stability of the situation, perceived stability is greater when the service is conducted by a robot (M = 6.30; S.D. = 1.12) than by an employee (M = 6.03; S.D. = 1.21). This difference is significant (F = 4.51; p < 0.05), hence supporting H3. Participants exposed to a successful service encounter report higher stability (M = 6.38; S.D. = 1.11) than those exposed to a failure (M = 5.94; S.D. = 1.19). This difference is again significant (F = 12.03; p < 0.01). In addition, there is an interaction effect between both variables, as can be seen in Table 1 and Figure 3, supporting H4c. Specifically, the difference between customers’ stability attributions to employees and their stability attributions to robots is greater in the case of service failure than in the case of service success.

To better understand the interaction effect, we assessed the strength of associations (ω²; Hays, 1963). For customers’ attribution of agent responsibility, ω² is greater for the failure condition (0.10) than for the success condition (0.01), and the effect of human vs. robot is only significant when there is a service failure (FFAILURE = 17.80; p < 0.01; FSUCCESS = 1.67; p > 0.1). Again, for customers’ attribution of the hotel’s responsibility, ω² is greater for the failure condition (0.05) than for the success condition (0.01), and the effect of human vs. robot is only significant in the service failure condition (FFAILURE = 3.97; p < 0.05; FSUCCESS = 1.12; p > 0.1).

Similarly, for the customer’s attribution of stability, the effect of human vs. robot is only significant when there is a service failure (FFAILURE = 8.48; p < 0.01; FSUCCESS = 0.00; p > 0.1), and ω² is much higher for the failure condition (0.05) than for the success condition (0.00). In sum, the effects of the agent providing the service (human vs. robot) are only significant in the service failure condition, and the strength of association can be considered close to a medium association (ω² = 0.06; Kirk, 2007). In turn, for service success, these effects are nonsignificant and the strength of the association is very small (ω² < 0.01). Figures 2 and 3 visually confirm these findings.
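The paper does not spell out which variant of ω² was used, but for a fixed-effects ANOVA the standard Hays (1963) estimator takes the following form, where the effect here is the human vs. robot factor within each outcome condition:

```latex
\omega^2 = \frac{SS_{\text{effect}} - df_{\text{effect}}\, MS_{\text{error}}}
                {SS_{\text{total}} + MS_{\text{error}}}
```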

Study 2

For managers, a particularly worrying outcome of Study 1 is that a failure in a service provided by a robot leads to high customer attributions of firm responsibility and outcome stability. The purpose of Study 2 is twofold: (1) to replicate these particular findings of Study 1, and (2) to explore how firms can alleviate these detrimental consequences of frontline robot introduction.

Study 2 considers Huang and Rust's (2018) categorization of AI skills in services. They specify four levels of intelligence in service tasks: mechanical, analytical, intuitive, and empathetic intelligence. Their expectation is that robots will replace employees in jobs related to each of these skills, starting with mechanical tasks and gradually replacing human agents in more sophisticated tasks as robots develop further. This encroaching robot job replacement depends on the type of intelligence needed for a task, not on the extent to which frontline employees are skillful in fulfilling it. In other words, when robots have the intelligence to take over a task, they may replace both lowly and highly skilled staff. According to Wirtz et al. (2018), robots with the first levels of AI will develop in the near future and become the dominant service delivery mechanism. This study thus focuses on mechanical and analytical robots, reflecting the first two levels of intelligence.

Robots with mechanical AI skills automatically perform simple, standardized, repetitive routine tasks that require precision, consistency, and efficiency (Huang and Rust, 2018). This mechanical intelligence relies on observations to (re)act repetitively and has a minimal degree of learning or adaptation. Following the same reasoning as in the previous hypotheses, it is posited that when customers perceive that frontline robots are merely used to conduct a service task in a repetitive, mechanical fashion, they will see the performance of such robots as highly dependent on the quality of the firm’s process of designing, programming, and launching the robot (Joyeux and Albiez, 2011; Wirtz et al., 2018). This provides a large contrast with the more idiosyncratic, adaptable, and heterogeneous behavior of frontline employees (Lovelock and Gummesson, 2004). When mechanical robots produce a failure, customers may thus make high attributions of responsibility and stability to the firm.

In contrast, robots with analytical AI skills autonomously process data for problem-solving and learn from actions and outcomes in past service encounters (Sternberg, 2005). This intelligence is required for performing complex analytical tasks involving information processing and logical reasoning, such that the robot is able to advance from a default option to an adaptive service offering (Krämer et al., 2012; Wilder et al., 2014). Analytical robots can thus be categorized as having a sophisticated level of AI, approaching skills traditionally attributed exclusively to human employees. These learning automata may be perceived by customers as not totally under the control of the firm, thus reducing the level of responsibility that can be ascribed to the firm (Hellström, 2013; Johnson, 2015). In addition, robots’ ability to learn from previous mistakes provides customers with the hope that robots will “shape up” in future encounters, such that a failure may be perceived as transitory and less stable.

In sum, significant differences are expected between mechanical robots and employees, and between mechanical robots and analytical robots. However, there is not likely to be a difference between analytical robots and human employees. It is, therefore, hypothesized:

H5.

Following a service failure with a frontline mechanical robot, customers attribute (a) more responsibility to the firm, and (b) more stability compared to when the agent is a frontline employee.

H6.

Following a service failure with a frontline analytical robot, customers attribute (a) less responsibility to the firm and (b) less stability compared to when the agent is a frontline mechanical robot.

Method

A second experiment was conducted to compare customer attributions in a failed service scenario involving a mechanical robot, an analytical robot, or a frontline employee. The second study involved 229 US participants recruited in a similar fashion as in the first study. Of the respondents, 49.8% were younger than 35 years of age, 34.1% were 35–54 years, 16.1% were older than 54 years, and 55.5% were male. Again, the participants presented intermediate levels of technology innovativeness (Cronbach’s α = 0.89, M = 4.08, SD = 1.51) on a 7-point scale. Participants were randomly assigned to one of three scenarios, involving the mechanical robot (N = 74), the analytical robot (N = 79), or the employee (N = 76).

To test the robustness of the findings of Study 1, a different type of frontline service was examined: the frontline agent in the vignette performed waiter tasks in a restaurant (e.g., taking and delivering orders, suggesting meals). Previous research has usually considered restaurants a prototypical setting for frontline employee studies (Liao and Chuang, 2004; Kong and Jogaratnam, 2007).

All participants were invited to read a general description of the scenario together with a picture of the waiter. The same picture was used in the mechanical and analytical robot conditions. Specifically, a picture of the HZX robot was presented because it already provides service in restaurants in Bangladesh (Rahman, 2017). Following this general section, a specific description of the service failure in the restaurant was presented. Compared to the frontline employee condition, two specific sentences were added in the robot conditions to manipulate the robot’s AI intelligence (i.e., mechanical or analytical). Appendix 2 presents a complete description of the scenarios.

A pre-test with 58 participants served to evaluate the realism and suitability of the failure scenarios in the restaurant, using the measure proposed by Bagozzi et al. (2016). The results confirmed the suitability of the scenarios, since the realism scale (α = 0.86) provided a mean of 4.59 (SD = 1.54), indicating that participants perceived the scenarios as realistic and believable.

Again, another group of 154 participants evaluated the realism and appropriateness of the tasks performed by the waiter robots. As in Study 1, the realism ratings replicated the pre-test results (α = 0.84, M = 4.93, SD = 1.40), as did task suitability (α = 0.75, M = 4.54, SD = 1.42).

Measurement scales and validation

The same measures employed in Study 1 were used to evaluate the responsibility of both the restaurant and the agent (employee or robot), as well as the attributions of stability. The only change made was replacing the word “hotel” with “restaurant” in the items. Again, we find acceptable scale reliabilities: attribution to the restaurant (α = 0.68), attribution to the agent (α = 0.77), and perceived stability (α = 0.82).

Manipulation checks

Using a 5-point semantic differential scale ranging from “Service failure” (1) to “Service success” (5), respondents correctly identified the scenarios as representing a service failure (M = 1.72, SD = 0.76). In addition, a new scale was developed to measure the mechanical or analytical AI nature of the robot. The items closely followed the terminology used in the descriptions by Huang and Rust (2018, p. 157). Respondents had to describe the robot’s behavioral basis on a 4-item, 5-point semantic differential scale consisting of “Basic/Sophisticated artificial intelligence,” “Mechanical/Analytical skills,” “Based on repetition/Analyzing and adapting,” and “Low/High learning capability.” The scale obtained a satisfactory level of reliability (α = 0.95), and an independent samples t-test showed that participants’ responses differed significantly between the two robot conditions (MMECHANICAL_ROBOT = 1.61 [S.D. = 0.70], MANALYTICAL_ROBOT = 3.88 [S.D. = 1.14], t = −14.89, p < 0.01).

Results

Three independent ANOVA procedures were conducted to provide an initial evaluation of the effects of frontline agent type on each of the three attributions (see Table 2). The outcomes indicate that agent type significantly affected customer attributions of agent responsibility (F = 2.91, p < 0.10) and stability (F = 11.50, p < 0.01), but its overall influence on firm responsibility does not reach significance (F = 2.18, p > 0.10) (see Figures 4 and 5).
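As a minimal sketch of this one-way comparison, the ANOVA and one follow-up t-test could be run as below. The data are simulated to match the reported stability means, standard deviations, and cell sizes; they are illustrative, not the raw data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated 7-point stability ratings per agent condition (n = 74, 79, 76)
mech_robot = np.clip(rng.normal(6.25, 0.93, 74), 1, 7)
anal_robot = np.clip(rng.normal(5.94, 1.02, 79), 1, 7)
human      = np.clip(rng.normal(5.43, 1.10, 76), 1, 7)

# One-way ANOVA across the three conditions (paper: F = 11.50, p < 0.01)
f, p = stats.f_oneway(mech_robot, anal_robot, human)
print(f"F = {f:.2f}, p = {p:.3g}")

# Pairwise follow-up t-test, e.g. mechanical robot vs. human
t, p = stats.ttest_ind(mech_robot, human)
print(f"mechanical vs. human: t = {t:.2f}, p = {p:.3g}")
```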

T-tests then compared pairs of agent types in relation to each dependent variable. Customer attributions of agent responsibility are higher when the waiter is a frontline employee than when it is an analytical robot (MHUMAN = 4.30 [S.D. = 0.74], MANALYTICAL_ROBOT = 3.89 [S.D. = 1.21], t = 2.51, p < 0.05) or a mechanical robot (MHUMAN = 4.30 [S.D. = 0.74], MMECHANICAL_ROBOT = 4.01 [S.D. = 1.17], t = 1.76, p < 0.10), although the latter effect is not as strong as the former. The two robot conditions do not significantly differ in terms of agent responsibility (MANALYTICAL_ROBOT = 3.89 [S.D. = 1.21], MMECHANICAL_ROBOT = 4.01 [S.D. = 1.17], t = −0.63, p > 0.10). Jointly, these findings replicate the result of Study 1 that, in a service failure context, customers attribute more responsibility to a human agent than to a robot agent (MHUMAN = 4.30 [S.D. = 0.74], MROBOT = 3.95 [S.D. = 1.19], t = 2.31, p < 0.05).

Results also reveal that customers’ attributions of responsibility to the firm are higher when they are served by a mechanical robot than by a human (MMECHANICAL_ROBOT = 3.99 [S.D. = 1.03], MHUMAN = 3.63 [S.D. = 1.04], t = 2.13, p < 0.05). This supports H5a. However, H6a is not supported because the level of responsibility attributed to the firm in the analytical robot condition did not differ significantly from that in the mechanical robot condition (MANALYTICAL_ROBOT = 3.84 [S.D. = 1.12], MMECHANICAL_ROBOT = 3.99 [S.D. = 1.03], t = −0.83, p > 0.10). Similarly, the attribution of firm responsibility in the analytical robot condition did not differ from that in the frontline employee condition (MANALYTICAL_ROBOT = 3.84 [S.D. = 1.12], MHUMAN = 3.63 [S.D. = 1.04], t = 1.24, p > 0.10). Similar to Study 1, these findings jointly suggest that, in a service failure context, customers attribute more responsibility to the firm when the agent is a robot than when the agent is a human (MROBOT = 3.92 [S.D. = 1.08], MHUMAN = 3.63 [S.D. = 1.04], t = 1.91, p < 0.10).

Finally, customers attribute higher stability to the failure outcome when the service is provided by a mechanical robot than when it is provided by a human agent (MMECHANICAL_ROBOT = 6.25 [S.D. = 0.93], MHUMAN = 5.43 [S.D. = 1.10], t = 4.68, p < 0.01), which supports H5b. In addition, the stability attribution for the analytical robot is lower than for the mechanical one (MANALYTICAL_ROBOT = 5.94 [S.D. = 1.02], MMECHANICAL_ROBOT = 6.25 [S.D. = 0.93], t = −1.90, p < 0.10) but higher than for the employee (MANALYTICAL_ROBOT = 5.94 [S.D. = 1.02], MHUMAN = 5.43 [S.D. = 1.10], t = 2.88, p < 0.01). The former effect provides borderline support for H6b, while the latter effect is contrary to expectations. Apparently, analytical robots are not yet in the same league as human employees. Finally, these findings replicate the result of Study 1 that, in a service failure context, customers perceive higher stability of the failure outcome when the agent is a robot than when the agent is a human (MROBOT = 6.09 [S.D. = 0.99], MHUMAN = 5.43 [S.D. = 1.10], t = 4.44, p < 0.01).

Discussion

Now that AI has developed such that it can perform tasks normally requiring human intelligence, robots are replacing humans in many jobs across sectors. This transition is expected to gradually affect the majority of service firms as robots move to the frontline of organizations (Huang and Rust, 2018), thus reshaping services and the way they are managed. Automated agents constitute an entirely new field of research within the domain of service technology (Van Doorn et al., 2017; Belanche et al., 2019). To date, the extant literature provides little insight into how frontline robots may affect customers’ experiences. Studies acknowledge the role of technological advances and offer multiple typologies but do not identify concrete implications (Kunz et al., 2019). By focusing on customers’ attributions, the current research addresses the current and future impact of AI and robotics in the specific context of service management. Specifically, our work adds valuable theoretical and empirical knowledge to several research streams.

The major concern about responsibility for automated agents’ outcomes has been recognized in different fields (e.g., legal, Ji, 2017; ethical, Johnson, 2015) but has generally been disregarded in marketing. Only De Keyser et al. (2019) speculate on whether firms should be willing to fully control, and be responsible for, the robot’s interface. This paper contributes to the current works by highlighting the role of customer attributions and is empirical rather than conceptual in nature. Results reveal that, compared to human employees, frontline robots reduce customer perceptions of agent responsibility, specifically in case of service failure. In turn, the firm’s responsibility for the outcome is higher when the service is provided by a robot than when it is provided by a human employee. Taken together, these findings suggest that robots are perceived to be more representative of a firm than employees are. This echoes and extends findings in the domain of sales reps, where customers may develop different levels of loyalty toward a salesperson than toward the company they represent (Palmatier et al., 2007).

This research further contributes to the literature on customer attribution in service encounters: previous works have distinguished between attributions of events to employees, technology, and the firm (Dabholkar and Spaid, 2012; Hess et al., 2007; Iglesias, 2009), but the three loci of attribution have not been examined and contrasted in one study. This paper undertakes that effort while also including the attribution of stability. Contrasting these three loci is especially important given the hybrid constellations of humans and technology in which future services will be delivered (De Keyser et al., 2019; Larivière et al., 2017; Wirtz et al., 2018). Results show that perceived stability is higher when the service is conducted by a robot than by an employee and that this difference increases in the case of service failure (compared to service success). This result corresponds with insights that customers perceive greater variation in the performance of frontline employees than in that of a technology-based agent (cf. Lovelock and Gummesson, 2004). Overall, the findings suggest that frontline robot performance will play an even more important role in shaping the customer-provider relationship than customer-employee service encounters.

The second study explored ways for firms to prevent potentially negative customer attributions in the case of frontline robot failures. This research thus contributes to recent theoretical insight on AI intelligence (Huang and Rust, 2018). The results of this second study corroborate the findings of Study 1 and further show that mechanical robots, but not analytical robots, increase customers’ perceptions of firm responsibility compared to human agents. Hence, convincing customers that a frontline robot has analytical skills may prevent much of the detrimental responsibility attributions. However, even autonomously operating and learning robots are still perceived to be more consistent in their (failed) service behavior than human agents. Perhaps the introduction of sophisticated technology by the firm is perceived by customers as a sign of the firm’s investment in enhancing the service (Nijssen et al., 2016). They may forgive the firm for robot failures during an initial adaptation period, with the expectation that a machine learning process will eventually result in a service provided free of mistakes (Huang and Rust, 2018).

Toward the future, the literature on job replacement indicates that robots tend to be good at specific cognitive skills demanded by the market, such as information systems management, language, or analytical skills (Beblavý et al., 2016). Robots are also acquiring some almost-human cognitive abilities such as the use of memory, rationality (i.e., logic), or planning skills (Castelo, 2019). However, human employees are better at non-cognitive skills, that is, affective, social, and personal abilities. Most of these skills derive from innate human characteristics such as interpersonal warmth, individuality or self-perception, depth of thought, and cognitive openness (e.g., curiosity, creativity) (Castelo et al., 2019). Humans are also unique in their personal experiences, which rely on both affective abilities (e.g., feelings of desire and fear, pride and embarrassment, joy; Castelo et al., 2019) and personal skills (responsibility, reliability, flexibility, independence, pleasant demeanor; Beblavý et al., 2016). Scholars and managers should, therefore, consider that some customers have a strong need for social interaction, while other customers prefer to avoid social situations with employees and even the mere presence of other customers (e.g., King et al., 2006). Attributions and service experiences in a frontline setting featuring robots may depend on such individual customer traits, too. On the other hand, as robots develop different kinds of AI over time (i.e., mechanical, analytical, intuitive, empathetic; Huang and Rust, 2018), customers will get used to dealing with frontline robots that embed these levels of intelligence. The conceptual framework offered in this paper may provide a good basis for further exploring these effects.

Managerial implications

The introduction of robots in frontline services represents a great challenge for managers, especially when the option of human-operated service is no longer available to customers. This section provides several concrete suggestions to deal with these challenges.

The introduction of automated frontline agents performing tasks traditionally carried out by an employee involves advantages and disadvantages for service providers. On the one hand, customers identify robots as closer representatives of the firm and hold the firm responsible for their customer experience. In successful service encounters, firms should take advantage of this attributional pattern by stressing to customers that the decision to implement the technology was a conscious and customer-focused one, thereby pointing to the advantages of robot service for customers. Customers will perceive this as a commitment by the company to enhance customer value through the introduction of flawlessly operating innovations. On the other hand, customers perceive the outcome of failed automated service encounters as stable and blame firms for it. Firms should not try to avoid the responsibility but rather prevent the specific robot failure in future encounters. Obviously, careful testing of the robot before its introduction is paramount, as are measures like programming the robot to prevent or bypass the more frequent mistakes and reviewing its performance on a regular basis. Nevertheless, it is difficult to ensure a failure-free service. Another useful strategy at this stage of robot development is to offer human assistance to customers proactively following a robot failure, or reactively when customers indicate that they are uncomfortable or stuck with the robot (Huang and Rust, 2018).

Apart from the organizational and technical responses to failed robot service encounters, managers also have powerful marketing communication tools at their disposal in such situations. Merely informing customers by means of advertising or in-place messages about the learning capability of the robot will help resolve some of the negative attributions made by customers. Specifically, customers feel that a failure of an analytical robot is less permanent than that of a mechanical robot and may realize that it could just have been an initial setback, with improved service to follow thanks to sophisticated machine learning processes.

Limitations and future research

As is typical for academic endeavors, this work has some limitations that may open opportunities for further research. First, companies may not always introduce technologies in their service interactions with customers to improve customer service; they often have cost-cutting motives (e.g., technology may allow firms to save on employee wages; Nijssen et al., 2016). In that sense, the use of robots may in itself irritate customers (Mende et al., 2019). The question then becomes whether customers would choose a robot-performed service over a human-performed one. Future research may thus consider strategies for companies to introduce robots in the frontline, for instance by building on insights from studies on forced channel migration (e.g., Cortiñas et al., 2019; Trampe et al., 2014) or mandatory technology adoption (Reinders et al., 2015). Second, although robots are bound to replace jobs that require specific skills, rather than individual lowly skilled workers, customers may perceive that robots resemble low-skill workers and adjust their attributions accordingly. Future work may, therefore, distinguish between customers’ attributions of responsibility toward high- and low-skilled workers.

Furthermore, this research considered two specific and prototypical humanoid robots (i.e., Pepper and HZX). The scenarios extended the robots' abilities beyond these robots' current market capabilities, so that respondents could compare them with a human employee. Future studies may examine whether the comparison with humans differs across robots with different looks and features. For instance, one could differentiate between mechanoids (non-anthropomorphic, mechanical-looking robots), humanoids (anthropomorphic, mechanical-looking robots), and androids (with a highly human appearance) (Walters et al., 2008).

This research focused on central tasks in the hospitality sector (i.e., check-in, order taking, and serving). To generalize the results, it could be useful to analyze situations in which frontline robots perform different tasks. Finally, future studies may favor field over lab studies, so that customers have first-hand experience on which to assess the robots' abilities more precisely. In sum, marketing research should continue to advance on this topic to better understand how to deal with the increasing use of frontline robots.

Figures

Figure 1: Study 1 conceptual model

Figure 2: Visualization of agent and service outcome interaction panel

Figure 3: Visualization of agent and service outcome interaction for stability attribution

Figure 4: Attributions of agent's responsibility and firm's responsibility for the service failure for each kind of agent

Figure 5: Attributions of stability of the service failure for each kind of agent

Interaction effects on customer’s attribution of agent’s responsibility, firm’s responsibility, and stability

Agent's responsibility
Success: Robot = 3.84 (S.D. = 1.25); Human = 4.07 (S.D. = 1.03)
Failure: Robot = 3.09 (S.D. = 1.43); Human = 3.95 (S.D. = 1.16)
F-Score = 5.49, p < 0.05 (**); H4a supported

Firm's responsibility
Success: Robot = 4.28 (S.D. = 0.93); Human = 4.13 (S.D. = 1.00)
Failure: Robot = 4.42 (S.D. = 0.87); Human = 4.12 (S.D. = 1.07)
F-Score = 0.47, p > 0.1 (n.s.); H4b not supported

Stability
Success: Robot = 6.38 (S.D. = 1.12); Human = 6.38 (S.D. = 1.11)
Failure: Robot = 6.21 (S.D. = 1.12); Human = 5.68 (S.D. = 1.21)
F-Score = 4.51, p < 0.05 (**); H4c supported

Note: (**) significant at 95%; (n.s.) non-significant
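
For readers who wish to see how interaction tests of this kind are computed, the following is a minimal illustrative sketch, not the authors' actual analysis code. It simulates placeholder data (the cell means come from the table above, but the cell size of 50 and common standard deviation of 1.2 are hypothetical assumptions) and runs a 2 x 2 between-subjects ANOVA in Python with statsmodels.

```python
# Sketch of a 2 x 2 between-subjects ANOVA (agent: robot vs human;
# outcome: success vs failure) on SIMULATED placeholder data.
# Cell means mirror the agent-responsibility rows above; n = 50 per
# cell and S.D. = 1.2 are hypothetical, not the study's values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
cells = {
    ("robot", "success"): 3.84, ("robot", "failure"): 3.09,
    ("human", "success"): 4.07, ("human", "failure"): 3.95,
}
rows = []
for (agent, outcome), mean in cells.items():
    for y in rng.normal(mean, 1.2, size=50):  # hypothetical responses
        rows.append({"agent": agent, "outcome": outcome, "responsibility": y})
df = pd.DataFrame(rows)

# The C(agent):C(outcome) row of the ANOVA table holds the interaction F and p
model = ols("responsibility ~ C(agent) * C(outcome)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

With the study's raw responses in place of the simulated draws, the interaction row of an analogous model would yield F-statistics of the kind reported in the table.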

Descriptive statistics and overall effect of kind of agent on attributions of agent responsibility, firm’s responsibility and stability of the service failure

Agent responsibility (5-point scale): Mechanical robot = 4.01 (S.D. = 1.17); Analytical robot = 3.89 (S.D. = 1.21); Human = 4.30 (S.D. = 0.74); F-Score = 2.91, p < 0.10 (*)
Firm responsibility (5-point scale): Mechanical robot = 3.99 (S.D. = 1.03); Analytical robot = 3.84 (S.D. = 1.12); Human = 3.63 (S.D. = 1.04); F-Score = 2.18, p > 0.10 (n.s.)
Stability attribution (7-point scale): Mechanical robot = 6.25 (S.D. = 0.93); Analytical robot = 5.94 (S.D. = 1.02); Human = 5.43 (S.D. = 1.10); F-Score = 11.50, p < 0.01 (***)

Note(s): (***) significant at 99%; (*) significant at 90%; (n.s.) non-significant
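
The overall effects in this table are one-way ANOVAs across the three agent conditions. As a companion to the sketch above, the following illustrates that test on simulated placeholder data (group means come from the stability row; the group size of 60 is a hypothetical assumption).

```python
# Sketch of a one-way ANOVA across the three agent conditions on
# SIMULATED placeholder data; n = 60 per group is hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
mechanical = rng.normal(6.25, 0.93, size=60)  # stability: mechanical robot
analytical = rng.normal(5.94, 1.02, size=60)  # stability: analytical robot
human = rng.normal(5.43, 1.10, size=60)       # stability: human

# Overall F-test of mean differences across the three groups
f_score, p_value = stats.f_oneway(mechanical, analytical, human)
print(f"F = {f_score:.2f}, p = {p_value:.4f}")
```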

Notes

1. The copyrighted scale was used after obtaining written permission from the authors.

2. To facilitate the presentation and interpretation of results, these scales were later reversed, such that higher values indicate higher levels of firm/agent responsibility.

Appendix 1 Study 1 scenario

[General description]

You are traveling on an important business trip. You arrive at the hotel and go to the front desk to check-in. At the front desk, a frontline [employee/robot] will serve you. You put your bags down at the counter right in front of the [employee/robot] who is working on the terminal.

[Successful outcome]

The frontline [employee/robot] acknowledges your presence immediately by saying, “Hello, how can I help you?” You show your booking confirmation. After a quick information verification process, the frontline [employee/robot] informs you that your room is ready, gives you the card to access your room, and shows you the way to the elevator. When you get to your room, you find that it is exactly the type of room that you booked.

[Failure outcome]

You wait patiently for half a minute. The frontline [employee/robot] still has not acknowledged your presence. You then say “excuse me” to get the [employee/robot]'s attention. However, the [employee/robot] does not respond immediately but only after some lag. After waiting a long time for the information verification process to complete, the [employee/robot] informs you that your room is ready, gives you the card to access your room, and shows you the way to the elevator. When you get to your room, you find that the card does not open the door. The room is occupied by another guest.

Appendix 2 Study 2 scenario

[General description]

Consider a real and well-known mid-range restaurant in your city. You decide to go there to have dinner with two friends.

[Failure outcome]

When you are at the restaurant, you notice that you are going to be served by a [waiter/robot].

You wait patiently for a minute, but the [waiter/robot] still has not acknowledged your presence. You then say “excuse me” to get the [waiter's/robot's] attention. However, [he/the robot] does not respond immediately but only after a short lag. When placing your order, you ask about the Italian wines served at the restaurant, but the [waiter/robot] suggests a popular German Riesling. Upon delivering the food to your table, the [waiter/robot] forgets your burger and switches the meals ordered by your friends. Finally, the [waiter/robot] does not say goodbye when you leave the restaurant.

[Mechanical robot]

The next day you read an article in the local newspaper about the new robot-waiter. The manager of the restaurant explains in an interview that the robot's behavior is based on relatively simple technology, that is, basic fixed pre-programmed scripts that ensure precise, mechanical repetition of its service actions.

[Analytical robot]

The next day you read an article in the local newspaper about the new robot-waiter. The manager of the restaurant explains in an interview that the robot’s behavior is based on sophisticated artificial intelligence that analyzes service encounters logically and rationally, such that service actions can be learned and adapted over time.

References

Agapi, A. (2017), “Customers' reactions to self-service technology failure: attributions of blame and coping strategies”, Master's thesis, Aalto University.

Albrecht, C.M., Hattula, S., Bornemann, T. and Hoyer, W.D. (2016), “Customer response to interactional service experience: the role of interaction environment”, Journal of Service Management, Vol. 27 No. 5, pp. 704-729.

Bagozzi, R.P., Belanche, D., Casaló, L.V. and Flavián, C. (2016), “The role of anticipated emotions on purchase intentions”, Psychology and Marketing, Vol. 33 No. 8, pp. 629-645.

Beblavý, M., Mýtna Kureková, L. and Haita, C. (2016), “The surprisingly exclusive nature of medium-and low-skilled jobs: evidence from a Slovak job portal”, Personnel Review, Vol. 45 No. 2, pp. 255-273.

Belanche, D., Casaló, L.V. and Flavián, C. (2019), “Artificial Intelligence in FinTech: understanding robo-advisors adoption among customers”, Industrial Management and Data Systems, Vol. 119 No. 7, pp. 1411-1430.

Belanche, D., Casaló, L.V., Flavián, C. and Schepers, J. (2020), “Service robot implementation: a theoretical framework and research agenda”, The Service Industries Journal, Vol. 40 Nos 3-4, pp. 203-225.

Bitner, M.J., Booms, B.H. and Tetreault, M.S. (1990), “The service encounter: diagnosing favorable and unfavorable incidents”, Journal of Marketing, Vol. 54 No. 1, pp. 71-84.

Bitner, M.J., Brown, S.W. and Meuter, M.L. (2000), “Technology infusion in service encounters”, Journal of the Academy of Marketing Science, Vol. 28 No. 1, pp. 138-149.

Bitner, M.J. (1990), “Evaluating service encounters: the effects of physical surroundings and employee responses”, Journal of Marketing, Vol. 54 No. 2, pp. 69-82.

Bolton, R.N., McColl-Kennedy, J.R., Cheung, L., Gallan, A., Orsingher, C., Witell, L. and Zaki, M. (2018), “Customer experience challenges: bringing together digital, physical and social realms”, Journal of Service Management, Vol. 29 No. 5, pp. 776-808.

Broadbent, E., Stafford, R. and MacDonald, B. (2009), “Acceptance of healthcare robots for the older population: review and future directions”, International Journal of Social Robotics, Vol. 1 No. 4, pp. 319-330.

Byford, S. (2015), “My bank is now staffed by a helpful robot”, The Verge, 21 April 2015, available at: https://www.theverge.com/2015/4/21/8460841/nao-robot-ufj-bank-japan (accessed 29 May 2019).

Carruthers, P. and Smith, P.K. (1996), Theories of Theories of Mind, Cambridge University Press, Cambridge.

Casado-Diaz, A.B. and Más-Ruíz, F.J. (2002), “The consumer's reaction to delays in service”, International Journal of Service Industry Management, Vol. 13 No. 2, pp. 118-140.

Castelo, N. (2019), “Blurring the line between human and machine: marketing artificial intelligence”, (Doctoral dissertation), Columbia University, available at: https://academiccommons.columbia.edu/doi/10.7916/d8-k7vk-0s40 (accessed 7 November 2019).

Chan, A.P.H. and Tung, V.W.S. (2019), “Examining the effects of robotic service on brand experience: the moderating role of hotel segment”, Journal of Travel and Tourism Marketing, Vol. 36 No. 4, pp. 458-468.

Choi, S. and Mattila, A.S. (2008), “Perceived controllability and service expectations: influences on customer reactions following service failure”, Journal of Business Research, Vol. 61 No. 1, pp. 24-30.

Clark, H.H. (1992), Arenas of Language Use, University of Chicago Press, Chicago.

Coffee, P. and Rees, T. (2008), “The CSGU: a measure of controllability, stability, globality, and universality attributions”, Journal of Sport and Exercise Psychology, Vol. 30, pp. 611-641.

Colby, C.L. and Parasuraman, A. (2016), “Service robotics: how ready are consumers to adopt and what drives acceptance?”, In 25th Annual Frontiers in Services Conference, June 24, 2016.

Cortiñas, M., Chocarro, R. and Elorz, M. (2019), “Omni-channel users and omni-channel customers: a segmentation analysis using distribution services”, Spanish Journal of Marketing-ESIC, Vol. 23 No. 3, pp. 415-436.

Dabholkar, P.A. and Spaid, B.I. (2012), “Service failure and recovery in using technology-based self-service: effects on user attributions and satisfaction”, The Service Industries Journal, Vol. 32 No. 9, pp. 1415-1432.

De Keyser, A., Köcher, S., Alkire, L., Verbeeck, C. and Kandampully, J. (2019), “Frontline Service Technology infusion: conceptual archetypes and future research directions”, Journal of Service Management, Vol. 30 No. 1, pp. 156-183.

Gelbrich, K. (2010), “Anger, frustration, and helplessness after service failure: coping strategies and effective informational support”, Journal of the Academy of Marketing Science, Vol. 38 No. 5, pp. 567-585.

Gernigon, C. and Delloye, J. (2003), “Self-efficacy, causal attribution, and track athletic performance following unexpected success or failure among elite sprinters”, The Sport Psychologist, Vol. 17, pp. 55-76.

Hays, W.L. (1963), Statistics for Psychologists, Holt Rinehart and Winston, New York, NY.

Heater, B. (2019), “Robot army”, available at: https://techcrunch.com/2019/03/17/these-are-the-robots-that-help-you-get-your-amazon-packages-on-time/?guccounter=1&guce_referrer_us=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_cs=ft8d2ayRZ8w8HRWLKrmCWg (accessed 27 March 2019).

Heider, F. (1958), The Psychology of Interpersonal Relations, Wiley, New York.

Hellström, T. (2013), “On the moral responsibility of military robots”, Ethics and Information Technology, Vol. 15 No. 2, pp. 99-107.

Hennig-Thurau, T. (2004), “Customer orientation of service employees: its impact on customer satisfaction, commitment, and retention”, International Journal of Service Industry Management, Vol. 15 No. 5, pp. 460-478.

Hess, R.L. Jr, Ganesan, S. and Klein, N.M. (2007), “Interactional service failures in a pseudorelationship: the role of organizational attributions”, Journal of Retailing, Vol. 83 No. 1, pp. 79-95.

Huang, M.H. and Rust, R.T. (2018), “Artificial intelligence in service”, Journal of Service Research, Vol. 21 No. 2, pp. 155-172.

Iglesias, V. (2009), “The attribution of service failures: effects on consumer satisfaction”, Service Industries Journal, Vol. 29 No. 2, pp. 127-141.

Ji, M. (2017), “Are robots good fiduciaries? Regulating robo-advisors under the Investment Advisers Act of 1940”, Columbia Law Review, Vol. 117, pp. 1543-1583.

Johnson, D.G. (2015), “Technology with no human responsibility?”, Journal of Business Ethics, Vol. 127 No. 4, pp. 707-715.

Joyeux, S. and Albiez, J. (2011), “Robot development: from components to systems”, 6th National Conference on Control Architectures of Robots, INRIA Grenoble Rhône-Alpes, May 2011, Grenoble, France, 15 p., inria-00599679.

Kaipainen, K., Ahtinen, A. and Hiltunen, A. (2018), “Nice surprise, more present than a machine: experiences evoked by a social robot for guidance and edutainment at a city service point”, in Proceedings of the 22nd International Academic Mindtrek Conference (Mindtrek '18). ACM, New York, NY, USA, pp. 163-171, available at: https://doi.org/10.1145/3275116.3275137 (accessed 7 November 2019).

Kelley, H.H. (1973), “The process of causal attribution”, American Psychologist, Vol. 28 No. 1, pp. 107-128.

Kim, Y.S.K. and Smith, A.K. (2005), “Crime and punishment: examining customers' responses to service organizations' penalties”, Journal of Service Research, Vol. 8 No. 2, pp. 162-180.

King, E.B., Shapiro, J.R., Hebl, M.R., Singletary, S.L. and Turner, S. (2006), “The stigma of obesity in customer service: a mechanism for remediation and bottom-line consequences of interpersonal discrimination”, Journal of Applied Psychology, Vol. 91 No. 3, pp. 579-593.

Kirk, R.E. (2007), “Effect magnitude: a different focus”, Journal of Statistical Planning and Inference, Vol. 137 No. 5, pp. 1634-1646.

Kong, M. and Jogaratnam, G. (2007), “The influence of culture on perceptions of service employee behavior”, Managing Service Quality: An International Journal, Vol. 17 No. 3, pp. 275-297.

Krämer, N.C., von der Pütten, A. and Eimler, S. (2012), “Human-agent and human-robot interaction theory: similarities to and differences from human-human interaction”, in Human-Computer Interaction: The Agency Perspective, Springer, Berlin, Heidelberg, pp. 215-240.

Kumar, V., Rajan, B., Gupta, S. and Dalla Pozza, I. (2019), “Customer engagement in service”, Journal of the Academy of Marketing Science, Vol. 47 No. 1, pp. 138-160.

Kunz, W.H., Heinonen, K. and Lemmink, J.G.A.M. (2019), “Future service technologies – is service research on track with business reality?”, Journal of Services Marketing, Vol. 33, No. 4, pp. 479-487.

Kuo, C.M., Chen, L.C. and Tseng, C.Y. (2017), “Investigating an innovative service with hospitality robots”, International Journal of Contemporary Hospitality Management, Vol. 29 No. 5, pp. 1305-1321.

Larivière, B., Bowen, D., Andreassen, T.W., Kunz, W., Sirianni, N.J., Voss, C., Wünderlich, N. and De Keyser, A. (2017), “‘Service Encounter 2.0’: an investigation into the roles of technology, employees and customers”, Journal of Business Research, Vol. 79, pp. 238-246.

Liao, H. and Chuang, A. (2004), “A multilevel investigation of factors influencing employee service performance and customer outcomes”, Academy of Management Journal, Vol. 47 No. 1, pp. 41-58.

Lovelock, C. and Gummesson, E. (2004), “Whither services marketing? In search of a new paradigm and fresh perspectives”, Journal of Service Research, Vol. 7 No. 1, pp. 20-41.

Marinova, D., de Ruyter, K., Huang, M.H., Meuter, M.L. and Challagalla, G. (2017), “Getting smart: learning from technology-empowered frontline interactions”, Journal of Service Research, Vol. 20 No. 1, pp. 29-42.

Matthias, A. (2004), “The responsibility gap: ascribing responsibility for the actions of learning automata”, Ethics and Information Technology, Vol. 6 No. 3, pp. 175-183.

Mende, M., Scott, M.L., van Doorn, J., Grewal, D. and Shanks, I. (2019), “Service robots rising: how humanoid robots influence service experiences and elicit compensatory consumer responses”, Journal of Marketing Research, Vol. 56 No. 4, pp. 535-556.

Meuter, M.L., Ostrom, A.L., Roundtree, R.I. and Bitner, M.J. (2000), “Self-service technologies: understanding customer satisfaction with technology-based service encounters”, Journal of Marketing, Vol. 64 No. 3, pp. 50-64.

Nguyen, R. (2016), “Restaurants in China are replacing waiters with robots”, Business Insider, 26 July 2016, available at: https://www.businessinsider.com/chinese-restaurant-robot-waiters-2016-7?international=true&r=US&IR=T (accessed 29 May 2019).

Nijssen, E.J., Schepers, J. and Belanche, D. (2016), “Why did they do it? How customers' self-service technology introduction attributions affect the customer-provider relationship”, Journal of Service Management, Vol. 27 No. 3, pp. 276-298.

Oliver, R.L. and DeSarbo, W.S. (1988), “Response determinants in satisfaction judgments”, Journal of Consumer Research, Vol. 14 No. 4, pp. 495-507.

Palmatier, R.W., Scheer, L.K. and Steenkamp, J.B.E.M. (2007), “Customer loyalty to whom? Managing the benefits and risks of salesperson-owned loyalty”, Journal of Marketing Research, Vol. 44, May, pp. 185-199.

Parasuraman, A. and Colby, C.L. (2015), “An updated and streamlined technology readiness index: TRI 2.0”, Journal of Service Research, Vol. 18 No. 1, pp. 59-74.

Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985), “A conceptual model of service quality and its implications for future research”, Journal of Marketing, Vol. 49 No. 4, pp. 41-50.

Rafaeli, A., Altman, D., Gremler, D.D., Huang, M.-H., Grewal, D., Iyer, B., Parasuraman, A. and de Ruyter, K. (2017), “The future of frontline research: invited commentaries”, Journal of Service Research, Vol. 20 No. 1, pp. 91-99.

Rahman, R. (2017), “First robot restaurant launched in Dhaka”, Dhaka Tribune, 15 November 2017, available at: https://www.dhakatribune.com/bangladesh/dhaka/2017/11/15/first-robot-restaurant-dhaka (accessed 7 November 2019).

Reinders, M., Frambach, R. and Kleijnen, M. (2015), “Mandatory use of technology-based self-service: does expertise help or hurt?”, European Journal of Marketing, Vol. 49 Nos 1/2, pp. 190-211.

Russell, D. (1982), “The Causal Dimension Scale: a measure of how individuals perceive causes”, Journal of Personality and Social Psychology, Vol. 42 No. 6, pp. 1137-1145.

Seltman, H.J. (2012), Experimental Design and Analysis, Carnegie Mellon University, Pittsburgh, PA, available at: http://www.stat.cmu.edu/∼hseltman/309/Book/Book.pdf (accessed 7 November 2019).

Smith, A.K., Bolton, R.N. and Wagner, J. (1999), “A model of customer satisfaction with service encounters involving failure and recovery”, Journal of Marketing Research, Vol. 36 No. 3, pp. 356-372.

Specht, N., Fichtel, S. and Meyer, A. (2007), “Perception and attribution of employees' effort and abilities: the impact on customer encounter satisfaction”, International Journal of Service Industry Management, Vol. 18 No. 5, pp. 534-554.

Statista (2019), “Robo-advisors worldwide report”, available at: https://www.statista.com/outlook/337/100/robo-advisors/worldwide (accessed 7 November 2019).

Sternberg, R.J. (2005), “The theory of successful intelligence”, Interamerican Journal of Psychology, Vol. 39 No. 2, pp. 189-202.

Swanson, S.R. and Davis, C.J. (2003), “The relationship of differential loci with perceived quality and behavioral intentions”, Journal of Services Marketing, Vol. 17 No. 2, pp. 202-219.

Swanson, S.R. and Kelley, S.W. (2001), “Service recovery attributions and word-of-mouth intentions”, European Journal of Marketing, Vol. 35 Nos 1/2, pp. 194-211.

Trampe, D., Konuş, U. and Verhoef, P.C. (2014), “Customer responses to channel migration strategies toward the E-channel”, Journal of Interactive Marketing, Vol. 28 No. 4, pp. 257-270.

Tsarenko, Y., Strizhakova, Y. and Otnes, C.C. (2019), “Reclaiming the future: understanding customer forgiveness of service transgressions”, Journal of Service Research, Vol. 22 No. 2, pp. 139-155.

Tsiros, M., Mittal, V. and Ross, W.T. Jr (2004), “The role of attributions in customer satisfaction: a reexamination”, Journal of Consumer Research, Vol. 31 No. 2, pp. 476-483.

Tussyadiah, I.P. and Park, S. (2018), “Consumer evaluation of hotel service robots”, in Stangl, B. and Pesonen, J. (Eds), Information and Communication Technologies in Tourism, Springer, pp. 308-320.

Tussyadiah, I.P., Zach, F.J. and Wang, J. (2019), “Do travelers trust intelligent service robots?”, Annals of Tourism Research, Vol. 81 No. C, 102886.

Van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D. and Petersen, J.A. (2017), “Domo Arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers' service experiences”, Journal of Service Research, Vol. 20 No. 1, pp. 43-58.

Van Vaerenbergh, Y., Orsingher, C., Vermeir, I. and Larivière, B. (2014), “A meta-analysis of relationships linking service failure attributions to customer outcomes”, Journal of Service Research, Vol. 17 No. 4, pp. 381-398.

Walters, M.L., Syrdal, D.S., Dautenhahn, K., Te Boekhorst, R. and Koay, K.L. (2008), “Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion”, Autonomous Robots, Vol. 24 No. 2, pp. 159-178.

Weiner, B. (1979), “A theory of motivation for some classroom experiences”, Journal of Educational Psychology, Vol. 71 No. 1, pp. 3-25.

Weiner, B. (1986), An Attributional Theory of Motivation and Emotion, Springer, New York.

Weiner, B. (2000), “Attributional thoughts about consumer behavior”, Journal of Consumer Research, Vol. 27 No. 3, pp. 382-387.

Wilder, K.M., Collier, J.E. and Barnes, D.C. (2014), “Tailoring to customers' needs: understanding how to promote an adaptive service experience with frontline employees”, Journal of Service Research, Vol. 17 No. 4, pp. 446-459.

Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2018), “Brave new world: service robots in the frontline”, Journal of Service Management, Vol. 29 No. 5, pp. 907-931.

Zhu, Z., Nakata, C., Sivakumar, K. and Grewal, D. (2013), “Fix it or leave it? Customer recovery from self-service technology failures”, Journal of Retailing, Vol. 89 No. 1, pp. 15-29.

Acknowledgements

This research was supported by the European Social Fund and the Government of Aragon (LMP65_18; Research Group “METODO” S20_17R).

Corresponding author

Jeroen Schepers can be contacted at: J.J.L.Schepers@tue.nl
