To serve and protect: a typology of service robots and their role in physically safe services

Jeroen Schepers (Department of Industrial Engineering, Eindhoven Artificial Intelligence Systems Institute (EAISI), Eindhoven University of Technology, Eindhoven, The Netherlands)
Sandra Streukens (Department of Marketing and Strategy, Hasselt University, Diepenbeek, Belgium)

Journal of Service Management

ISSN: 1757-5818

Article publication date: 25 January 2022

Issue publication date: 28 February 2022

Abstract

Purpose

Although consumers feel that the move toward service robots in the frontline has so far been driven by firms' striving to replace human service agents and realize cost savings accordingly, the COVID-19 pandemic has led customers to appreciate frontline robots' ability to provide services in ways that keep them safe and protected from the virus. Still, research on this topic is scant. This article offers guidance by providing a theoretical backdrop for the safety perspective on service robots, as well as outlining a typology that researchers and practitioners can use to further advance this field.

Design/methodology/approach

A typology is developed based on a combined theory- and practice-driven approach. Starting from the type of behavior performed by the service robot, the typology synthesizes three different service robot roles from past literature and proposes three new safety-related role extensions. These safety-related roles are derived from a search for examples of how service robots have been used in practice during the COVID-19 pandemic.

Findings

The typology's roles are corroborated by discussing relevant robot implementations around the globe. Together, the six roles give rise to several ideas that jointly constitute a future research agenda.

Originality/value

This manuscript is among the first to provide in-depth attention to the phenomenon of service customers' physical safety needs in the age of service robots. In doing so, it discusses and ties together theories and concepts from different fields, such as hierarchy of needs theory, evolutionary human motives theory, perceived risk theory, regulatory focus theory, job demands–resources theory, and the theory of artificial intelligence job replacement.

Citation

Schepers, J. and Streukens, S. (2022), "To serve and protect: a typology of service robots and their role in physically safe services", Journal of Service Management, Vol. 33 No. 2, pp. 197-209. https://doi.org/10.1108/JOSM-11-2021-0409

Publisher

Emerald Publishing Limited

Copyright © 2022, Jeroen Schepers and Sandra Streukens

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.


Introduction

Service robots are autonomous and adaptable interfaces that interact with and deliver service to an organization's customers (Wirtz et al., 2018). Evidently, service robots may bring many advantages to service firms, such as more consistent service quality, higher service delivery capacity and lower operating costs compared to human employees. Because these advantages only materialize when customers accept these new frontline agents, much academic research effort has been devoted to investigating customer responses to robots, such as adoption (Wirtz et al., 2018) or service evaluation (McLeay et al., 2020; Yoganathan et al., 2021). These works have made clear that individuals compare robot qualities to human employee qualities, and that many customers feel the move toward service robots in the frontline has so far been driven by firms' striving to replace human service agents and realize cost savings accordingly (Belanche et al., 2021a).

However, as the COVID-19 pandemic unfolded, it became clear that customers may have nuanced this view somewhat, or at least started to appreciate another quality in robots: the ability to provide services in ways that keep customers safe and protected from the virus. For example, service robots are employed to draw customers' attention to safety rules. In shopping malls around the world, Pepper detects whether visitors are wearing a face mask, for their own and others' safety, and if not politely reminds them to put one on. In addition, in several Chinese hospitals, service robot Ari measures COVID-19 patients' temperature using a thermal camera in its head and interacts with patients to reduce their feelings of social isolation. Robots also provide access to services that were limited or unavailable because of government restrictions, or that otherwise would have been left unconsumed by individuals concerned with their personal safety.

In contrast to the ever-growing instances of pandemic-related use of service robots, academic work on the association between service robots and consumers' safety perceptions is still rather limited. Bove and Benoit (2020) suggest that service robots may act as a safety signal: an indicator of otherwise hidden qualities, which consumers can interpret and act upon to reduce their feelings of uncertainty. Service robots are not the core focus of their work, though, and are merely mentioned as one of many potential safety signals that service providers can employ, alongside actions such as staff protective shields or one-way traffic flows. Henkel et al. (2020) discuss a typology that outlines how robotic transformative service may help to counter social isolation, some of it caused by the pandemic. Finally, Kim et al. (2021) empirically demonstrate that consumers have a more positive attitude toward robot-staffed (vs human-staffed) hotels during the COVID-19 pandemic because of a heightened perceived health threat and their concern for personal safety.

There are at least two important arguments to further advance these works and more deeply understand the relationship between service robots and customer safety perceptions through an academic lens. First, some changes in consumers' safety norms and preferences in service interactions are likely to outlive the COVID-19 pandemic (Hazée and Van Vaerenbergh, 2020). Second, the continuous upswings in terms of COVID-19 cases worldwide and the likelihood of other pandemics in the future reiterate or even further strengthen consumers' safety concerns.

In this light, the current paper offers guidance to future research efforts in the following ways. First, we provide a multifaceted theoretical backdrop for the safety perspective on service robots. Second, building on recent advances in the service technology literature in combination with real-life implementations of service robots, we propose a typology of how service robots in the frontline can be used to optimize service safety. Third, using the proposed typology as a point of departure, we conclude this paper by outlining several directions for future research to advance our knowledge on how to enhance service safety through the effective and strategic use of frontline service technology.

Theories and concepts relevant to service robots and safety

In general, safety reflects a state of being protected against different sorts of damages (Conci et al., 2009). In their recent article, Berry et al. (2020) further detail this general perspective by delineating three types of safety in services: emotional safety, financial safety and physical safety. The first reflects being protected from mental health issues arising from pandemic-related developments (e.g. the loss of human connections), while financial safety indicates being protected from economic stress (e.g. income losses). We concentrate on the element of physical safety, that is: being protected from viral transmission during the service encounter.

Several theories are relevant to understanding the importance of service safety. Table 1 summarizes and links key theories to the use of service robots to enhance physical safety. The theories and their applicability to the service safety concept are described in more detail below and in the section on future research directions.

At a fundamental level, the importance of service safety is in line with Maslow's (1943) classical hierarchy of needs theory, which holds that satisfying one's need for safety is a precondition for individuals to attend to needs higher up the hierarchy, such as belonging, esteem, and self-actualization. Through addressing the innate human safety need in a pandemic, service robots seem to have become a typical example of a technology "double-boom cycle," where development and adoption are initially associated with technology-push and only later with a market-pull mechanism (Schmoch, 2007).

As another perspective, evolutionary human motives theory outlines the deep-seated motive of disease avoidance (Griskevicius and Kenrick, 2013) to explain that in situations such as pandemics, consumers behave in ways designed to thwart viral transmission, such as wearing face masks, washing hands and becoming more socially avoidant (Fleischman et al., 2011). Indeed, the COVID-19 pandemic heightened customers' contamination concerns because of the inherent danger of the proximity of other people in the servicescape. Consumers also realize that objects may have been in physical contact with someone else and could have been soiled through the transfer of germs or residue (Nemeroff and Rozin, 1994). Given robots' safety qualities, it is clear that the adoption of service robots can be added to consumers' palette of actions to thwart viral transmission.

Apart from the innate human motives outlined by, among others, Griskevicius and Kenrick (2013) and Maslow (1943), several streams in the literature provide a more elaborate cognitive explanation of consumers' openness to service robots following the COVID-19 pandemic. For instance, perceived risk theory focuses on the uncertainty and adverse consequences of buying a product or consuming a service. Jacoby and Kaplan (1972) introduce financial, performance, psychological, social and physical risk, where the latter entails the chance that an unfamiliar product or service may be harmful or injurious to one's health. Dowling and Staelin (1994) hold that consumers compare situation-specific risk to their level of acceptable risk, which shapes their subsequent decision-making process. This process may include strategies such as searching for more information to fine-tune risk assessments. Spence's (1974) signaling theory further substantiates the function of such pieces of information. According to this theory, individuals search for observable signs (i.e. signals) that give them information about expected outcomes (e.g. is the service provider concerned about my health?). The use of explicit safety signals, performed by a service robot, can thus potentially reduce consumer uncertainty in terms of health-related fears (Bente et al., 2012; Bove and Benoit, 2020). Other strategies to mitigate risk in the consumer decision process include permanently or temporarily stopping or reducing consumption (e.g. not going to the store or visiting less frequently) or making alternative consumption choices (Yeung and Morris, 2001). Using service robots rather than engaging with human employees can also be regarded as an adaptation of the consumption pattern, as it reduces contamination probability and, thus, risk perceptions.
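
To make the risk comparison concrete, the sketch below encodes Dowling and Staelin's (1994) idea of weighing situation-specific risk against acceptable risk as a toy decision rule. It is a minimal illustration under our own assumptions; the numeric scales, thresholds, and strategy labels are hypothetical and not proposed by these authors.

```python
def choose_risk_handling_strategy(situation_risk: float, acceptable_risk: float) -> str:
    """Toy decision rule: compare situation-specific risk to the consumer's
    acceptable risk level (cf. Dowling and Staelin, 1994) and return a
    risk-handling strategy. Scales (0-1) and strategy labels are hypothetical."""
    if situation_risk <= acceptable_risk:
        return "consume the service as usual"
    # Risk exceeds the acceptable level: adapt the consumption pattern.
    gap = situation_risk - acceptable_risk
    if gap < 0.2:
        return "search for more information (e.g. look for safety signals)"
    if gap < 0.5:
        return "switch to a robot-delivered alternative to reduce contamination risk"
    return "reduce or stop consumption (e.g. avoid the servicescape)"


# Example: a consumer who perceives high situational risk (0.8) but only
# tolerates moderate risk (0.4) opts for the robot-delivered alternative.
print(choose_risk_handling_strategy(0.8, 0.4))
```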

In an online shopping context, Van Noort et al. (2008) demonstrate that safety cues (e.g. guarantees, warranties, a transparent privacy policy) lower levels of risk perception and even increase attitude and loyalty toward the retailer, but only for individuals characterized by prevention-focused self-regulation. When we extrapolate these insights to the focus of our current work, we posit that when consumers are concerned with negative outcomes and with safety and responsibility, such as in times of a health crisis, service robots become an effective mechanism to enhance service provider perceptions. Building on regulatory focus theory, we posit that when consumers' regulatory system shifts to a promotion focus (see Higgins et al., 1994), safety cues such as service robots become less effective in enhancing provider perceptions. Service providers that dare to be bold and innovative during tough economic and societal times may thus be rewarded with consumer goodwill and patronage.

Another important perspective may come from studies on physical safety in the workplace. Physical safety is concerned with actions or behaviors that individuals exhibit to promote their own health and that of coworkers (Burke et al., 2002). In this research stream, the work of Nahrgang et al. (2011) builds on the job demands–resources (JD-R) model (Bakker and Demerouti, 2007) to relate physical risks and hazards to human behavior in the workplace. Physical threats represent job demands, which are aspects of the job that require sustained effort or skills to deal with. The constant awareness of and attention to physical risks and hazards deplete employees' energy and may cause health problems. However, the organizational context (e.g. leaders, peers) may be supportive by offering advice and assistance in safety practices, as well as by rewarding and celebrating safety successes such as achieving zero accidents at work (Guo et al., 2016). This supportive context thus serves as a job resource to employees.

Translating the above insights to a consumer context, it is clear that pandemic-related safety concerns strain consumers because of the constant fear of contamination and the (self-)imposed adaptive measures (e.g. face masks, washing hands, social distancing) when engaging in service encounters. The prolonged duration of such a situation impairs consumers' health and, from a business perspective, leads to undesired consumer behavior: withdrawal from service transactions. However, service robots are a resource to customers because they alleviate the concern for contamination and enable service encounters with fewer limitations. For instance, consumers may not have to wear a face mask and can come closer to a robot than to a human employee. Interestingly, robots – and service technology in general – may so far have been regarded as a "service demand" by consumers, but the pandemic has likely altered this view such that robots are now seen as a "service resource." In other words, where service robots were regarded as a depersonification of the service interface that needed considerable "getting used to," they now address the need for safety and enable the achievement of consumers' (service) goals.

Toward a typology of safety-related robot roles

Given the importance of customer safety, an important topic for service researchers and practitioners is to better understand how service robots can be used to enhance customers' safety perceptions. To this end, we propose a typology based on a combination of a theory-driven and a practice-driven approach (cf. Story et al., 2020). Drawing on recent frontline service technology literature and the need for physical safety as a result of the current pandemic, we propose that service robots can fulfill six different roles: three roles that synthesize previous work on service robots, and three safety-related extensions of those roles. All roles are summarized in Table 2. To illustrate these different roles, the remainder of this section uses exemplar vignettes from business practice. These vignettes stem from a search for examples of how service robots are used in practice during the COVID-19 pandemic across different sources, such as recent Internet articles (e.g. https://www.koreatimes.co.kr), websites of robot developers and manufacturers (e.g. https://www.softbankrobotics.com/emea/en) and/or forums (e.g. https://www.covid19robots.org) [1].

Service robot roles

Past works in the service robot domain hold that customers expect robots to perform well in the service delivery and in the service process (Fernandes and Oliveira, 2021; Lu et al., 2020). The former means that robots should have the functional quality of being competent to provide the core service. The latter means that customers value robots' social–emotional qualities such as warmth (Belanche et al., 2021b), but also information-related qualities. For instance, customers like to stay informed on the steps of automated service processes to more clearly perceive their own role (Meuter et al., 2005). They also like to be informed when they will be served by a robot and when a human will take over (Mozafari et al., 2021).

As such, seminal works by Wirtz et al. (2018) and Huang and Rust (2018) hold that service robots can be implemented to fulfill functional, information-sharing, and/or social–emotional roles. A functional role focuses on delivering the task-oriented parts of the service (e.g. hotel housekeeping). This type of behavior is closely related to one of the concepts in Huang and Rust's (2018) theory of artificial intelligence (AI) job replacement. Specifically, these authors refer to mechanical AI as the ability to automatically perform routine, repeated tasks. Imbued with this particular intelligence, service robots are used to perform high-frequency tasks that require only simple cognitive-analytical skills. In general, service robots outperform humans on these behaviors, and it is expected that customer adoption of service robots performing simple tasks will be quick and smooth (Wirtz et al., 2018). In Table 2, this role is labeled functional server. Functional server examples are plentiful and span a wide range of service industries. A number of hospitals in Thailand employ a robot called Pinto to help reduce staff workload and increase service efficiency by carrying out tasks such as delivering food and medicines to patients. The Dadawan restaurant in the Netherlands uses multiple service robots to take over waiter tasks such as showing guests to their table, serving food, and cleaning up tables after dinner. And at Pittsburgh Airport, service robots are used for cleaning tasks.

The information-sharing role, described in Table 2 as service assistant, relates to sharing or gathering customers' information to serve their needs more effectively (Gremler and Gwinner, 2008). Examples of this role include giving advice, answering questions and sharing knowledge. Consistent with the idea of thinking AI (Huang and Rust, 2018), the service robot needs to be able to learn and adapt during the interaction with the customer. A service robot's information-sharing tasks may range from analytical (e.g. answering simple, standard questions) to intuitive (e.g. solving customer problems). Although service robots are capable of performing both types of information-sharing behavior, the intuitive tasks require more advanced technology. Illustrative examples of the robot as a service assistant include the LoweBot, which helps customers find the goods they are looking for in Lowe's retail stores in the San Francisco Bay Area. More specifically, customers can ask the LoweBot simple questions, and the LoweBot then informs them about the appropriate shelf location of the particular goods. In a similar vein, the Jan Portaels hospital in Belgium intends to expand its use of service robots by programming Cruzr to inform visitors and help them find their way around the hospital.

The social–emotional role focuses on developing a personal bond with a customer by, for instance, using humor, showing empathy, or recognizing a customer (Gremler and Gwinner, 2008). Social–emotional behaviors are essential for maintaining customer relationships in which communication, understanding, and the overall experience are critical. However, Wirtz et al. (2018) point out that effective social interaction between customers and service robots requires that customers' needs and their perceptions of a robot's social skills and performance are aligned. In line with the work of Kaipainen et al. (2018), Belanche et al. (2020) state that, given the current stage of technological development, this may still be a bridge too far. In essence, this would require robots capable of displaying empathetic intelligence (Huang and Rust, 2018). The idea that robots can connect with customers on a social–emotional level is captured by the term social partner in Table 2. Although the full potential of service robots' social–emotional behavior may not yet be realized, real-life applications exist, ranging from robots performing simple and repetitive social tasks to robots engaging in quasi-social interactions. An example of a relatively simple social task is robot Peanut, which is used in the Belgian restaurant XingXing to greet children and play a song on their birthday. The previously mentioned LoweBot not only informs customers about where to find products, but also proactively rolls up to customers, greets them and asks if it can help them. Advinia Healthcare in the United Kingdom offers an example of service robots in the role of social partner involving a more complex quasi-social interaction. In its care homes, Pepper is used to hold simple conversations with residents and learn about their interests, play their favorite music and teach them different languages.

Safety-related role extensions

As the pandemic hit, the three key roles described above were expanded to cater to customers' needs for physical safety. That is, robots are employed in a service setting to fulfill functional, information-sharing, or social–emotional roles with the explicit aim of instilling feelings of safety. These safety-related roles are referred to in Table 2 as safety supervisor, safety informer, and safe social enabler.

As a safety supervisor, the service robot performs functional behaviors that are intended to enhance customers' safety perceptions by checking and supporting customers' compliance with relevant health-related safety measures. Examples of service robots fulfilling the role of safety supervisor are evidenced in business practice around the world. For instance, the Belgian chain of electronics stores AUVA uses robots to detect feverish customers by automatically measuring their body temperature with an infrared camera while simultaneously checking whether customers' face masks are worn correctly. In a similar vein, Incheon Airport in South Korea uses robots to measure passengers' body temperature and to dispense hand sanitizer. Another example involves the use of the Robovie robot in a Japanese retail store to check whether customers keep enough distance when standing in line.

The safety informer role involves service robots that encourage good health practices by informing customers about the safety measures applicable in the service environment. An example of a safety-informing robot is LISA, which travels around Thailand's Central World shopping mall to direct people to the nearest bathroom and remind them to wear their mandatory face masks. Likewise, German supermarket chain Edeka uses Pepper to remind customers about protective measures such as keeping sufficient distance, using one-way aisles and wearing face masks. In contrast to the safety supervisor role, the safety informer role is less coercive: the desired behavior is encouraged but not checked or enforced by the service robot.

As a safe social enabler, the service robot mediates and enables nonphysical human-to-human interactions via video calls that feature the display of the other party on a screen. For example, an elderly care home in Belgium uses robot James to help residents stay in touch with friends and family during the pandemic via video chat. The robot thus combats loneliness and alleviates staff shortages in COVID times. As a final example, the Kerala Government Hospital in India mimics face-to-face patient–doctor interaction by using the KARMI-Bot.
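
To make the typology easy to apply, for instance when coding observed robot implementations in empirical work, the sketch below represents the six roles of Table 2 as a simple data structure. This is purely an illustrative coding scheme under our own assumptions; the class and field names are hypothetical and not part of the typology itself.

```python
from dataclasses import dataclass
from enum import Enum


class RoleType(Enum):
    """The three behavior types distinguished in the typology (Table 2)."""
    FUNCTIONAL = "functional"
    INFORMATION_SHARING = "information sharing"
    SOCIAL_EMOTIONAL = "social-emotional"


@dataclass
class RobotRole:
    """One of the six roles: a base role or its safety-related extension."""
    role_type: RoleType
    name: str
    safety_related: bool
    example_behaviors: tuple


# The six roles from Table 2, encoded as data (hypothetical coding scheme).
TYPOLOGY = (
    RobotRole(RoleType.FUNCTIONAL, "functional server", False,
              ("fulfilling orders", "distributing food and medicine", "cleaning")),
    RobotRole(RoleType.FUNCTIONAL, "safety supervisor", True,
              ("checking body temperature", "checking face masks", "dispensing hand sanitizer")),
    RobotRole(RoleType.INFORMATION_SHARING, "service assistant", False,
              ("showing customers around", "answering questions")),
    RobotRole(RoleType.INFORMATION_SHARING, "safety informer", True,
              ("reminding customers of protective measures",)),
    RobotRole(RoleType.SOCIAL_EMOTIONAL, "social partner", False,
              ("greeting or entertaining customers",)),
    RobotRole(RoleType.SOCIAL_EMOTIONAL, "safe social enabler", True,
              ("enabling video-call contact between parties",)),
)


def safety_extension_of(base_role_name: str) -> str:
    """Return the safety-related extension that shares the base role's type."""
    base = next(r for r in TYPOLOGY if r.name == base_role_name)
    ext = next(r for r in TYPOLOGY
               if r.role_type == base.role_type and r.safety_related)
    return ext.name


# Example: safety_extension_of("functional server") returns "safety supervisor".
print(safety_extension_of("functional server"))
```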

Future research and closing thoughts

With our discussion of literature relevant to the topic of robots and consumer safety, and the construction of our typology, we aim to inspire new research in this important domain. Apart from the ideas raised earlier (e.g. studying service robots from a JD-R or regulatory focus perspective), we provide some additional areas for further development in this section. These are summarized in Table 3 and discussed in more detail below.

First, our discussion and typology generally abstract from the concept of anthropomorphism, which reflects the extent to which customers perceive service robots as human-like (Blut et al., 2021). Research has already demonstrated that the human likeness of a robot has intricate relationships with perceived service value (Belanche et al., 2021b), but it remains to be explored how anthropomorphism relates to individuals' perceptions of safety. Perhaps human-like robots make people feel in the presence of another social entity (van Doorn et al., 2017) and thus lead customers to activate cognitive associations with more "traditional" service encounters. This may make thoughts of viral transmission and contamination more salient, leading people to perceive highly anthropomorphic robots as less safe. A related question is whether the robot safety roles in our typology and anthropomorphism combine synergetically or not. In other words, does making a robot more anthropomorphic make people more or less obedient to a safety informer or a safety supervisor? This could be the case, as an anthropomorphic appearance enriches the communication between human and robot to include more subtle, nonverbal cues. Given that the cost of service robot development and implementation increases dramatically with higher levels of anthropomorphism (Hornyak, 2019), companies need to know where to put their money. A question that follows from the previous ones is whether using service robots in a safety role leads to conflicting outcomes or not. Specifically, scholars may consider perceived service quality, perceived value, customer satisfaction, perceived service safety, customer obedience to service rules, or customer sabotage as potential consequences of robot implementation. Although customers may obey the safety instructions of a service robot, they may not be very satisfied with the service process. How can service providers align such potentially incongruent outcomes?

Another interesting avenue to consider is whether the effectiveness of the robots' safety role depends on their anthropomorphism (i.e. as suggested above) or on their intelligence level. As explained earlier, Huang and Rust (2018) specify four levels of intelligence in service tasks (i.e. mechanical, analytical, intuitive, and empathetic intelligence). It is interesting to uncover whether outcomes are achieved through a different mechanism when considering the robot's intelligence level or its anthropomorphism. That is, we suggested that anthropomorphic robots may create feelings of social presence and enable "richer" (i.e. nonverbal) forms of communication. Hence, these aspects may mediate between anthropomorphism and customers' obedience to robot instructions, but also perceived safety, perceived value and other dependent variables of interest. This would extend past work on robot anthropomorphism with a safety perspective, as the dominant focus so far has been on "intention to use" as a dependent variable (Blut et al., 2021). Compared to anthropomorphism, though, how increased intelligence levels of a robot translate to safety-relevant outcomes may be captured by constructs that reflect the mutual understanding of the human and robot counterpart in the exchange. Potential mechanisms could include, but are not limited to, rapport (Gremler and Gwinner, 2008), perceived personalization and adaptability (Chebat and Kollias, 2000), customer orientation (Hennig-Thurau, 2004), and emotional intelligence (Kidwell et al., 2011).

An important question with regard to robots' intelligence levels is: which intelligence level is needed for which safety role? For instance, when Singapore recently trialed patrol robots to police "undesirable" safety behavior such as breaching social-distancing rules, instant concerns were that these machines would be less capable than human officers of taking contextual factors into account (The Guardian, 2021). In terms of our typology, this would mean that although the role of safety supervisor could technically be fulfilled by a robot with mechanical AI, empathetic intelligence may be needed in some cases, for instance, when a consumer refuses to wear a face mask and may become aggressive.

In addition, given the above consideration and the fact that people generally still prefer the human touch for tasks that involve subtle judgments and emotions (Castelo et al., 2019), it is likely that services will increasingly be provided by human–robot teams (e.g. Wirtz et al., 2018). Related to the proposed typology, more research is needed with regard to the design of such services, especially from a safety perspective. For instance, does the optimal combination of human and robot roles change along the customer journey? Rather than a single event, a service is often an experience involving multiple touchpoints. Customers may at first be very concerned about their safety, as people have more fear of being contaminated in the presence of strangers than of friends (Fell, 2021). However, as customers move further into the journey, they may increasingly see the service provider and its frontline as part of their social circle, giving more leeway to human service employees. Yet, this relatively simple logic may be complicated by the fact that customers are heterogeneous and dynamic (Palmatier and Crecelius, 2019). The importance of physical safety varies widely across the population and may change over time for every individual; just think about experiencing a virus-related death of a friend or family member. Although very advanced data techniques are available to incorporate (changing) customer characteristics into marketing decisions, there is a great need for actionable insights on how these can be used to design customer experiences with an optimal degree of technology infusion in terms of satisfaction and safety at a macro level.

Finally, it is important to consider the employee perspective. In terms of employee safety, an interesting paradox exists between an increase in physical safety (i.e. less health-related risk due to less customer contact) and a decrease in job safety (i.e. fear of permanent job loss because of technological replacement) due to the use of service robots (cf. Berry et al., 2020). An interesting research question, therefore, is to understand the net effect of these opposing safety perceptions on employees' attitude toward service robots taking over particular facets of their jobs. Another interesting avenue would be to extend the consideration of physical safety to include psychological safety. In the organizational sciences, psychological safety refers to the belief that the workplace is safe for interpersonal risk taking (Frazier et al., 2017). Edmondson and colleagues (e.g. Edmondson, 1999; Edmondson et al., 2001) illustrated that the successful implementation of a new technology by top-tier cardiac surgery teams across hospitals was highly dependent on their ability to make employees feel safe to speak up, collaborate, experiment, and learn from failures. Hence, the successful introduction of organizational innovations depends on employees' feedback on and tweaks of these new ways of working. Translating this to the robot-infused frontline, several questions pop up: Does the successful implementation of service robots depend on employees' feelings of psychological safety? Or do employees feel more psychologically safe when their service provider implements a robot with a safety role, rather than one with a traditional service role?

In closing, the topic of service robots and consumer safety is likely to stay relevant as the current pandemic continues, and future pandemics are all but certain. In fact, at the time of writing, the COVID-19 pandemic seems far from over. After regular life had resumed for some time in countries with high vaccination rates, an increasing number of countries implemented more stringent rules in response to the rise of the omicron variant. For instance, the USA has seen an explosion of COVID-19 omicron cases and hospitalizations, with many parts of the country experiencing substantial or high levels of community transmission. Just before these events, even persistent New Zealand had abandoned its zero-COVID strategy in favor of living with the virus in the face of the highly contagious delta variant (Westcott, 2021). These developments further underscore the important role that service robots may fulfill in keeping service customers safe from physical harm.

Table 1. Theories relevant to understanding service robots and their safety roles

Theory or concept | Key works | Essence | Relevance to service robots and physical safety
Hierarchy of needs theory | Maslow (1943) | Five types of human needs drive individual behavior: physiological needs, safety needs, love and belonging needs, esteem needs, and self-actualization needs. People only move to the next, higher-level need when a lower-level need has been addressed | Through addressing the fundamental human safety need in a pandemic, service robots enable people to consume services that address needs higher up the hierarchy
Law of contagion | Nemeroff and Rozin (1994); Frazer (1959) | People and objects that come into contact may influence each other through the transfer of their properties. The influence continues after the physical contact has ended and may be permanent | The servicescape has a large potential to transfer germs and residues from objects or other customers to a focal customer. By eliminating inter-human contact and touch interfaces, service robots limit the possibility of viral contagion
Evolutionary human motives theory | Griskevicius and Kenrick (2013) | Deep-seated evolutionary motives influence consumer behavior, albeit not always in obvious or conscious ways. Fundamental motives include evading physical harm, avoiding disease, making friends, attaining status, acquiring a mate, keeping a mate, and caring for family | The motive of disease avoidance explains that in situations like pandemics, consumers behave in ways designed to thwart viral transmission, such as wearing face masks and hand-washing, but also being more open to using service robots
Signaling theory | Spence (1974) | Individuals search for observable signs (i.e. signals) that give them information about expected outcomes (e.g. is the service provider concerned about my health?) | The use of explicit safety signals performed by a service robot, such as taking over temperature checks or face mask enforcement, can reduce consumer uncertainty in terms of health-related fears
Perceived risk theory | Jacoby and Kaplan (1972) | Perceived risk refers to customers' perceived uncertainty with regard to (the purchase of) a product or service, which may influence their purchase intention. The strength of this relationship may depend on personal predispositions such as risk aversion | As a dimension of perceived risk, physical risk indicates that a product or service may be harmful or injurious to one's health. Using service robots rather than engaging with human employees reduces physical risk perceptions and may open up consumers to interact with a service robot and (keep) consuming the service
Regulatory focus theory | Higgins et al. (1994) | People can pursue goals with either a promotion or a prevention focus; the former focuses on the pursuit of aspirations, while the latter is concerned with safety and security needs. One's focus determines their sensitivity to positive or negative outcomes | Robots provide a safety cue that is especially effective in increasing attitude and loyalty toward the service provider when individuals have a prevention focus rather than a promotion focus. This condition applies more to consumers in times of a health crisis than in times of prosperity
Job demands–resources model | Bakker and Demerouti (2007) | Each job has demands and resources; the former can lead to a health-impairment process (e.g. chronic, intensive demands may lead to burnout), while the latter stimulates work engagement and may buffer any demand-induced stress effects | Pandemic-related concerns are a demand that strains consumers and may lead to withdrawal from service transactions. Service robots are a resource to customers because they alleviate the concern for contamination and enable service encounters with fewer limitations
Theory of AI job replacement | Huang and Rust (2018) | AI developments can be categorized into four levels of intelligence in service tasks (i.e. mechanical, analytical, intuitive and empathetic intelligence). Robots will steadily develop these intelligences and (may) take over associated human tasks | The intelligence level of the service robot may determine which safety role the robot can fulfill in practice

Table 2. A typology of safety-related robot roles

Role type | Role and description | Safety-related role extension and description
Functional | Functional server: fulfilling orders, distributing food and medicine, cleaning, showing guests to their table | Safety supervisor: checking customers' body temperature, checking whether customers wear their face masks correctly, dispensing hand sanitizer, enforcing sufficient distance between customers in a waiting line
Information sharing | Service assistant: showing customers around, answering customers' questions, providing service- or product-related information | Safety informer: informing customers about protective measures such as wearing face masks, keeping distance and using one-way aisles
Social–emotional | Social partner: greeting or entertaining customers. Depending on the service robot's capabilities, the interaction can range from simple and repetitive social tasks to quasi-social interactions | Safe social enabler: enabling social contact between parties via service robots that incorporate video-call technology. This social contact can be both professional and nonprofessional

Table 3. Future research directions

Theme | Research question
Anthropomorphism | How does robot anthropomorphism relate to individuals' perceptions of safety?
Anthropomorphism | Do the robot safety roles in the typology and anthropomorphism combine synergetically or not?
Outcomes | Does using service robots in a safety role lead to conflicting outcomes when considering service quality, perceived value, customer satisfaction, perceived service safety, customer obedience to service rules or customer sabotage?
Outcomes | When outcomes of using service robots in a safety role are incongruent, how can service providers align these outcomes?
Intelligence levels | Does the effectiveness of the robots' safety role depend on their anthropomorphism or on their intelligence level?
Intelligence levels | Are outcomes achieved through a different mechanism when considering the robot's intelligence level or its anthropomorphism?
Intelligence levels | Which intelligence level is needed for which safety role?
Human–robot teams | Does the optimal combination of human and robot roles change along the customer journey?
Human–robot teams | Is the optimal combination of human and robot roles different for various customer segments?
Employees | What is the net effect of opposing safety perceptions on employees' attitude toward service robots taking over particular facets of their jobs?
Employees | Does the successful implementation of service robots depend on employees' feelings of psychological safety?
Employees | Do employees feel more psychologically safe when their service provider implements a robot with a safety role, rather than one with a traditional service role?

Note

1. An overview of examples and sources is available from the authors on request.

References

Bakker, A.B. and Demerouti, E. (2007), “The job demands‐resources model: state of the art”, Journal of Managerial Psychology, Vol. 22 No. 3, pp. 309-328.

Belanche, D., Casaló, L.V., Flavián, C. and Schepers, J. (2020), “Robots or frontline employees? Exploring customers' attributions of responsibility and stability after service failure or success”, Journal of Service Management, Vol. 31 No. 2, pp. 267-289.

Belanche, D., Casaló, L.V. and Flavián, C. (2021a), “Frontline robots in tourism and hospitality: service enhancement or cost reduction?”, Electronic Markets, Vol. 31, pp. 477-492.

Belanche, D., Casaló, L.V., Schepers, J. and Flavián, C. (2021b), “Examining the effects of robots' physical appearance, warmth, and competence in frontline services: the humanness-value-loyalty model”, Psychology and Marketing, Vol. 38 No. 12, pp. 2357-2376.

Bente, G., Baptist, O. and Leuschner, H. (2012), “To buy or not to buy: influence of seller photos and reputation on buyer trust and purchase behavior”, International Journal of Human-Computer Studies, Vol. 70 No. 1, pp. 1-13.

Berry, L.L., Danaher, T.S., Aksoy, L. and Keiningham, T.L. (2020), “Service safety in the pandemic age”, Journal of Service Research, Vol. 23 No. 4, pp. 391-395.

Blut, M., Wang, C., Wünderlich, N.V. and Brock, C. (2021), “Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI”, Journal of the Academy of Marketing Science, Vol. 49, pp. 632-658.

Bove, L.L. and Benoit, S. (2020), “Restrict, clean and protect: signaling consumer safety during the pandemic and beyond”, Journal of Service Management, Vol. 31 No. 6, pp. 1185-1202.

Burke, M., Sarpy, S., Tesluk, P. and Smith-Crowe, K. (2002), “General safety performance: a test of a grounded theoretical model”, Personnel Psychology, Vol. 55, pp. 429-457.

Castelo, N., Bos, M.W. and Lehmann, D.R. (2019), “Task-dependent algorithm aversion”, Journal of Marketing Research, Vol. 56 No. 5, pp. 809-825.

Chebat, J.-C. and Kollias, P. (2000), “The impact of empowerment on customer contact employees' roles in service organizations”, Journal of Service Research, Vol. 3 No. 1, pp. 66-81.

Conci, M., Pianesi, F. and Zancanaro, M. (2009), “Useful, social and enjoyable: mobile phone adoption by older people”, Proceedings, Part I, presented at the Human-Computer Interaction – INTERACT 2009, 12th IFIP TC 13 International Conference, Uppsala, p. 76.

Dowling, G.R. and Staelin, R. (1994), “A model of perceived risk and intended risk-handling activity”, Journal of Consumer Research, Vol. 21 No. 1, pp. 119-134.

Edmondson, A.C. (1999), “Psychological safety and learning behavior in work teams”, Administrative Science Quarterly, Vol. 44 No. 2, p. 350.

Edmondson, A.C., Bohmer, R.M. and Pisano, G.P. (2001), “Disrupted routines: team learning and new technology implementation in hospitals”, Administrative Science Quarterly, Vol. 46 No. 4, p. 685.

Fell, L. (2021), “Trust and COVID-19: implications for interpersonal, workplace, institutional, and information-based trust”, Digital Government: Research and Practice, Vol. 2 No. 1, pp. 1-5.

Fernandes, T. and Oliveira, E. (2021), “Understanding consumers' acceptance of automated technologies in service encounters: drivers of digital voice assistants adoption”, Journal of Business Research, Vol. 122, pp. 180-191.

Fleischman, D.S., Webster, G.D., Judah, G., de Barra, M., Aunger, R. and Curtis, V.A. (2011), “Sensor recorded changes in rates of hand washing with soap in response to the media reports of the H1N1 pandemic in Britain”, BMJ Open, Vol. 1 No. 2, e000127.

Frazer, J.G. (1959), The Golden Bough: A Study in Magic and Religion, in Gaster, T.H. (Ed.), Macmillan, New York, available at: https://www.abebooks.com/book-search/title/the-golden-bough/author/frazer/ (accessed 22 November 2021).

Frazier, M.L., Fainshmidt, S., Klinger, R.L., Pezeshkan, A. and Vracheva, V. (2017), “Psychological safety: a meta-analytic review and extension”, Personnel Psychology, Vol. 70 No. 1, pp. 113-165.

Gremler, D.D. and Gwinner, K.P. (2008), “Rapport-building behaviors used by retail employees”, Journal of Retailing, Vol. 84 No. 3, pp. 308-324.

Griskevicius, V. and Kenrick, D.T. (2013), “Fundamental motives: how evolutionary needs influence consumer behavior”, Journal of Consumer Psychology, Vol. 23 No. 3, pp. 372-386.

Guo, B.H.W., Yiu, T.W. and González, V.A. (2016), “Predicting safety behavior in the construction industry: development and test of an integrative model”, Safety Science, Vol. 84, pp. 1-11.

Hazée, S. and Van Vaerenbergh, Y. (2020), “Customers' contamination concerns: an integrative framework and future prospects for service management”, Journal of Service Management, Vol. 32 No. 2, pp. 161-175.

Henkel, A.P., Čaić, M., Blaurock, M. and Okan, M. (2020), “Robotic transformative service research: deploying social robots for consumer well-being during COVID-19 and beyond”, Journal of Service Management, Vol. 31 No. 6, pp. 1131-1148.

Hennig‐Thurau, T. (2004), “Customer orientation of service employees: its impact on customer satisfaction, commitment, and retention”, International Journal of Service Industry Management, Vol. 15 No. 5, pp. 460-478.

Higgins, E.T., Roney, C.J.R., Crowe, E. and Hymes, C. (1994), “Ideal versus ought predilections for approach and avoidance: distinct self-regulatory systems”, Journal of Personality and Social Psychology, Vol. 66 No. 2, pp. 276-286.

Hornyak, T. (2019), “Insanely humanlike androids have entered the workplace and soon may take your job”, CNBC, 31 October, available at: https://www.cnbc.com/2019/10/31/human-like-androids-have-entered-the-workplace-and-may-take-your-job.html (accessed 18 October 2021).

Huang, M.-H. and Rust, R.T. (2018), “Artificial intelligence in service”, Journal of Service Research, Vol. 21 No. 2, pp. 155-172.

Jacoby, J. and Kaplan, L.B. (1972), “The components of perceived risk”, in Venkatesan, M. (Ed.), SV – Proceedings of the Third Annual Conference of the Association for Consumer Research, Association for Consumer Research, Chicago, Illinois, pp. 382-393.

Kaipainen, K., Ahtinen, A. and Hiltunen, A. (2018), “Nice surprise, more present than a machine: experiences evoked by a social robot for guidance and edutainment at a city service point”, Proceedings of the 22nd International Academic Mindtrek Conference, Association for Computing Machinery, New York, New York, pp. 163-171.

Kidwell, B., Hardesty, D.M., Murtha, B.R. and Sheng, S. (2011), “Emotional intelligence in marketing exchanges”, Journal of Marketing, Vol. 75 No. 1, pp. 78-95.

Kim, S.S., Kim, J., Badu-Baiden, F., Giroux, M. and Choi, Y. (2021), “Preference for robot service or human service in hotels? Impacts of the COVID-19 pandemic”, International Journal of Hospitality Management, Vol. 93, 102795.

Lu, V.N., Wirtz, J., Kunz, W.H., Paluch, S., Gruber, T., Martins, A. and Patterson, P.G. (2020), “Service robots, customers and service employees: what can we learn from the academic literature and where are the gaps?”, Journal of Service Theory and Practice, Vol. 30 No. 3, pp. 361-391.

Maslow, A. (1943), “A theory of human motivation”, Psychological Review, Vol. 50 No. 4, pp. 370-396.

McLeay, F., Osburg, V.S., Yoganathan, V. and Patterson, A. (2020), “Replaced by a robot: service implications in the age of the machine”, Journal of Service Research, Vol. 24 No. 1, pp. 104-121, doi: 10.1177/1094670520933354.

Meuter, M.L., Bitner, M.J., Ostrom, A.L. and Brown, S.W. (2005), “Choosing among alternative service delivery modes: an investigation of customer trial of self-service technologies”, Journal of Marketing, Vol. 69 No. 2, pp. 61-83.

Mozafari, N., Weiger, W.H. and Hammerschmidt, M. (2021), “Trust me, I'm a bot – repercussions of chatbot disclosure in different service frontline settings”, Journal of Service Management, Vol. ahead-of-print No. ahead-of-print, doi: 10.1108/JOSM-10-2020-0380.

Nahrgang, J.D., Morgeson, F.P. and Hofmann, D.A. (2011), “Safety at work: a meta-analytic investigation of the link between job demands, job resources, burnout, engagement, and safety outcomes”, Journal of Applied Psychology, Vol. 96 No. 1, pp. 71-94.

Nemeroff, C. and Rozin, P. (1994), “The contagion concept in adult thinking in the United States: transmission of germs and of interpersonal influence”, Ethos, Vol. 22 No. 2, pp. 158-186.

Palmatier, R.W. and Crecelius, A.T. (2019), “The ‘first principles’ of marketing strategy”, AMS Review, Vol. 9 No. 1, pp. 5-26.

Schmoch, U. (2007), “Double-boom cycles and the comeback of science-push and market-pull”, Research Policy, Vol. 36 No. 7, pp. 1000-1015, doi: 10.1016/j.respol.2006.11.008.

Spence, A.M. (1974), Market Signaling: Informational Transfer in Hiring and Related Screening Processes, 1st ed., Harvard University Press, Cambridge.

Story, V., Zolkiewski, J., Verleye, K., Nazifi, A., Hannibal, C., Grimes, A. and Abboud, L. (2020), “Stepping out of the shadows: supporting actors' strategies for managing end-user experiences in service ecosystems”, Journal of Business Research, Vol. 116, pp. 401-411.

The Guardian (2021), “‘Dystopian world’: Singapore patrol robots stoke fears of surveillance state”, The Guardian, 6 October, available at: https://www.theguardian.com/world/2021/oct/06/dystopian-world-singapore-patrol-robots-stoke-fears-of-surveillance-state (accessed 18 October 2021).

van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D. and Petersen, J.A. (2017), “Domo arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers' service experiences”, Journal of Service Research, Vol. 20 No. 1, pp. 43-58.

Van Noort, G., Van Kerkhof, P. and Fennis, B. (2008), “The persuasiveness of online safety cues: the impact of prevention focus compatibility of web content on consumers' risk perceptions, attitudes, and intentions”, Journal of Interactive Marketing, Vol. 22, pp. 58-72, doi: 10.1002/dir.20121.

Westcott, B. (2021), New Zealand to Abandon Zero-Covid as Delta Proves Hard to Shake, CNN, available at: https://www.cnn.com/2021/10/05/asia/new-zealand-ardern-covid-zero-intl-hnk/index.html (accessed 18 October 2021).

Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2018), “Brave new world: service robots in the frontline”, Journal of Service Management, Vol. 29 No. 5, pp. 907-931.

Yeung, R.M.W. and Morris, J. (2001), “Food safety risk: consumer perception and purchase behaviour”, British Food Journal, Vol. 103 No. 3, pp. 170-187.

Yoganathan, V., Osburg, V.-S., Kunz, H.W. and Toporowski, W. (2021), “Check-in at the Robo-desk: effects of automated social presence on social cognition and service implications”, Tourism Management, Vol. 85, 104309.

Acknowledgements

The authors would like to thank Eline Hottat for her work in searching, scanning, and categorizing real-life service robot implementations, which was part of her PhD thesis entitled “Toward a more balanced customer perspective on automated service interactions”. Both authors supervised this project. Eline did not aspire to co-authorship of the current paper.

Corresponding author

Jeroen Schepers can be contacted at: J.J.L.Schepers@tue.nl
