Value of social robots in services: social cognition perspective

Martina Čaić (Department of Marketing and Supply Chain Management, Maastricht University, Maastricht, The Netherlands)
Dominik Mahr (Service Science Factory, Maastricht University, Maastricht, The Netherlands and Department of Marketing and Supply Chain Management, Maastricht University, The Netherlands)
Gaby Oderkerken-Schröder (Department of Marketing and Supply Chain Management, Maastricht University, Maastricht, The Netherlands)

Journal of Services Marketing

ISSN: 0887-6045

Article publication date: 18 June 2019

Issue publication date: 18 September 2019




The technological revolution in the service sector is radically changing the ways in which and with whom consumers co-create value. This conceptual paper considers social robots in elderly care services and outlines ways in which their human-like affect and cognition influence users’ social perceptions and anticipations of robots’ value co-creation or co-destruction potential. A future research agenda offers relevant, conceptually robust directions for stimulating the advancement of knowledge and understanding in this nascent field.


Drawing from service, robotics and social cognition research, this paper develops a conceptual understanding of the value co-creation/destruction potential of social robots in services.


Three theoretical propositions construct an iterative framework of users’ evaluations of social robots in services. First, social robots offer users value propositions leveraging affective and cognitive resources. Second, users’ personal values become salient through interactions with social robots’ affective and cognitive resources. Third, users evaluate social robots’ value co-creation/destruction potential according to social cognition dimensions.


Social robots in services are an emerging topic in service research and hold promising implications for organizations and users. This relevant, conceptually robust framework advances scholarly understanding of their opportunities and pitfalls for realizing value. This study also identifies guidelines for service managers for designing and introducing social robots into complex service environments.



Čaić, M., Mahr, D. and Oderkerken-Schröder, G. (2019), "Value of social robots in services: social cognition perspective", Journal of Services Marketing, Vol. 33 No. 4, pp. 463-478.



Emerald Publishing Limited

Copyright © 2019, Martina Čaić, Dominik Mahr and Gaby Oderkerken-Schröder.


Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at


Social robots, defined as fully or partially automated technologies that co-create value with humans through their social functionalities, represent a rapidly growing element of service industries, where they perform frontline tasks. Robots thus have moved from industrial settings (e.g. factories) to public (e.g. retail, hospitality and healthcare) and private (e.g. homes) user settings (International Federation of Robotics, 2015). They are no longer isolated in controlled, structured environments; modern robots must operate amid chaotic, potentially complex human interactions, often with multiple stakeholders. This proliferation of social robots in efforts to deliver a superior customer experience (Mende et al., 2017; van Doorn et al., 2017) and the euphoric predictions in industry reports (KPMG, 2016) conflict with the disappointments arising in real-world implementations. The key impediments to acceptance include unrealistic expectations and a lack of benefits for specific use contexts (Pino et al., 2015), suggesting that service providers need a better understanding of what constitutes a robot’s value proposition and how value might be realized for service beneficiaries. This research therefore investigates users’ evaluations of the value co-creation and co-destruction potential associated with social robots that engage in human-like behaviour in a service setting.

Social robots span diverse contexts, including domestic (Young et al., 2009), hospitality (e.g. humanoid assistance and welcoming) (Fan et al., 2016), entertainment (e.g. toys) (Robinson et al., 2014) and healthcare (e.g. assistive devices) (Green et al., 2016) sectors. The current study focusses on a critical segment of healthcare: elderly care. Most countries face challenges associated with ageing populations (United Nations, 2017), and the consumer segment of people aged 60 years and older is expected to more than double by 2070 (European Commission, 2018). The growth in this segment has serious implications for healthcare, family structures, labour and financial markets. Social robots can help address some of the challenges in those service settings, such as shortages of elderly care staff, particularly if the robots can support and exhibit human-like behaviour resulting from the development and design of sophisticated systems that can express emotional sensitivity (affective resources) and engage in artificial intelligence (AI)-based learning (cognitive resources; KPMG, 2016). These developments transform robotic systems into what the current study defines as social robots in services: autonomous systems that can understand social cues through facial and voice recognition technology and can interact with users in a human-like manner (Čaić et al., 2018; KPMG, 2016). For example, in an elderly care scenario, we expect social robots to detect and respond to social cues and interact with patients, relatives and formal caregivers in a human-like manner using specific service functionalities, including fall detection, cognitive games, personal chats and exercise motivation.

Designing future technologies that can enable robots to reflect human values and enhance the well-being of consumers thus represents a top priority (Ransbotham et al., 2017). In addition, exploring ways that value might be co-created or co-destroyed in collaborative human–robot interactions can inform these robotic technology developments to ensure their full transformative potential. The plethora of social robots in services creates a need to identify ways to co-create, rather than co-destroy, value in symbiotic human–robot interactions (Marketing Science Institute, 2018), such as through research that determines how service beneficiaries appraise the value co-creation and co-destruction potential of social robots in services. Because value originates in the interaction between service actors and social robots, which takes place in the joint customer–provider sphere (Grönroos and Gummerus, 2014), service developers need to understand how a value interplay affects users’ attitudes and intentions to use robotic technologies.

A social cognition perspective might reveal how human actors perceive their human-like robotic counterparts in terms of two overarching dimensions that emerge during social interactions: competence (i.e. being skilful or efficacious) and warmth (i.e. being helpful and caring) (Fiske et al., 2007). The experiential, idiosyncratic nature of value (Vargo and Lusch, 2016) demands accounting for the risks of value co-destruction (e.g. lack of literacy, lack of personal touch and privacy intrusion); an advantage for one elderly person or a segment of elderly users might represent a disadvantage for others (Čaić et al., 2017). This could explain people’s lack of willingness to accept social robots in service settings (International Federation of Robotics, 2015). The complexity of value, together with the disruptive nature of social robots in services, creates unique challenges to existing service processes. No extant research details how organizations can integrate appropriate resources and designs for effective human–technology interactions to ensure value co-creation in such conditions.

Therefore, this study addresses these research gaps by applying a social cognition lens, which produces two major theoretical contributions. First, the proposed conceptualization of social robots in services and their value propositions, leveraging affective and cognitive resources, advances the overall understanding of service technology. Current research primarily has emphasized the appearance or feature-related characteristics of robots, such as their morphology and assistive tasks. In contrast, our conceptualization takes a value-centric perspective and focusses on resources necessary for collaborative value realizations. In that way, it establishes a basis for continued research into robotic technologies’ valuations and their influences on service interactions, representing a key question for both practice and research (Ostrom et al., 2015).

Second, this article advances scholarly understanding of valorization of human-like technologies by offering an iterative framework of how value gets proposed and realized through interactions between service actors and social robots. The framework synthesizes different theoretical perspectives on value (Gallarza et al., 2017) and proposes that prior personal values (Schwartz, 2012) become salient during context-specific user–robot interactions, in line with an experiential, idiosyncratic value perspective (Grönroos and Gummerus, 2014). By integrating value co-creation/destruction and their trade-offs, this study addresses the need for a holistic approach to technologies that can produce both advantages and disadvantages for different service actors. In that way, it addresses a recent call from Kaartemo and Helkkula (2018) for a better understanding of the influence of social robots on value co-creation.

Beyond theoretical contributions, this study offers practical insights for service managers regarding the emerging role of social robots and how to integrate them within complex service systems through appropriate combinations of affective and cognitive resources. Social robots represent complex systems of both hardware and software that can perform various services; their successful implementation hinges on understanding the potentially contradictory needs of diverse user segments and tailoring the robotic solutions accordingly before any costly roll-out effort. The proposed framework, along with managerial implications, offers insights with regard to designing and launching social robots in services in ways that can increase users’ acceptance.

Theoretical background

Social cognition perspective for robots

In human–human interactions, interpreting others’ mental states (e.g. intentions, affective states, beliefs and needs) enables people to thrive as social agents (Frith and Frith, 2007). Mentalizing, or ascribing mental states to others to interpret and anticipate their actions (Frith and Frith, 2012), is a critical component of social life. Different elements and processes of social interaction (e.g. speech, facial expressions, eye gaze and body posture) allow people to make inferences and form their social cognition (Adolphs, 1999). According to Gray et al. (2007), humans make these attributions by evaluating others’ capacities to sense and feel and to plan and act. Fiske et al. (2007) suggested that, when interacting with each other, humans seek to determine whether the other is a “friend or foe” (warmth dimension) and able to act on its either friendly or hostile intentions (competence dimension). The capacities to feel and to do thus are universal dimensions of social cognition.

Humans usually ascribe minds to other humans by detecting the social signals that indicate another person’s ability to perceive, feel and intend (Meltzoff, 2007). However, minds can also be assigned to non-human agents (e.g. computers, gadgets and robots; Abubshait and Wiese, 2017; Waytz et al., 2010), particularly if their characteristics induce perceptions of intentionality. Reeves and Nass (1996) thus found that people automatically treat computers as social beings. Furthermore, advanced technologies that mimic human appearances and behaviours (Breazeal, 2004) allow such non-human agents to exhibit a convincing mixture of affect and intentionality. The increased sense that technologies have “minds of their own” has important implications for robotic design, indicating the relevance of robots exhibiting a human-like mind in addition to a human-like appearance (Waytz et al., 2014). Studies of human–robot interactions also suggest that human-like service robots offer a potentially meaningful context in which to study human social cognition (Chaminade and Cheng, 2009; Wiese et al., 2017; Wykowska et al., 2016). That is, the mechanisms of social cognition may be activated when people encounter cognitively and affectively endowed social robots, such that they judge these social partners, despite their status as non-human actors, according to warmth (friendliness, kindness and caring) and competence (efficacy, skill and confidence) dimensions.

Initially, robotics literature emphasized the development of competence traits by upgrading robots’ cognitive resources and improving robots’ functionalities (Pineau et al., 2003). Recently, robotics studies have increasingly acknowledged the importance of warmth traits resulting from enhanced affective resources, such as eye contact (Johnson et al., 2014) or companionship (Broadbent et al., 2009). Sharkey and Sharkey (2011) suggested that two user segments (children and elderly people) might have a stronger tendency to anthropomorphize social robots, with important consequences for future robotic developments and ethical ramifications. It is therefore not surprising that descriptions of robotic designs seem to emphasize applications in health care and elderly care (Robinson et al., 2014).

Characteristics of social robots

The social robotics literature commonly agrees on four robot design characteristics (Bartneck and Forlizzi, 2004; Fong et al., 2003; Lee et al., 2016; Paauwe et al., 2015).

Embodiment. First, a system is embodied if it is structurally coupled with its environment, such that a physical body is not required (Ziemke, 2003). This concept reflects the system’s relationship with its environment. A full review of the different types of embodiment is beyond the scope of this research (Ziemke, 2003); the focus here is on physically embodied robots with three-dimensional, physical bodies rather than virtual avatars or AI agents visible solely on a screen (Paauwe et al., 2015).

Morphology. Second, social robots’ physical bodies can take various forms, from machine-like to human-like (Lee et al., 2016). Fong et al. (2003) proposed four robot morphology types, namely: anthropomorphic (human-like), zoomorphic (animal-like), caricatured (cartoonish) and functional (an appearance that indicates the robot’s core functionality). Morphology relates closely to realism (or behavioural and visual fidelity; Paauwe et al., 2015) and the Uncanny Valley concept (Mori, 1970), which postulates that the more human-like a robot is (appearance, expressions and movements), the stronger a human observer’s affinity towards that robot, up to a point. Beyond that point, however, when the robot’s resemblance to humans is too high, human observers sense eeriness and intense repulsion towards the robot. The current study addresses users’ perceptions of a robot’s human-like mind and behaviour rather than appearance, though a human-like appearance can induce perceptions of mind (Waytz et al., 2010). Therefore, human-like value propositions, both affective and cognitive, should resonate with a generally human-like appearance.

Autonomy. Third, autonomy measures the degree of human intervention and support needed for the robot to function properly. Levels of autonomy can range from none, such that humans remotely control the robot through teleoperation (e.g. Wizard of Oz; Yanco and Drury, 2004), to full autonomy, where robots function without any direct input from humans (Bartneck and Forlizzi, 2004). Autonomous robots must be endowed with navigation, perception, speech, decision-making, self-maintenance and repair capabilities. This level of autonomy is difficult to achieve with current technology (Broadbent, 2017). The conceptualization in the current study proposes that social robots in services must function fully autonomously to reach the full potential of their affective and cognitive value propositions.

Assistive role. Fourth, an assistive role pertains to a service robot’s purpose and core tasks. In the focal elderly care context, the wide array of potential robot roles broadly refers to physical, psychosocial and cognitive assistance (Broadbent et al., 2009; Čaić et al., 2018). Robots aiding through physical assistance include rehabilitation robots and exoskeletons that can augment human physical weaknesses (Perry et al., 2007). For example, a bear-shaped nursing robot lifts and carries bedridden or physically weak patients (Schwartz, 2015). Robots offering psychosocial assistance primarily attend to the emotional, psychological and social needs of patients (Čaić et al., 2018) via a companion or sociable partner role (Broekens et al., 2009), offering a connection on a more emotional level. The companion seal robot Paro can help alleviate symptoms of loneliness and depression through its emotional support (Robinson et al., 2014). Finally, robots that provide cognitive assistance issue reminders of elderly patients’ daily activities, medication and scheduled appointments, while also monitoring their overall health (Robinson et al., 2014). The human-like robot Pearl attends to the needs of elderly people with dementia or cognitive impairment (Robinson et al., 2014).

Our conceptualization envisions that socially assistive robots, high on both cognitive and affective resources, should be able to perform all these assistive roles. The diverse typologies also suggest the need for further refinement of social robotics theory, as reflected in calls in robotics journals (Broadbent, 2017).

Services literature on robots

Existing typologies of social robots define social robot positioning relative to human actors. Accordingly, this study adopts a social cognition perspective, with the recognition that humans usually attribute affective (warmth) and functional (competence) elements to other actors, such as peers or caregivers, to assess their abilities (Fiske et al., 2007). To provide an overview of current work on (social) robots in services, this section reviews the literature in the three service journals with the highest impact factors in 2017: Journal of Service Research, Journal of Service Management and Journal of Services Marketing. We searched the Web of Science using the search terms “robot” and “robot AND value” and did not limit our search to a specific time period, as robots in services is still a nascent research area. Table I presents an overview of existing literature on (social) robots in services. The table indicates whether a particular study paid implicit or explicit attention to social cognition’s core concepts of warmth and competence. By explicit we mean that the authors use the terms warmth and/or competence or refer to social cognition theory. By implicit we mean that the studies address affective or functional elements of the robot without mentioning the terms warmth and competence. In addition, the table shows which articles address value-related elements of robots in services. We make a distinction between value co-creation and value co-destruction to demonstrate the prevailing focus on value in service robots.

Table I demonstrates that the first study addressing robots in the services literature appeared in 2016. Since then, robots have increasingly entered the services literature to illustrate the role of artificial intelligence (Huang and Rust, 2018) or map research directions (Rafaeli et al., 2017; Wirtz et al., 2018). It is interesting to note that most of the studies are conceptual in nature; relatively few present empirical findings, which is typical for an emerging topic. The majority of papers have emphasized the impact of developments in robotics on frontline service roles, but only a few studies have explicitly emphasized an elderly care services environment (Čaić et al., 2018; Khaksar et al., 2017). With respect to the social cognition lens we take, it is remarkable that all existing studies acknowledge affective and/or functional elements of robots in services, but only Wirtz et al. (2018), van Doorn et al. (2017) and Fan et al. (2016) explicitly referred to warmth and/or competence. Addressing the value lens of our current study, Table I indicates that 7 out of 11 studies address the robot’s role in creating value or in value co-creation. Only three studies acknowledge a potential threat to value creation (value co-destruction) of introducing robots in service settings. Overall, Table I shows that robotics is an emerging theme in the services literature with promising implications for organizations, users and scholars.

Table I also shows which studies explicitly address the value that robots in services potentially add or co-create. Rafaeli et al. (2017) and Bolton et al. (2018) acknowledged that future research is needed for an integrated theoretical perspective on how value co-creation takes place within and across the digital, physical and social realms. In an elderly care services context, both Khaksar et al. (2017) and Čaić et al. (2018) showed that robots do co-create value with elderly people. This contrasts with the observation of Keating et al. (2018), who indicated that services research typically assumes humans are responsible for the definition of value propositions and for value co-creation. Finally, Marinova et al. (2017) and Wirtz et al. (2018) acknowledged that robots could potentially provide value in frontline service interactions. Only three studies (Bolton et al., 2018; Čaić et al., 2018; Wirtz et al., 2018) recognized inhibitors of value creation and acknowledged risks of value co-destruction.

To summarize, Table I provides an overview of existing studies on robots in services. These studies implicitly acknowledged the role of the robot’s affective and functional abilities without explicitly referring to social cognition theory. In addition, the literature review shows that most of these studies acknowledged that some kind of value co-creation takes place between robots and other actors, although the potential risk of value co-destruction is largely disregarded. Given the apparent relevance of warmth and competence and their connection to value (co-creation), our paper proposes an iterative theoretical framework.

Conceptualizations of value

The concept of value has intrigued researchers from various disciplines for years; value is both an ambiguous and a pivotal concept for a wide range of theoretical frameworks, from utility theory (Fishburn, 1970) to customer value (Woodruff, 1997) to service-dominant logic (SDL) (Vargo and Lusch, 2004, 2016) and service logic (SL) (Grönroos and Voima, 2013). In the domain of (services) marketing, value has important epistemological implications for understanding customers’ cognitive, affective and behavioural responses to marketing stimuli (Homer and Kahle, 1988) and is a key determinant of competitive advantage (Parasuraman, 1997). However, existing value typologies and methodological approaches for capturing value remain abstract due to the multifaceted and complex nature of this concept (Gallarza et al., 2017).

A common distinction in marketing research cites customer (perceived) value versus personal values (Boksberger and Melsen, 2011; Woodruff, 1997). The former reflects the customer’s context-specific value attainment (e.g. value for money), while the latter pertains to desirable end-states that span customer segments and cultures (e.g. self-direction, hedonism, achievement and benevolence; Schwartz, 2012). Traditionally, marketing academics have focussed on customer perceived value, with several definitions (Holbrook, 1999; Woodruff, 1997; Zeithaml, 1988). Gallarza et al. (2017) identified three main approaches to customer perceived value, namely, trade-off, dynamic and experiential. While all three approaches have merits, over time, services marketing researchers have shifted their focus from the trade-off (the overall evaluation of utility after weighing “gives” and “gets”; Zeithaml, 1988) towards the experiential (“interactive relativistic preference experience”; Holbrook, 1999, p. 5). The experiential approach supports broader conceptualizations of holistic value rather than being limited to mainly cognitive assessments, such that psychological foundations complement economic ones (Gallarza et al., 2017; Helkkula et al., 2012). Despite such on-going developments, marketers still call for further refinements of customer perceived value (Boksberger and Melsen, 2011; Gallarza et al., 2017). The current study proposes a value framework that can acknowledge and combine all three elements, that is, the trade-off, dynamic and experiential nature of value.

Furthermore, many marketing academics have acknowledged that context-specific values reflect the influence of higher-level, abstract, personal values (Woodruff, 1997; Zeithaml, 1988). Personal values are desirable end-goals (Rokeach, 1973) that serve as guiding principles navigating people through their everyday lives (Schwartz, 1992). Even though personal values are not contextually bound, when they become salient in certain contexts, they motivate actions and inform users’ needs, attitudes and behaviours (Homer and Kahle, 1988; Lages and Fernandes, 2005). The activation of personal values thus might enable users to realize both the benefits of value co-creation and the risk of value co-destruction in relation to a novel value proposition (Skålén et al., 2015).

Value propositions

The emergence of SDL and SL research has renewed interest in value and value co-creation. According to SDL, value is “an improvement in system well-being” (Vargo et al., 2008, p. 149), whereas SL defines value as customers being or feeling better off than before engaging in a service (Grönroos and Gummerus, 2014). In both logics, value is idiosyncratic, experiential and contextual (Grönroos and Gummerus, 2014; Vargo and Lusch, 2016). Furthermore, both logics acknowledge that service providers cannot simply deliver value to service beneficiaries, but rather must invite them to engage their resources to co-create value by delivering compelling value propositions (Chandler and Lusch, 2015; Frow et al., 2016) or value promises (Grönroos and Gummerus, 2014). Value propositions hold strong promises for the involved actors, indicating that they can realize desired values through the activation of their own resources (Grönroos and Voima, 2013) or simply by engaging in co-creation. Realizing value-in-use or enhancing value entails improvements to customers’ subjective well-being, defined as “a person’s cognitive and affective evaluations of his or her life” (Diener et al., 2003, p. 63). This state combines cognitive and affective evaluations of the situation, so value propositions must go beyond the pure enumeration of offered services, as represented by a cognitive approach, and promise improved well-being by leveraging both cognitive and affective value propositions.

Value co-creation and co-destruction

Both SDL and SL also agree that value is not created for but rather is collaboratively created with customers (Grönroos and Voima, 2013; Vargo and Lusch, 2004, 2016). SDL proposes that value is always co-created through resource integration of multiple service actors (Vargo and Lusch, 2016); SL suggests that value is only co-created through direct interactions between actors in a joint sphere (Grönroos and Gummerus, 2014; Grönroos and Voima, 2013). Interactions can be face-to-face or through the mediation of smart technologies such as avatars or robots (Grönroos, 2017). According to SL, in the absence of direct interactions, service providers act as value facilitators and customers can independently create value within their customer sphere (Grönroos and Gummerus, 2014). To trigger interactions, service actors offer compelling value propositions that promise improved well-being. However, the outcome of the interaction does not guarantee greater well-being; service interactions can diminish well-being by destroying value for at least some actors (Echeverri and Skålén, 2011; Plé and Chumpitaz Cáceres, 2010).

Unlike early research that took an overly optimistic view and assumed enhanced value would always result from the collaborative integration of resources, recent SDL and SL contributions have acknowledged the potential for destructive phases that ultimately make a service actor worse off (Grönroos and Gummerus, 2014). Furthermore, though most research investigates actors’ perceptions of value enhancers and inhibitors after the service interaction (Baron and Harris, 2010; Zhu and Zolkiewski, 2015), many value co-destruction practices could be avoided if investigated earlier, before developing and introducing social robots to service contexts. This study seeks to understand how service offerings enable or prevent users from achieving desirable value outcomes (that is, value co-creation and value co-destruction) early in the service interaction process. In addition to studying the effects on value co-creation/destruction during or after the introduction of novel technology (Breidbach et al., 2013; Larivière et al., 2017), our iterative framework suggests studying expected value changes prior to technology development and deployment. In that way, costly roll-outs of ill-suited robotic solutions in diverse service settings could be avoided.

Context: Social robots in elderly care services

Robotic technologies can alter healthcare value networks and assist healthcare staff in surgeries, telepresence and preventive and chronic care (Ransbotham et al., 2017; Stone et al., 2016). The healthcare domain requires new, technology-enhanced service options and technological improvements promise to facilitate human well-being. In particular, robotic technologies offer potential tools for increasing the well-being of elderly consumers by providing consistent service delivery, constant availability and good reliability (Broadbent et al., 2009; Pineau et al., 2003). Yet, this complex setting also includes vulnerable stakeholders, sensitive data and inert institutions (Black and Gallan, 2015), provoking academic and public policy debates around the introduction of robots (Dell Technologies, 2018; Sharkey and Sharkey, 2012; Stone et al., 2016). Rather than predicting totally dystopian (e.g. loss of human touch, human obsolescence and privacy concerns) or utopian (e.g. panacea for social problems, unburdening overworked care staff and prolonged independent living) outcomes, this study considers a clearly specified situation in which socially assistive robots provide assistance to the elderly through human-like social interactions (Feil-Seifer and Matarić, 2005). In an elderly care context, socially assistive robots should become able to gauge social cues through voice and facial recognition technology to provide humans with health and safety monitoring services, social mediation and interactions and companionship (Broekens et al., 2009; Feil-Seifer et al., 2007). As the social dexterity of robots increases (e.g. listening, conversations and reading emotional cues), human expectations of robots’ warmth and competence capabilities should increase as well.

Acceptance of social robots

According to Pino et al. (2015), the most critical barriers to social robot acceptance reflect the mismatch between what is offered (i.e. value proposition) and what is needed, expected and valued. While the traditional technology acceptance model (TAM) (Davis, 1989) has proved its usefulness in evaluating functional elements (e.g. perceived usability and ease of use) that might promote or impede intentions to use new technologies, it needs to be adjusted to reflect the human-like aspect of social robots, the social cognition users form about their robotic counterparts and additional influences on robot acceptance. Notable advancements of TAM in this regard account for social, emotional and/or relational elements that can influence technology acceptance. For example, Heerink et al. (2010) extended TAM by including additional social elements (e.g. social presence and perceived sociability) along with trust, anxiety and perceived adaptivity. Wirtz et al. (2018) introduced the service robot acceptance model (sRAM), which further extends TAM by differentiating among functional (perceived ease of use, perceived usefulness, subjective social norms), socio-emotional (perceived humanness, perceived social interactivity, perceived social presence) and relational (trust and rapport) elements.

In addition to evaluating the sociability aspects of new technologies, the underlying mechanisms of technology acceptance (users’ needs, wants and values) need to be re-evaluated, because human-like technologies might tap into different aspects of value expectations than, for example, self-service technologies (Meuter et al., 2005). Human-like appearance, minds and behaviours activate users’ social cognition mechanisms and thereby might make different personal values salient. Identifying the salient values of different user segments (e.g. elderly segment ‘X’ emphasizes benevolence as a dominant personal value, elderly segment ‘Y’ security and elderly segment ‘Z’ self-direction) can help to design future technologies that resonate better with users’ unique values and to advance technology acceptance models to account for such value-seeking variations. This paper proposes that understanding the motivation of (elderly) people to accept or reject social robots requires first embracing the interpretive research paradigm and uncovering the value-matching processes users undergo when presented with a new type of technology that can exhibit human-like behaviours. In that way, the proposed conceptualization addresses a recent call to advance the understanding of how new technologies affect value co-creation (Kaartemo and Helkkula, 2018).

Theoretical propositions

Congruence between value propositions and users’ values

The theoretical background of the paper emphasizes how:

  • robots’ value propositions promise improved well-being;

  • subjective well-being can be assessed through cognitive and affective evaluations of one’s life;

  • users activate the mechanisms of social cognition when interacting with human-like social robots;

  • personal values are desirable end-states that when activated in context affect cognitive and affective evaluations (i.e. perceived value of the robot) and influence behaviours (i.e. robot acceptance); and

  • users’ perceived value combines trade-off, dynamic and experiential evaluations of the value co-creation and co-destruction potential of social robots in services.

Building upon the theoretical background, this article advocates for value congruence between novel value propositions (e.g. cognitively and affectively endowed robots) and users’ personal values, which when activated affect the evaluation of the value co-creation and co-destruction potential of social robots. We argue that a finer-grained understanding of users’ value-matching processes (Figure 1) can help disentangle factors that influence novel technology acceptance and can further advance existing empirical models for assessing users’ acceptance of social technologies (e.g. TAM and sRAM). We initially built our argument focussing on the elderly due to their propensity to anthropomorphize technologies; however, we propose that the following model (Figure 1), theoretical propositions and iterative framework (Figure 4) also pertain to diverse user segments. Hence, we encourage other robotics and service researchers to study the generalizability of the proposed conceptualization in different service contexts.

Value co-creation/destruction potential of social robots in services

Designing robots that can exhibit human-like minds, in addition to their human-like bodies, suggests new opportunities to study prospective users’ social cognition, with meaningful consequences for predicting robot acceptance:


Social robots in services offer users value propositions leveraging affective and cognitive resources.

In terms of robots’ social capabilities, human-like social robots in services differ in their cognitive and affective resources. Figure 2 presents a matrix of social robots in services that differentiates their affective capacities (e.g. artificially programmed emotions, the ability to mimic empathy and offer emotional support; Bolton et al., 2018; Keating et al., 2018; Wirtz et al., 2018) and cognitive capacities (e.g. the ability to reason or to learn from past experiences). Quadrant 1 includes mechanic robots that are low on both affective and cognitive resources. They can perform repetitive, predictable, tedious tasks that humans hope to avoid; they cannot scale up their services but instead function solely within the boundaries of their programming. For example, shelf-scanning robots introduced by Walmart (Vincent, 2017) can move through store aisles and detect empty shelves that need to be restocked.

In Quadrant 2, the thinking robots are high on cognitive but low on affective resources. Such robots function in a calculating and rational manner and can make logical inferences to optimize processes. They can be valuable in service settings because their artificial reasoning skills can be enhanced through learning. Thinking robots promise to offer timely, reliable information and weigh all possible pros and cons before making a decision. For example, future thinking robots are expected to be capable of real-time big data processing and giving personalized medicine recommendations to patients based on their genomic–metabolic–microbiome profiles (Rijcken, 2018) without involving feelings in the process.

Quadrant 3 features feeling robots with high affective but low cognitive resources that can perceive human emotions and act accordingly. Leveraging their affective capabilities, these robots respond to human moods in personalized ways. For example, Breazeal’s (2001) Kismet can recognize and express emotions and promises therapeutic applications in child and elderly care, settings in which a rapport-building capability is important. Future feeling robots are expected to be capable of mimicking empathic behaviours (e.g. empathic listening) when interacting with human actors, but their actions will not necessarily be rational.

Finally, Quadrant 4 introduces robo-sapiens[1] with high cognitive and affective resources. This group combines the characteristics of both the feeling and thinking robots and should be able to mimic empathic behaviours towards their interaction partners while also providing meaningful solutions (Čaić et al., 2018). No compelling example currently exists of a robot that is autonomous, fully functional and capable of sensing, comprehending and acting in meaningful ways (Eyssel, 2017). To illustrate the distinction, when interacting with a person, a feeling robot might detect sadness and attempt to provide comfort by playing soothing music; a thinking robot would evaluate the same person’s past medical history and provide the most appropriate remedy, such as a reminder to take anti-depression medication. A robo-sapiens would effectively combine both feature groups, providing the most appropriate remedy while considering the person’s emotional state, and perhaps engaging in comforting conversation before issuing the medication reminder.
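As an illustrative sketch only (not an operationalization proposed by the paper), the four-quadrant typology amounts to a high/low split on two resource dimensions. The 0–1 ratings, the 0.5 threshold and the example robots below are hypothetical:

```python
from dataclasses import dataclass

# The 2x2 typology of social robots in services (Figure 2)
QUADRANTS = {
    (False, False): "Quadrant 1: mechanic robot",
    (True, False): "Quadrant 2: thinking robot",
    (False, True): "Quadrant 3: feeling robot",
    (True, True): "Quadrant 4: robo-sapiens",
}

@dataclass
class SocialRobot:
    name: str
    cognitive: float  # hypothetical 0-1 rating of cognitive resources
    affective: float  # hypothetical 0-1 rating of affective resources

    def quadrant(self, threshold: float = 0.5) -> str:
        # A high/low split on each dimension yields one of the four quadrants
        return QUADRANTS[(self.cognitive >= threshold,
                          self.affective >= threshold)]

# A shelf-scanning robot is low on both dimensions
print(SocialRobot("shelf-scanner", cognitive=0.2, affective=0.1).quadrant())
# prints "Quadrant 1: mechanic robot"

# A Kismet-like robot is high affective, low cognitive
print(SocialRobot("kismet-like", cognitive=0.3, affective=0.8).quadrant())
# prints "Quadrant 3: feeling robot"
```

The point of the sketch is that quadrant membership follows from the resource configuration alone, which is why the paper can discuss each quadrant’s value propositions independently of any specific robot.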

In service contexts such as healthcare that require service providers to exhibit both affective and cognitive capabilities, social robots can co-create or co-destroy value. Due to the human-like behaviours of thinking and feeling robots and robo-sapiens, human interaction partners likely evaluate them similarly to a human service actor in terms of their value co-creation and co-destruction potential, such that they assess perceived warmth and competence:


Users’ personal values become salient through interactions with social robots’ affective and cognitive resources.

Leveraging the theoretical background, this study brings users’ personal values to the fore to consider ways they might be activated in context. In particular, this conceptualization builds on Schwartz’s (2012) theory of basic values that identifies the ten basic personal values in Figure 3.

These ten personal values are universal across cultures, but individuals and segments of people assign different importance to the different values to form their own priorities (Schwartz, 2012). For example, according to socio-emotional selectivity theory, older people assign more weight to emotionally meaningful goals such as benevolence (Carstensen, 1992). Personal values steer people’s actions in context, mostly through unconscious processes. However, if people experience a conflict among different value priorities, their evaluations of the potential outcomes of engaging in a certain action become more conscious (Rokeach, 1973; Schwartz, 2012). Through a novel interaction with a robot’s affective and cognitive resources, during which they become acquainted with its value propositions, users likely perceive some personal values as more salient. If the robot’s ability to assist suggests that it might augment or replace human actors, users might recognize values such as health, self-respect, privacy and sense of belonging. The robot’s affective resources should make emotion-infused value items more salient; its cognitive resources likely make functional value items more salient. In addition, a social robot might cause value conflicts, such that elderly users might recognize the value of self-direction if the robot helps them remain more independent, but at the expense of a perceived loss of privacy:


Users evaluate social robots’ value co-creation and co-destruction potential according to the dimensions of social cognition.

Building on the assumption that users may experience conflict across different personal values, the proposed conceptualization suggests that users engage in a mental trade-off (Zeithaml, 1988) to evaluate the social robot’s potential for helping them attain their cherished, salient personal values, in terms of both its likely value co-creation potential and the risk of value co-destruction. These evaluations should capture both affective and cognitive appraisals of value through lived and anticipated experiences (Helkkula et al., 2012). Because of the social robot’s human-like capabilities, users likely activate social perceptions and evaluate the robot according to the warmth and competence dimensions of social cognition. Non-human actors can signal their warmth through their affective resources and their competence through their cognitive resources. Such cues encourage users to anthropomorphize the non-human actors and evaluate their instrumentality along these warmth and competence dimensions. Referring back to a previous example, a sense of a loss of privacy might be evaluated according to warmth (e.g. “Does the robot genuinely care for me? Is it collecting data only to help me?”) and competence (e.g. “Can the robot protect my private data?”) dimensions.

Discussion and implications

Theoretical implications

This article introduces a value-centric conceptualization of social robots according to a resources-focussed view of technology (Kaartemo and Helkkula, 2018; Vargo and Lusch, 2016). While acknowledging the variety of existing robot design criteria, this conceptualization prioritizes a social cognition perspective and suggests that users evaluate social robots on their affective and cognitive resources, in addition to their human-like appearance, which together induce mind perceptions (Gray et al., 2007). Such perceptions of a human-like mind arise from both direct interactions with the social robot in a joint sphere (Grönroos and Gummerus, 2014) and predicted or imagined experiences prior to the actual interaction (Helkkula et al., 2012). Although this article focusses specifically on robotics, the resulting insights may have implications for other technologies that act as social entities, such as voice-based personal assistants. By emphasizing warmth and competence as universal dimensions of social cognition (Fiske et al., 2007), this conceptualization establishes a social psychology perspective on technology-enhanced service interactions. Continued research, in turn, should include human social-cognitive processes to clarify similarities and differences in human–human versus human–robot value co-creation/destruction practices. Such efforts will have particular importance for frontline research and for investigating which complementary or substitutive roles technology might offer humans in the future (Huang and Rust, 2018; van Doorn et al., 2017; Wirtz et al., 2018).

The proposed iterative framework also conceptually broadens perspectives on value, in that it combines trade-off, dynamic and experiential approaches to value (Gallarza et al., 2017) and links personal and context-specific values. As a bridge between basic personal values and context-specific value co-creation/destruction potential in value attainment, this framework demonstrates how various types of value are intertwined; they should not be analyzed independently, but rather require an integrative approach (Schwartz, 2012). In addition, this study emphasizes the importance of studying both positive value co-creation and negative value co-destruction possibilities associated with human–robot resource integration (Čaić et al., 2018). Considering the many destructive consequences of technology for personal values, such as a loss of privacy, personal data leaks or monitoring concerns (Sharkey and Sharkey, 2012), further research should address this point in more detail. The proposed conceptualization also reveals the dynamic, iterative nature of the alignment between offered value propositions and desired value outcomes. For service researchers, iterations are inevitable for forming value propositions that resonate with users’ values (Chandler and Lusch, 2015). Therefore, researchers should take a long-term, rather than short-term, perspective on value co-creation/destruction, such that they first gain a better understanding of the social-cognitive mechanisms that users use when interacting with social robots (van Doorn et al., 2017; Wykowska et al., 2016) before attempting to develop a set of meaningful cognitive and affective value propositions.

Managerial implications

The fourth industrial revolution that is currently underway features new technologies (e.g. social robots, artificial intelligence and the Internet of Things) that are radically changing how and with whom people interact (KPMG, 2017). Even if most service organizations recognize the transformative power of new technologies (Marinova et al., 2017), many struggle to understand customer value and how to develop value propositions that will resonate with it (Payne et al., 2017). Our value-centric approach enables service organizations and their partners tasked with developing robotic services to gain a better sense of which future value co-creation opportunities are more likely to benefit users (Grönroos, 2017).

Map robotic functionalities according to cognitive and affective value. A social robot’s value proposition should be an invitation issued to a diverse set of users to engage with the robot’s configuration of affective and cognitive resources. To leverage these resources, service managers can ask users to map the robot’s functionalities (e.g. recognizing users’ moods or alerting caregivers in case of an emergency) onto the different quadrants of the proposed robot typology in Figure 2. Such a mapping activity would reveal which functionalities need to be included in the system to achieve a particular type of social robot in services. Managers should decide which robot type is most relevant for their context and understand the implications of their decisions for users.

Use interpretative approaches for eliciting personal values. Value propositions also can activate a spectrum of personal values (Schwartz, 2012) that become salient through either an initial or an anticipated interaction with the robot. As emphasized previously, personal values tend to be unconscious, so managers need a broader methods toolbox that includes more interpretative approaches (e.g. user narratives, generative interviews and phenomenographic interviews; Čaić et al., 2018; Helkkula et al., 2012). These techniques allow a deeper view into users’ underlying drivers of technology acceptance (e.g. through laddering during interviews) and require qualitative coding of user insights against the personal values presented in Figure 3. The analysis provides service managers with information about which values become activated as users gain familiarity with robotic service actors. Such methods might be applied during trials with social robots to complement and augment observational and survey data.

Beware of and design around value trade-offs. This study argues that users accept or reject the value proposition of the robotic service depending on their positive and negative evaluations of the robot’s instrumentality for value attainment. For example, a robot’s reminder function to take medicine can enhance the goal of independence by relieving users of the need to remember their medication intake, a task they can no longer perform autonomously, while at the same time decreasing their feeling of independence as they become reliant on the reminder service. Thus, users engage in a mental trade-off of value co-creation/destruction potential. To deal with this trade-off, service managers need to determine, stepwise for each robotic component (e.g. functionalities, interface and aesthetics), the positive and negative impact on users’ warmth and competence perceptions, and potentially forgo components if the disadvantages outweigh the benefits. This systematic impact assessment of each robotic component serves as input for the robot developer.
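The stepwise impact assessment can be sketched as a simple scoring exercise. The components, the warmth/competence scores and the keep/forgo rule below are all hypothetical illustrations, not part of the paper’s framework:

```python
# Hypothetical per-component impact scores on users' warmth and
# competence perceptions (positive = co-creation, negative = co-destruction)
components = {
    "medication reminder": {"warmth": 1, "competence": 2},
    "camera monitoring": {"warmth": -2, "competence": 1},  # privacy concerns
    "comforting dialogue": {"warmth": 2, "competence": 0},
}

def net_impact(impact: dict) -> int:
    # Aggregate the positive and negative effects for one component
    return impact["warmth"] + impact["competence"]

# Forgo components whose disadvantages outweigh the benefits
retained = [name for name, impact in components.items()
            if net_impact(impact) >= 0]
print(retained)  # prints ['medication reminder', 'comforting dialogue']
```

In practice the scores would come from user research (e.g. the interpretative methods discussed above) rather than being assigned by the manager, and a simple sum is only one of many possible aggregation rules.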

Develop the service offerings from social robots iteratively. The developed framework of value co-creation/destruction potential of social robots (Figure 4) offers service managers a stepwise process for developing service offerings that enable users to achieve desired value outcomes and avoid others. Performing the process through multiple iterations allows starting with some incremental offerings (e.g. one functionality) to obtain user feedback before adjusting the robotic design and adding further incremental offerings during the next iteration. This iterative approach is increasingly common in designing services (Mahr et al., 2013) because it reduces the risk of mis-development inherent to all-inclusive solutions and increases the likelihood of user acceptance of the radically new technology.

Figure 4 depicts the iterative process with three steps that each link a theoretical proposition to a managerial implication.

Future research agenda

This study focusses on social robots in elderly care services, reflecting the societal relevance of their use in this sector, as well as the growing challenges associated with elderly user segments. However, ample opportunities remain for further exploration of social robots in other healthcare services and in frontlines in general. Table II provides a list of suggested research topics, with the research questions organized according to the three theoretical propositions at the core of Figure 4. The list does not attempt to be exhaustive; rather, it offers directions for researchers interested in social robots in services, value-related phenomena, service experiences and social cognition.


This conceptual paper focusses on the value of social robots in services, an area which is still in its infancy. The paper argues that because social robots’ human-like appearance, minds and behaviours activate users’ social cognition mechanisms, different personal values might become activated than when interacting with non-human-like technologies. The three propositions resulting from our study are:

  • social robots in services offer users systems of value propositions leveraging affective and cognitive resources;

  • users’ personal values become salient through interactions with social robots’ affective and cognitive resources; and

  • users evaluate social robots’ value co-creation and co-destruction potential according to perceived warmth and competence of the robot.

A future research agenda based on these three propositions offers relevant, conceptually robust directions for stimulating the advancement of knowledge and understanding of the influences of social robots on users’ values and value co-creation. As the domain of service robots is still nascent, we trust that this future research agenda inspires scholars and practitioners alike to explore the value co-creation potential and co-destruction challenges of social robots through a social cognition lens of users in diverse service settings.


Figure 1: Congruence between value propositions and users’ values

Figure 2: Robot typology based on cognitive and affective resources

Figure 3: Basic personal values stimulated by social robots in services

Figure 4: Iterative framework of theoretical propositions and managerial implications for social robots in services

Table I: Services literature on robots

Bolton et al. (2018)
  Warmth: Robots capable of exhibiting natural-appearing social qualities such as cuddling; machines cannot demonstrate empathy, but it is possible to create robots that display signs of empathy
  Competence: Robots are already showing promise in assembling IKEA furniture without human assistance
  Value co-creation: Robots can be seen as a service innovation that brings new value propositions; there is a need for an integrated theoretical perspective on how value co-creation takes place within and across the digital, physical and social realms
  Value co-destruction: Tensions exist that, unless addressed, may inhibit the conditions necessary for value co-creation between robots and other actors in a network

Čaić et al. (2018)
  Warmth: Robots can offer social contact as a supporting function
  Competence: Robots can offer an enabler role, which is based on the robot’s competence
  Value co-creation: Robots present a value co-creation potential
  Value co-destruction: Robots present a value co-destruction potential

Fan et al. (2016)
  Warmth: Robots can offer human-like traits such as warmth as a result of anthropomorphism

Huang and Rust (2018)
  Warmth: Robots have the potential to offer intuitive and empathetic intelligence
  Competence: Robots have the potential to offer mechanical and analytical intelligence

Keating et al. (2018)
  Warmth: Robots do not have emotional intelligence, only artificially programmed emotions
  Value co-creation: Service research mainly takes a humanistic approach with limited attention for technology, assuming that humans are responsible for the definition of value propositions and for value co-creation

Khaksar et al. (2017)
  Warmth: The PaPeRo robot is designed for emotional interaction
  Value co-creation: Elderly perceive value from co-creating with social robots

Marinova et al. (2017)
  Warmth: Robots can avoid FLEs’ personal mood fluctuations and, on the positive side, enhance social interaction and engagement (relational, social)
  Competence: Robots have the potential to offer instrumental and epistemic qualities
  Value co-creation: Smart technologies (i.e. robots) substitute or complement frontline employees to coproduce value for the frontline employee and/or customer

Rafaeli et al. (2017)
  Warmth: The emotional competence of an employee can be far superior to technology (i.e. a robot) when delivering “bad news”
  Warmth and competence: The robot is introduced in frontline research emphasizing both warmth and competence
  Value co-creation: Future research is needed on psychological mechanisms that contribute to the transfer of technological functionalities to customers’ value experience

Singh et al. (2017)
  Warmth: Humanoid robots have the potential to enable interactions that are sufficiently rich with social content and cues to lubricate service interactions with humans
  Competence: Frontline interfaces (i.e. robots) have the potential to offer knowledge and problem-solving abilities

Van Doorn et al. (2017)
  Warmth and competence: Social cognition (warmth and competence) mediates automated social presence’s effect on customer service outcomes

Wirtz et al. (2018)
  Warmth and competence: Warmth and competence are two fundamental dimensions of social perception that together account almost entirely for how people characterize others; warmth relates to the emotional tasks of frontline actors and competence to the cognitive tasks
  Value co-creation: At airports, a robot to assist passengers could provide customer value
  Value co-destruction: During the service encounter, customers often place a premium on pleasant relations with service employees, which provide emotional and social value; however, by 2020, an estimated 85 per cent of all customer interactions will take place without a human agent

Table II: Future research agenda

Social robots in services offer users value propositions leveraging affective and cognitive resources:

  • Using insights from different network actors, is it possible to establish relevant design criteria for developing valuable robotic services? In addition to human-like appearance and behaviour, what are other important social robot design criteria?

  • What are determinants of robotic solutions that are simultaneously customizable (to customer needs), (technically) feasible and viable (with a network operations model)?

  • In which service contexts are robots’ affective resources more important? In which contexts are cognitive resources more important? How do different configurations of cognitive and affective resources affect perceptions of the humanness of robots?

  • Will social robots ever be capable of genuine emotions? If so, how will empathy and authentic companionship obtained from social robots affect service experiences?

  • Can the transformative potential of human-like robots be optimized by addressing the interplay of physical-psychosocial-cognitive health?

Users’ personal values become salient through interactions with social robots’ affective and cognitive resources:

  • Which value priorities are predominant in different service contexts and different customer segments?

  • Do value priorities change when moving from a focal actor level to a value constellation level (micro to meso/macro)?

  • How can the complexity of contradicting user expectations and value priorities be translated into the design of social robots?

  • Do salient personal values differ depending on whether users anticipate (prior to technology development) or actually experience interactions with social robots in services?

Users evaluate social robots’ value co-creation and co-destruction potential according to the dimensions of social cognition:

  • Which element of social cognition, warmth or competence, is predominant when evaluating service interactions with social robots? In human–human interactions, warmth takes precedence (Fiske et al., 2007); in human–robot interactions, the dynamics of warmth and competence still need to be addressed

  • Do social robots build trust primarily through competence or warmth?

  • How do users’ evaluations of value co-creation/destruction potential change over time?

  • How can varying evaluations of value co-creation and value co-destruction potential be addressed among a diverse set of stakeholders? This is particularly important for network services, in which value co-creation for one actor might imply value co-destruction for another

  • How can human–robot interactions be designed to maximize network well-being?

  • Which ethical considerations (e.g. privacy, dehumanization, social deprivation and lack of agency) affect evaluations of value co-creation/destruction potential?

  • What effects do social robots have on users’ and service providers’ roles in value co-creating networks? How do they affect evaluations of social robots? Do novel role distributions and role-related tasks affect the quality of service co-created with an automated actor?



[1] Inspired by the book Robo sapiens: Evolution of a New Species by Menzel and d'Aluisio (2001).


Abubshait, A. and Wiese, E. (2017), “You look human, but act like a machine: agent appearance and behavior modulate different aspects of human–robot interaction”, Frontiers in Psychology, Vol. 8, p. 1393, doi: 10.3389/fpsyg.2017.01393.

Adolphs, R. (1999), “Social cognition and the human brain”, Trends in Cognitive Sciences, Vol. 3 No. 12, pp. 469-479.

Baron, S. and Harris, K. (2010), “Toward an understanding of consumer perspectives on experiences”, Journal of Services Marketing, Vol. 24 No. 7, pp. 518-531.

Bartneck, C. and Forlizzi, J. (2004), “A design-centred framework for social human-robot interaction”, Proceedings of 13th IEEE International Workshop on Robot and Human Interactive Communication, pp. 591-594.

Black, H.G. and Gallan, A.S. (2015), “Transformative service networks: cocreated value as well-being”, The Service Industries Journal, Vol. 35 Nos 15/16, pp. 826-845.

Boksberger, P.E. and Melsen, L. (2011), “Perceived value: a critical examination of definitions, concepts and measures for the service industry”, Journal of Services Marketing, Vol. 25 No. 3, pp. 229-240.

Bolton, R.N., McColl-Kennedy, J.R., Cheung, L., Gallan, A., Orsingher, C., Witell, L. and Zaki, M. (2018), “Customer experience challenges: bringing together digital, physical and social realms”, Journal of Service Management, Vol. 29 No. 5, pp. 776-808.

Breazeal, C. (2001), “Affective interaction between humans and robots”, European Conference on Artificial Life, Springer, Berlin, Heidelberg, pp. 582-591.

Breazeal, C. (2004), “Social interactions in HRI: the robot view”, IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), Vol. 34 No. 2, pp. 181-186.

Breidbach, C.F., Kolb, D.G. and Srinivasan, A. (2013), “Connectivity in service systems: does technology-enablement impact the ability of a service system to co-create value?”, Journal of Service Research, Vol. 16 No. 3, pp. 428-441.

Broadbent, E. (2017), “Interactions with robots: the truths we reveal about ourselves”, Annual Review of Psychology, Vol. 68, pp. 627-652.

Broadbent, E., Stafford, R. and MacDonald, B. (2009), “Acceptance of healthcare robots for the older population: review and future directions”, International Journal of Social Robotics, Vol. 1 No. 4, pp. 319-330.

Broekens, J., Heerink, M. and Rosendal, H. (2009), “Assistive social robots in elderly care: a review”, Gerontechnology, Vol. 8 No. 2, pp. 94-103.

Čaić, M., Odekerken-Schröder, G. and Mahr, D. (2018), “Service robots: value co-creation and co-destruction in elderly care networks”, Journal of Service Management, Vol. 29 No. 2, pp. 178-205.

Čaić, M., Mahr, D., Odekerken-Schröder, G., Holmlid, S. and Beumers, R. (2017), “Moving towards network-conscious service design: leveraging network visualisations”, Touchpoint; the Journal of Service Design, Vol. 9 No. 1, pp. 60-63.

Carstensen, L.L. (1992), “Social and emotional patterns in adulthood: support for socioemotional selectivity theory”, Psychology and Aging, Vol. 7 No. 3, pp. 331-338.

Chaminade, T. and Cheng, G. (2009), “Social cognitive neuroscience and humanoid robotics”, Journal of Physiology-Paris, Vol. 103 Nos 3/5, pp. 286-295.

Chandler, J.D. and Lusch, R.F. (2015), “Service systems: a broadened framework and research agenda on value propositions, engagement, and service experience”, Journal of Service Research, Vol. 18 No. 1, pp. 6-22.

Davis, F.D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13 No. 3, pp. 319-340.

Dell Technologies (2018), “Realizing 2030: a divided vision of the future”, available at: (accessed 28 June 2018).

Diener, E., Oishi, S. and Lucas, R.E. (2003), “Personality, culture, and subjective well-being: emotional and cognitive evaluations of life”, Annual Review of Psychology, Vol. 54 No. 1, pp. 403-425.

Echeverri, P. and Skålén, P. (2011), “Co-creation and co-destruction: a practice-theory based study of interactive value formation”, Marketing Theory, Vol. 11 No. 3, pp. 351-373.

European Commission (2018), “The 2018 ageing report: underlying assumptions & projections”, available at: (accessed 17 June 2018).

Eyssel, F. (2017), “An experimental psychological perspective on social robotics”, Robotics and Autonomous Systems, Vol. 87, pp. 363-371.

Fan, A., Wu, L. and Mattila, A.S. (2016), “Does anthropomorphism influence customers’ switching intentions in the self-service technology failure context?”, Journal of Services Marketing, Vol. 30 No. 7, pp. 713-723.

Feil-Seifer, D. and Matarić, M.J. (2005), “Defining socially assistive robotics”, 2005 IEEE Proceedings of the 9th International Conference on Rehabilitation Robotics, Chicago, pp. 465-468.

Feil-Seifer, D., Skinner, K. and Matarić, M.J. (2007), “Benchmarks for evaluating socially assistive robotics”, Interaction Studies, Vol. 8 No. 3, pp. 423-439.

Fishburn, P.C. (1970), Utility Theory for Decision Making, No. RAC-R-105, Research Analysis Corp, McLean, VA.

Fiske, S.T., Cuddy, A.J. and Glick, P. (2007), “Universal dimensions of social cognition: warmth and competence”, Trends in Cognitive Sciences, Vol. 11 No. 2, pp. 77-83.

Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003), “A survey of socially interactive robots”, Robotics and Autonomous Systems, Vol. 42 Nos 3/4, pp. 143-166.

Frith, C.D. and Frith, U. (2007), “Social cognition in humans”, Current Biology, Vol. 17 No. 16, pp. R724-R732.

Frith, C.D. and Frith, U. (2012), “Mechanisms of social cognition”, Annual Review of Psychology, Vol. 63, pp. 287-313.

Frow, P., McColl-Kennedy, J.R. and Payne, A. (2016), “Co-creation practices: their role in shaping a health care ecosystem”, Industrial Marketing Management, Vol. 56, pp. 24-39.

Gallarza, M.G., Arteaga, F., Del Chiappa, G., Gil-Saura, I. and Holbrook, M.B. (2017), “A multidimensional service-value scale based on Holbrook’s typology of customer value: bridging the gap between the concept and its measurement”, Journal of Service Management, Vol. 28 No. 4, pp. 724-762.

Gray, H.M., Gray, K. and Wegner, D.M. (2007), “Dimensions of mind perception”, Science, Vol. 315 No. 5812, p. 619.

Green, T., Hartley, N. and Gillespie, N. (2016), “Service provider’s experiences of service separation: the case of telehealth”, Journal of Service Research, Vol. 19 No. 4, pp. 477-494.

Grönroos, C. (2017), “On value and value creation in service: a management perspective”, Journal of Creating Value, Vol. 3 No. 2, pp. 125-141.

Grönroos, C. and Gummerus, J. (2014), “The service revolution and its marketing implications: service logic vs service-dominant logic”, Managing Service Quality: An International Journal, Vol. 24 No. 3, pp. 206-229.

Grönroos, C. and Voima, P. (2013), “Critical service logic: making sense of value creation and co-creation”, Journal of the Academy of Marketing Science, Vol. 41 No. 2, pp. 133-150.

Heerink, M., Kröse, B., Evers, V. and Wielinga, B. (2010), “Assessing acceptance of assistive social agent technology by older adults: the Almere model”, International Journal of Social Robotics, Vol. 2 No. 4, pp. 361-375.

Helkkula, A., Kelleher, C. and Pihlström, M. (2012), “Characterizing value as an experience: implications for service researchers and managers”, Journal of Service Research, Vol. 15 No. 1, pp. 59-75.

Holbrook, M.B. (1999), “Introduction to consumer value”, in Holbrook, M.B. (Ed.), Consumer Value: A Framework for Analysis and Research, Routledge, London, pp. 1-28.

Homer, P.M. and Kahle, L.R. (1988), “A structural equation test of the value-attitude-behavior hierarchy”, Journal of Personality and Social Psychology, Vol. 54 No. 4, pp. 638-646.

Huang, M.H. and Rust, R.T. (2018), “Artificial intelligence in service”, Journal of Service Research, Vol. 21 No. 2, pp. 155-172.

International Federation of Robotics (2015), “World robotics survey: service robots are conquering the world”, available at: (accessed 29 June 2018).

Johnson, D.O., Cuijpers, R.H., Juola, J.F., Torta, E., Simonov, M., Frisiello, A., Bazzani, M., Yan, W., Weber, C., Wermter, S. and Meins, N. (2014), “Socially assistive robots: a comprehensive approach to extending independent living”, International Journal of Social Robotics, Vol. 6 No. 2, pp. 195-211.

Kaartemo, V. and Helkkula, A. (2018), “A systematic review of artificial intelligence and robots in value co-creation: current status and future research avenues”, Journal of Creating Value, Vol. 4 No. 2, pp. 1-18.

Keating, B.W., McColl-Kennedy, J.R. and Solnet, D. (2018), “Theorizing beyond the horizon: service research in 2050”, Journal of Service Management, Vol. 29 No. 5, pp. 766-775.

Khaksar, S.M.S., Shahmehr, F.S., Khosla, R. and Chu, M.T. (2017), “Dynamic capabilities in aged care service innovation: the role of social assistive technologies and consumer-directed care strategy”, Journal of Services Marketing, Vol. 31 No. 7, pp. 745-759.

KPMG (2016), Social Robots: 2016's New Breed of Social Robots Is Ready to Enter Your World, KPMG Advisory N.V., Arnhem.

KPMG (2017), The Changing Nature of Work: The Relevant Workforce, KPMG International.

Lages, L.F. and Fernandes, J.C. (2005), “The SERPVAL scale: a multi-item instrument for measuring service personal values”, Journal of Business Research, Vol. 58 No. 11, pp. 1562-1572.

Larivière, B., Bowen, D., Andreassen, T.W., Kunz, W., Sirianni, N.J., Voss, C., Wünderlich, N.V. and De Keyser, A. (2017), “‘Service encounter 2.0’: an investigation into the roles of technology, employees and customers”, Journal of Business Research, Vol. 79, pp. 238-246.

Lee, H.R., Šabanović, S. and Stolterman, E. (2016), “How humanlike should a social robot be: a user-centered exploration”, 2016 AAAI Spring Symposium Series.

Mahr, D., Kalogeras, N. and Odekerken-Schröder, G. (2013), “A service science approach for improving healthy food experiences”, Journal of Service Management, Vol. 24 No. 4, pp. 435-471.

Marinova, D., de Ruyter, K., Huang, M.H., Meuter, M.L. and Challagalla, G. (2017), “Getting smart: learning from technology-empowered frontline interactions”, Journal of Service Research, Vol. 20 No. 1, pp. 29-42.

Marketing Science Institute (2018), “Research priorities 2018-2020”, Marketing Science Institute, Cambridge, available at: (accessed 13 July 2018).

Meltzoff, A.N. (2007), “‘Like me’: a foundation for social cognition”, Developmental Science, Vol. 10 No. 1, pp. 126-134.

Mende, M., Scott, M.L., van Doorn, J., Shanks, I. and Grewal, D. (2017), “Service robots rising: how humanoid robots influence service experiences and food consumption”, Marketing Science Institute Working Paper Series 2017, Report No. 17-125.

Menzel, P. and d'Aluisio, F. (2001), Robo Sapiens: Evolution of a New Species, MIT Press, Cambridge, MA.

Meuter, M.L., Bitner, M.J., Ostrom, A.L. and Brown, S.W. (2005), “Choosing among alternative service delivery modes: an investigation of customer trial of self-service technologies”, Journal of Marketing, Vol. 69 No. 2, pp. 61-83.

Mori, M. (1970), “The uncanny valley”, Energy, Vol. 7 No. 4, pp. 33-35.

Ostrom, A.L., Parasuraman, A., Bowen, D.E., Patricio, L., Voss, C.A. and Lemon, K. (2015), “Service research priorities in a rapidly changing context”, Journal of Service Research, Vol. 18 No. 2, pp. 127-159.

Paauwe, R.A., Hoorn, J.F., Konijn, E.A. and Keyson, D.V. (2015), “Designing robot embodiments for social interaction: affordances topple realism and aesthetics”, International Journal of Social Robotics, Vol. 7 No. 5, pp. 697-708.

Parasuraman, A. (1997), “Reflections on gaining competitive advantage through customer value”, Journal of the Academy of Marketing Science, Vol. 25 No. 2, pp. 154-161.

Payne, A., Frow, P. and Eggert, A. (2017), “The customer value proposition: evolution, development, and application in marketing”, Journal of the Academy of Marketing Science, Vol. 45 No. 4, pp. 467-489.

Perry, J.C., Rosen, J. and Burns, S. (2007), “Upper-limb powered exoskeleton design”, IEEE/ASME Transactions on Mechatronics, Vol. 12 No. 4, pp. 408-417.

Pineau, J., Montemerlo, M., Pollack, M., Roy, N. and Thrun, S. (2003), “Towards robotic assistants in nursing homes: challenges and results”, Robotics and Autonomous Systems, Vol. 42 Nos 3/4, pp. 271-281.

Pino, M., Boulay, M., Jouen, F. and Rigaud, A.S. (2015), “Are we ready for robots that care for us? Attitudes and opinions of older adults toward socially assistive robots”, Frontiers in Aging Neuroscience, Vol. 7, pp. 1-15.

Plé, L. and Chumpitaz Cáceres, R. (2010), “Not always co-creation: introducing interactional co-destruction of value in service-dominant logic”, Journal of Services Marketing, Vol. 24 No. 6, pp. 430-437.

Rafaeli, A., Altman, D., Gremler, D.D., Huang, M.H., Grewal, D., Iyer, B., Parasuraman, A. and de Ruyter, K. (2017), “The future of frontline research: invited commentaries”, Journal of Service Research, Vol. 20 No. 1, pp. 91-99.

Ransbotham, S., Kiron, D., Gerbert, P. and Reeves, M. (2017), “Reshaping business with artificial intelligence: closing the gap between ambition and action”, MIT Sloan Management Review, available at: (accessed 28 June 2018).

Reeves, B. and Nass, C.I. (1996), The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places, Cambridge University Press, New York, NY.

Rijcken, C. (2018), “The final pharmacist?”, LinkedIn, available at: (accessed 26 February 2018).

Robinson, H., MacDonald, B. and Broadbent, E. (2014), “The role of healthcare robots for older people at home: a review”, International Journal of Social Robotics, Vol. 6 No. 4, pp. 575-591.

Rokeach, M. (1973), The Nature of Human Values, Free Press, New York, NY.

Schwartz, R. (2015), “Meet the robot bear who will care for Japan's elderly”, available at: (accessed 7 July 2018).

Schwartz, S.H. (1992), “Universals in the content and structure of values: theoretical advances and empirical tests in 20 countries”, in Advances in Experimental Social Psychology, Vol. 25, Academic Press, Boston, pp. 1-65.

Schwartz, S.H. (2012), “An overview of the Schwartz theory of basic values”, Online Readings in Psychology and Culture, Vol. 2 No. 1, doi: 10.9707/2307-0919.1116.

Sharkey, A. and Sharkey, N. (2011), “Children, the elderly, and interactive robots”, IEEE Robotics & Automation Magazine, Vol. 18 No. 1, pp. 32-38.

Sharkey, A. and Sharkey, N. (2012), “Granny and the robots: ethical issues in robot care for the elderly”, Ethics and Information Technology, Vol. 14 No. 1, pp. 27-40.

Singh, J., Brady, M., Arnold, T. and Brown, T. (2017), “The emergent field of organizational frontlines”, Journal of Service Research, Vol. 20 No. 1, pp. 3-11.

Skålén, P., Gummerus, J., von Koskull, C. and Magnusson, P.R. (2015), “Exploring value propositions and service innovation: a service-dominant logic study”, Journal of the Academy of Marketing Science, Vol. 43 No. 2, pp. 137-158.

Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S. and Leyton-Brown, K. (2016), “Artificial intelligence and life in 2030. One hundred year study on artificial intelligence”, Report of the 2015-2016 Study Panel.

United Nations (2017), “World population prospects: key findings & advance tables”, available at: (accessed 6 July 2018).

van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D. and Petersen, J.A. (2017), “Domo arigato Mr Roboto: emergence of automated social presence in organizational frontlines and customers’ service experiences”, Journal of Service Research, Vol. 20 No. 1, pp. 43-58.

Vargo, S.L. and Lusch, R.F. (2004), “Evolving to a new dominant logic for marketing”, Journal of Marketing, Vol. 68 No. 1, pp. 1-17.

Vargo, S.L. and Lusch, R.F. (2016), “Institutions and axioms: an extension and update of service-dominant logic”, Journal of the Academy of Marketing Science, Vol. 44 No. 1, pp. 5-23.

Vargo, S.L., Maglio, P.P. and Akaka, M.A. (2008), “On value and value co-creation: a service systems and service logic perspective”, European Management Journal, Vol. 26 No. 3, pp. 145-152.

Vincent, J. (2017), “Walmart is using shelf-scanning robots to audit its stores”, The Verge, available at: (accessed 20 February 2018).

Waytz, A., Gray, K., Epley, N. and Wegner, D.M. (2010), “Causes and consequences of mind perception”, Trends in Cognitive Sciences, Vol. 14 No. 8, pp. 383-388.

Waytz, A., Heafner, J. and Epley, N. (2014), “The mind in the machine: anthropomorphism increases trust in an autonomous vehicle”, Journal of Experimental Social Psychology, Vol. 52, pp. 113-117.

Wiese, E., Metta, G. and Wykowska, A. (2017), “Robots as intentional agents: using neuroscientific methods to make robots appear more social”, Frontiers in Psychology, Vol. 8, Article 1663.

Wirtz, J., Patterson, P., Kunz, W., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2018), “Brave new world: service robots in the frontline”, Journal of Service Management, Vol. 29 No. 5, pp. 907-931.

Woodruff, R.B. (1997), “Customer value: the next source for competitive advantage”, Journal of the Academy of Marketing Science, Vol. 25 No. 2, pp. 139-153.

Wykowska, A., Chaminade, T. and Cheng, G. (2016), “Embodied artificial agents for understanding human social cognition”, Philosophical Transactions of the Royal Society B, Vol. 371 No. 1693.

Yanco, H.A. and Drury, J. (2004), “Classifying human-robot interaction: an updated taxonomy”, Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, Vol. 3, pp. 2841-2846.

Young, J.E., Hawkins, R., Sharlin, E. and Igarashi, T. (2009), “Toward acceptable domestic robots: applying insights from social psychology”, International Journal of Social Robotics, Vol. 1 No. 1, pp. 95-108.

Zeithaml, V.A. (1988), “Consumer perceptions of price, quality, and value: a means-end model and synthesis of evidence”, The Journal of Marketing, Vol. 52 No. 3, pp. 2-22.

Zhu, X. and Zolkiewski, J. (2015), “Exploring service failure in a business-to-business context”, Journal of Services Marketing, Vol. 29 No. 5, pp. 367-379.

Ziemke, T. (2003), “What’s that thing called embodiment?”, Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 25 No. 25, pp. 1305-1310.

Further reading

Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009), “Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots”, International Journal of Social Robotics, Vol. 1 No. 1, pp. 71-81.

Dautenhahn, K. (2007), “Socially intelligent robots: dimensions of human–robot interaction”, Philosophical Transactions of the Royal Society B: Biological Sciences, Vol. 362 No. 1480, pp. 679-704.

Dautenhahn, K., Woods, S., Kaouri, C., Walters, M.L., Koay, K.L. and Werry, I. (2005), “What is a robot companion – friend, assistant or butler?”, IROS 2005 Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Edmonton, pp. 1192-1197.

Parasuraman, A. and Colby, C.L. (2015), “An updated and streamlined technology readiness index: TRI 2.0”, Journal of Service Research, Vol. 18 No. 1, pp. 59-74.

Ray, C., Mondada, F. and Siegwart, R. (2008), “What do people expect from robots?”, IROS 2008 Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, pp. 3816-3821.

Roy, N., Baltus, G., Fox, D., Gemperle, F., Goetz, J., Hirsch, T., Margaritis, D., Montemerlo, M., Pineau, J., Schulte, J. and Thrun, S. (2000), “Towards personal service robots for the elderly”, Workshop on Interactive Robots and Entertainment (WIRE 2000), Vol. 25.


This project received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 642116. The information and views set out in this study are those of the author(s) and do not necessarily reflect the official opinion of the European Union. Neither the European Union institutions and bodies nor any person acting on their behalf may be held responsible for the use which may be made of the information contained therein.

Corresponding author

Martina Čaić can be contacted at:

About the authors

Martina Čaić is a PhD Candidate at the Department of Marketing and Supply Chain Management at Maastricht University and Marie Curie Fellow at the Service Design for Innovation Network (SDIN). Her research focuses on customer experiences in value networks, with a particular interest in robotic and ambient assisted living technologies.

Dominik Mahr is Professor of Digital Innovation & Marketing at Maastricht University. He is also Scientific Director of the Service Science Factory. His research centers on recent marketing and innovation phenomena, including open innovation, virtual social communities, online innovation brokers, customer co-creation, service design and the development of new services. He has published in high-impact journals. Prior to his academic career, Dr Mahr worked for several years in management and marketing consultancies, operating in industries such as high tech, automotive, real estate and insurance.

Gaby Odekerken-Schröder is Professor in customer-centric service science at Maastricht University, the Netherlands. Her main research interests are service innovation, relationship management, customer loyalty, and service failure and recovery. She is a partner of the European consortium Service Design for Innovation, and her research has been published in Journal of Marketing, MIS Quarterly, Journal of Retailing, Journal of Service Research, Journal of Service Management, Journal of Business Research, etc.
