The vision of robotics in the home promises increased convenience, comfort, companionship and greater security for users. The robot industry risks causing harm to users, being rejected by society at large or being regulated in overly prescriptive ways if robots are not developed in a socially responsible manner. The purpose of this paper is to explore some of the challenges and requirements for designing responsible domestic robots.
The paper examines definitions of robotics and the current commercial state of the art. In particular, it considers the emerging technological trends, such as smart homes, that are already embedding computational agents in the fabric of everyday life. The paper then explores the role of values in design, aligning with human computer interaction, and considers the importance of the home as a deployment setting for robots. The paper examines what responsibility in robotics means and draws lessons from past home information technologies. An exploratory pilot survey was conducted to understand user concerns about different aspects of domestic robots such as form, privacy and trust. The paper provides these findings, married with literature analysis from across technology law, computer ethics and computer science.
By drawing together both empirical observations and conceptual analysis, this paper concludes that user centric design is needed to create responsible domestic robotics in the future.
This multidisciplinary paper provides conceptual and empirical research from different domains to unpack the challenges of designing responsible domestic robotics. In doing this, the paper seeks to bridge the gap between the normative dimensions of how responsible robots should be built, and the practical dimensions of how people want to live with them in context.
Urquhart, L., Reedman-Flint, D. and Leesakul, N. (2019), "Responsible domestic robotics: exploring ethical implications of robots in the home", Journal of Information, Communication and Ethics in Society, Vol. 17 No. 2, pp. 246-272. https://doi.org/10.1108/JICES-12-2018-0096
Copyright © 2019, Emerald Publishing Limited
1. Introduction: the robots are coming
The vision of robotics in the home promises increased convenience, comfort, companionship and greater security for users. However, the reality, and impact on users, may not always meet this vision. Fears of robot uprisings are peppered throughout decades of science fiction literature and film (Higbie, 2013). However, visions of technological futures often say more about the period they were written in than actually forecasting what futures might emerge (Reeves, 2012), as we have seen with computer science research into "ubicomp" (Bell and Dourish, 2006). Whilst popular science and cultural visions of robots may not have fully emerged, computational agents have most definitely left the lab and entered daily life in a variety of forms. The Internet of Things (IoT) is incrementally making homes smarter by embedding networked, ambient technologies with varying degrees of autonomy into the physical and social fabric of domestic life. These devices can be for security (smart CCTV and locks), comfort (smart bulbs and thermostats) and entertainment (conversational agents in smart speakers). These artefacts may not all be "robots" in the popular sense of the word, but they are restructuring interactions, social order and relationships in the home. As domestic service robot technologies advance and become more commercially accessible, the smart home will have already changed the domestic setting and laid the groundwork for robots to assimilate. Accordingly, roboticists need to learn from mistakes being made with smart homes, including by designing in more user centric ways. It is important to understand user concerns and respond to these accordingly, to create a more sustainable domestic robot future.
Our paper structure firstly explores changing definitions of domestic robots before considering human computer interaction perspectives on value sensitive, user centric and contextually aware design in the home. Secondly, we unpack the nature of responsibility, arguing roboticists need to understand and respond to user concerns. This often does not occur currently, creating technologies unfit for purpose and disruptive to the social order of the home. We conclude by presenting user concerns from our small-scale exploratory survey, focusing particularly on trust, privacy and form of robots as key hurdles for creating responsible domestic robotics. We do this to bridge the gap between the normative dimensions of how responsible robots should be built, and the practical dimensions of how people want to live with them in context.
2. Defining domestic robots
Standards are a good place to start navigating a definition of domestic robots, as they can show what multiple stakeholder consensus is around a topic. The International Federation of Robotics/United Nations Economic Commission for Europe were influential in classifying robots, culminating in the ISO standard 8373:2012 on Robots and Robotic devices. This standard differentiates between, among others, industrial, mobile, service, personal service and professional service robots. According to them, a robot is:
[…] an actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks. Autonomy in this context means the ability to perform intended tasks based on current state and sensing, without human intervention (ISO 8373, s2.08).
We focus on “service robots”, which are “robot[s] that perform useful tasks for humans or equipment excluding industrial automation applications” (ISO 8373, s2.10) and particularly the sub category of “personal service robots”; “service robots for personal use […] used for a non-commercial task, usually by lay persons […] (i.e.) domestic servant robot, automated wheelchair, personal mobility assist robot, and pet exercising robot” (ISO 8373, s2.11). As we can see, these definitions foreground the materiality of the artefact (i.e. being able to actuate physically), the varying degrees of autonomy they possess to shape the environment, the relationship of utility to humans and the split between industrial and personal.
If we look more widely, by turning to academic sources, we see robots framed slightly differently. For Mataric (2007, p. 2) "a robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals". Bryson and Winfield state robots are "artefacts that sense and act in the physical world in real time", and that a smartphone counts as a robot as it can sense when it is falling or its orientation changes (Bryson and Winfield, 2017, p. 117). Both definitions encapsulate the ability to act in the physical world, but do not necessarily require the robots to be physical themselves. In providing a more design orientated definition for domestic robots, Bartneck and Forlizzi (2004, p. 2) highlight the interactional aspects, stating "a domestic robot is an autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioural norms expected by the people with whom the robot is intended to interact". This definition foregrounds the interactional aspect, and particularly to what extent robots fit into pre-existing norms and contexts. All of the above perspectives feature in EU legal discussions around civil liability for robots, which recommend defining "smart robots" by focusing on the attributes of:
– the capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data;
– the capacity to learn through experience and interaction;
– the form of the robot’s physical support;
– the capacity to adapt its behaviour and actions to the environment. (European Parliament, 2017, p. 18)
However, in defining robots, neatly separating them from interactive AI becomes a challenge, e.g. human-agent collectives, IBM Watson, Google Duplex, DeepMind AlphaGo etc. Whilst some definitions above focus on the physicality of robots, they do not exclude non-physical, more ethereal robots that actuate in the real world. Given the current trend towards smart homes with integration of more ethereal devices not providing physical interactions, but cognitive support, there is a case for considering interactive AI too. This includes search functionality (conversational agents in different devices, like Amazon Alexa), heating management (smart thermostats like Nest) or observational security of space (Nest Cam).
It is worth briefly reflecting on definitions of AI as digital artefacts that have intelligence, i.e. “capacity to perceive contexts for action […] to act […] to associate contexts to actions” using techniques like speech or pattern recognition (Bryson and Winfield, 2017, p. 117). This wider framing encapsulates many of the domestic IoT technologies. Accordingly, this paper considers interactive artificial intelligence in addition to more material framings of robots.
3. Commercial state of the art
Irrespective of definitions, there are various degrees of agency and artificial intelligence emerging in the domestic setting. The number of service robots in the home is growing at an impressive rate globally. The International Federation of Robotics 2017 Report on World Robotics states there was an annual increase in sales of personal and domestic service robotics from 2015-2016 of 24 per cent to roughly 6.7m robots, with the market valued at US$2.6bn (International Federation of Robotics [IFR], 2017, p. 14). Interestingly, US companies dominate as manufacturers of domestic service robots, whereas 94 per cent of elderly/handicap assistance bots come from Asian/Australian companies (IFR, 2017, p. 18).
The current domestic service robot market includes both start-ups and major manufacturers' offerings. New robot products include Honda's 3E family of modular robot platforms for assisting with mobility and even sports training (Honda, 2018); Panasonic's desktop companion robot, complete with child-like voice to "add realism" (Panasonic, 2017); and Bosch's Mykie, which helps with cooking and projects recipes onto the wall (Clark Thompson, 2017). Humanoid robots are hitting the market too, performing tasks as diverse as conducting funerals (Gibbs, 2017), teaching yoga (Ubtech Robotics Lynx [Gebhard, 2018]) and personal videography (Kuri, 2018).
For macro level insights, IFR (2017, p. 14) states that more than 4.6 million of the domestic robots sold in 2016 were for "vacuum and floor cleaning, lawn-mowing robots, and entertainment and leisure robots, including toy robots, hobby systems, education and research". In Appendix 1 we provide our non-exhaustive analysis of the domestic robot market, as of May 2018. We now turn to human-computer interaction (HCI) to understand why the home is a complex deployment setting for domestic robots.
4. The importance of the home
One of our key arguments is that the growth in domestic internet of things technologies, the so-called smart home, is paving the way for domestic robots. However, the process of integrating IoT into the home has impacts for residents living with these devices. There is a growing interface between domestic IoT and robots. Robots that manage and speak to IoT devices act as mediators for users and intermediaries for services, providing more intuitive interactions between user and devices, e.g. LG's Cloi (Kelion, 2018). Companies such as Amazon, already established in homes through Alexa/Echo, have robotics aspirations too (Gurman and Stone, 2018). We need to learn from the mistakes that are currently being made in terms of responsibility for privacy, security and trust with IoT, to ensure these are not being replicated for robots.
Like robots, IoT has moved from the lab to the home and the consumer market has grown hugely in recent years (Cisco, 2013; Panetta, 2017). Ownership of domestic IoT devices is anticipated to rise significantly, with the OECD predicting by 2022, a family of four will own two connected cars, seven smart light bulbs, five internet-connected power sockets, one intelligent thermostat and so on (OECD, 2013). Whilst these predictions may be optimistic, they are no longer constrained to visions like Weiser’s ubiquitous computing (Weiser, 1993) or Philips Ambient Intelligence (Aarts and Marzano, 2003). An IoT future is here, just perhaps not the one originally envisioned of invisible computers and seamless networking (Bell and Dourish, 2006). However, many market offerings are for goods or services individuals do not know they even want or need (Lee et al., 2017).
Domestic technologies being developed without regard for what users actually want, and neglecting domestic routines and social practices, has been an established challenge in HCI (Rodden and Benford, 2003; Tolmie et al., 2002; Crabtree and Rodden, 2004). Smart home deployments, for example, implement instrumentalist visions whilst neglecting interests of users (Leppänen and Jokinen, 2003). Wilson (2015) found that whilst benefits like increased efficiency, comfort, convenience, energy management, care and security are promised, designers need to look at "how the use and meaning of technologies will be socially constructed and iteratively negotiated, rather than being the inevitable outcome of assumed functional benefits" (p. 466) because homes are "internally differentiated, emotionally loaded, shared and contested places" (p. 470). Furthermore, as with smart homes, a key challenge domestic robots need to address is the notion of users, and how data is managed within the context of shared home occupancy. Co-constructed domestic data often opaquely represent shared digital daily life, impacting meaningful control by occupants of this information (Goulden et al., 2018). There are also issues with domestic politics around managing systems (who has authority to change the smart thermostat controls) and limitations of law (such as data protection rights being individualised and the household exemption in Article 2(2) GDPR) (Tolmie and Crabtree, 2018). These factors need to be considered in going beyond single user control over data and devices towards finding more collective, shared mechanisms (Flintham et al., 2019).
Numerous studies examine how individuals live with smart domestic technologies. A US study found users' frustration was caused by unreliability, devices requiring iterative tweaking over time and security concerns about unauthorised remote access (particularly for locks and home cameras) (Brush et al., 2011). Users still desired such technologies, and a recent study on user perceptions of privacy risks in IoT found that users still purchase these devices despite privacy concerns, showing the privacy paradox continues with IoT (Williams et al., 2017). Mäkinen (2016) found internal tensions for 13 residents around trade-offs with home surveillance systems in Finland, for example balancing a sense of safety and protection of the home against fear of being watched without knowledge, or implications of monitoring other home occupants, such as perceived spying. More recently, Coskun et al. (2018) explored reasons why smart home technologies don't have greater uptake. They found, for example, that users want smart home technology to take over chores, but do not want automation to interfere with pleasurable activities such as cooking, or to go beyond providing comfort to improving their skills. This shows the contested nature of domestic life, and the need to respond to context and users through user centric design approaches. These lessons from smart homes could inform domestic roboticists to support design of systems users actually want.
5. Responsible robot(icists)? The challenges of domestic robots
Robots pose numerous ethical challenges for privacy rights, security management, trust relationships, identity formation and limitations on user autonomy (Coeckelbergh, 2012; Leenes and Lucivero, 2014). As IoT paves the way for domestic robots, security and privacy vulnerabilities are arising (Brown, 2015). This can be unintended, such as publicly accessible unsecured IoT devices with video feeds enabling data to move outside of contextually appropriate boundaries of the home (Nissenbaum, 2009; Osborne, 2016; Wetmore, 2018). Similarly, it can be intended, driven by business models of data repurposing, such as Roomba selling floor plans of user homes (Jones, 2017). Private practices are often made visible in the process of human robot interaction, and data about these practices is used as a resource in the provision of new value-added services with robots; hence perceptions of robots and interactive AI as tools of surveillance become established (Calo, 2010; Sharkey and Sharkey, 2012; Schafer and Edwards, 2017). Inferences about behaviour based on social sorting (Lyon, 2003) of data doubles (Haggerty and Ericson, 2000) can be used for social control and manipulation, with the home setting making intimate behaviours observable and auditable. Opacity around the ecosystem of stakeholders interested in knowing how users live makes it hard to know if and why they are being watched: is it to monetise, police or manage their actions? Examples of products being pulled because of privacy concerns, particularly with child users, e.g. Mattel Aristotle (Hern, 2017), highlight the public perception of such risks.
Public attitudes remain mixed. A 2015 Eurobarometer study on autonomous systems (EU, 2015) found "Eight in ten Europeans (82 per cent) who use robots think well of them, while nine in ten (90 per cent) among them would purchase one". A more recent 2017 Eurobarometer study (EU, 2017) of c28,000 EU citizens states that 35 per cent would be comfortable with robot support at work or delivering goods, but only 26 per cent when it is for companionship or services when elderly/infirm or for performing an operation. They find, "overall, 88 per cent of respondents agree robots and artificial intelligence are technologies that require careful management" (EU, 2017). Therefore, we now consider questions of responsibility for one stakeholder group in particular: roboticists.
6. The nature of responsibility and the role of roboticists
Responsibility is a loaded concept, having different meanings for different communities, morally, legally and societally. This paper is interested in the responsibilities of roboticists, as opposed to robots themselves. The influential EPSRC Principles of Robotics recognise this divide, targeting their principles towards designers, builders and users of robots. The authors argue "Robots are simply tools of various kinds, albeit very special tools, and the responsibility of making sure they behave well must always lie with human beings." (Boden et al., 2017, p. 125).
Accordingly, in considering responsibilities of roboticists we turn to existing work on how innovators address their wider responsibilities to society. The case for engineers' and developers' duty to look beyond function is that "engineering design is an inherently moral activity" (Verbeek, 2006, p. 368) and "in effect, engineers ought to be considered de facto policymakers, a role that carries implicit ethical duties" (Millar, 2008, p. 4). In foregrounding the needs of users in this, Shneiderman called on "researchers, system designers, managers, implementers, testers and trainers of user interfaces and information systems" to exert influence, moral leadership and responsibility to find "ways to enable users to accomplish their personal and organisational goals whilst pursuing higher societal goals and serving human needs" (Shneiderman, 1990, p. 2).
One popular tool to support the exercise of responsibility is codes of ethics. Professional bodies such as the Association for Computing Machinery (ACM), Institute of Electrical and Electronics Engineers (IEEE) and British Computer Society (BCS) have provided general guidance for members for many years (IEEE, 1963; ACM, 1992). The IEEE Code of Ethics, for example, asks members to consider how their work impacts quality of life of others, and introduces broader notions of responsibility to public welfare, safety and health. However, specific codes are emerging for robots and interactive AI too. These range from the Asilomar AI Principles and ACM US Public Policy Council on Algorithmic Transparency and Accountability to the Japanese Society for AI and the Montreal Declaration for Responsible AI. Within these, the concepts of accountability, respect for human values, privacy and safety, among others, recur (Winfield, 2017). More recently, the high-profile IEEE Ethically Aligned Design Version 2 proposed ethical commitments to human rights, well-being, accountability, transparency and awareness of misuse. A key issue the report raises is legal liability. Similarly, Principle 2 of the EPSRC Principles states "Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws, fundamental rights and freedoms, including privacy" (Boden et al., 2017).
This charge is being taken up by the EU, where efforts to establish civil law liabilities around robotics are underway (European Parliament, 2017). This includes a proposed Code of Ethical Conduct for Robotics Engineers (European Parliament, 2017). Again, it attaches utmost importance to the principles of dignity, privacy and safety of humans. But it also includes principles on designing robots to respect fundamental rights, the precautionary principle, inclusiveness, accountability, safety, reversibility, privacy and maximising benefit/minimising harm. Whilst all can be important for the home, the notes on privacy are especially interesting as they highlight the need for designers to obtain valid consent prior to man-machine interactions. This is a clear challenge for human robot interactions: communicating sufficient information to users in a transparent, temporally sensitive manner.
There is a need for operationalisation and strategies to embed such values in design. A Responsible Research and Innovation approach is important here, as it focuses on practical reflection and interaction with a range of stakeholders, to ensure stewardship for the future, going beyond high level aspirations (Von Schomberg, 2011, p. 9). Reflexivity of designers on their position, knowledge and impact is key, but a sense of responsibility can depend on whether designers are doing more applied or fundamental research (Grimpe et al., 2014). Thus, responsibilities within roboticist communities can be fragmented by role.
Within human computer interaction, the place of human values in design is growing, as the "third wave" of HCI widens the field to consider cultural and societal aspects of computing, as opposed to purely functional aspects (Bødker, 2015). Authors such as Nissenbaum (2005), Flanagan et al. (2008) and Sellen et al. (2009) highlight the need for bringing values into design. Sellen et al. (2009, p. 66) argue for user centricity in the design process, stating:
HCI must also take into account the truly human element, conceptualising ‘users’ as embodied individuals who have desires and concerns and who function within a social, economic and political ecology.
In bringing values into design, “value sensitive design” (VSD) (Friedman et al., 2008; Friedman et al., 2017) has been a key framework, trying to make “moral values part of technological design, research and development.” (Van den Hoven, 2006, p. 67).
Within the VSD framework, values with ethical import should be brought into the design process, i.e. values that "a person or group of people consider important in life" (Friedman et al., 2008, p. 70), such as those that "centre on human well-being, human dignity, justice, welfare and human rights" (Friedman et al., 2008, p. 1180). Criticisms of VSD have focused on "what values", and bringing more situated, local values into design, not just high level values such as those in the codes of ethics above (Le Dantec et al., 2009). Accordingly, for responsible domestic robotics, user centric design strategies such as design ethnographies (Crabtree et al., 2012) and co-design (Steen, 2011) are critical to understanding the real needs and values users want in domestic robotics. There is growing recognition of the importance of these concepts in robotics too; as one prominent definition of human robot interaction states, robots need to "meet the social and emotional needs of their individual users as well as respecting human values" (Dautenhahn, 2018). Furthermore, there are already examples of the use of VSD for robotics, particularly care robots (van Wynsberghe, 2013b, 2013a). Recognising the need to understand user concerns, we conducted a short survey which we now present.
7. Presenting the survey
The small-scale pilot survey was constructed to establish views and concerns of the general public around the emergence of domestic robots. It was informed by existing literature from a breadth of disciplines considering legal and ethical matters around robots, e.g. law, philosophy, computer science, engineering and science fiction. The survey adopted a broad view of "domestic robots" to include interactive artificial intelligence, to capture existing new technology such as Alexa and other personal assistants that respondents may have experience of using. The survey was approved by the University of Nottingham Computer Ethics process, ran from 6th-28th March 2018, and was shared primarily through social media channels (namely Twitter, Facebook and Reddit). Of the 43 respondents to this survey, 18 were identifiably male and 18 female, one non-binary and six remained anonymous. There was an age spread from teenage to over 70 years old, with a concentration from 20-45 years old. The survey was broken into three broad themes, namely: general feelings and experience with robots; trust and interaction; and future thinking. This enabled us to establish current understanding and exposure to robots before exploring views on future usage, ethical guidance and trust in more depth. The findings can be summarised under the following themes.
General Feelings and Experiences: Existing technology such as Alexa, Google Home and other domestic robots such as Roomba (robotic hoover) had been experienced by just over 30 per cent of all respondents, with the remainder citing cost (36.7 per cent), dislike (23.3 per cent) and other reasons (50 per cent). For those who cited other reasons, the main recurring theme was lack of trust/privacy issues as the reason for lack of engagement. As one participant stated, "I don't see a tremendous amount that these devices could do to improve our family life at the moment; certainly not enough to justify the cost and personal data implications".
Privacy and informational harms are major concerns: when asked to state two fears from the introduction of domestic robots, almost 75 per cent of all participants cited concerns around covert listening/privacy/hacking. The challenge for the domestic robot industry will become managing the privacy trade-off for consumers. The responses made clear that people were aware of their privacy being traded for the benefits conveyed by AI/domestic robots, but, in the majority of respondents' opinions, a balance in which the robot's duties outweigh the loss of privacy has yet to be achieved.
On the positive side, 50 per cent of 42 respondents were at least slightly positive about the increase in artificial intelligence in the home. Additionally, the two most cited benefits from the introduction of domestic robots among the 43 respondents were time saving/convenience and companionship/care. When all 43 respondents were asked about a range of tasks for future domestic robots to perform, among others, the top roles were cleaning (95.3 per cent); washing/ironing (79.1 per cent) and medical/care (69.8 per cent). Somewhat contradictorily, in a later question asking what robots should not do, 54.8 per cent of the 31 respondents to that question felt that robots should not be allowed to have childcare/parenting/care roles.
Future Thinking: Respondents were wary of providing domestic robots with legal rights. Only 16.3 per cent of all respondents felt that domestic robots should have any rights protected by law, although the sentience of the robot was largely recognised as the deciding factor. This implies that respondents are not yet comfortable with robots having legal rights, although by pointing to sentience as a variable for legal protection, where robots sit in relation to other species, such as animals, will be a legal challenge for the future.
Independence in ethical control for robots was key, with 78.6 per cent of 42 respondents wanting an independent body to be responsible for controlling any ethical rules driving domestic robots. Interestingly, government was the second most approved controller (57.1 per cent), perhaps demonstrating an inherent trust in governmental control (or perhaps showing a lack of trust in other options). An interesting comment that backed this up is:
I don't trust companies to regulate themselves at all, not under capitalism where they aim to make profits, see Uber evading police controls, Volkswagen messing about with the Diesel fumes, Facebook not caring about people abusing data mined from their service – it’s a mess.
The physical form of the domestic robot matters, and in this survey almost 60 per cent of 42 respondents felt that the physical form a domestic robot takes makes a difference. Some stated, "it's still a machine regardless" and "they perform a function so looks are irrelevant", whilst another stated "the more human like it is the more users are likely to regard it as human. This can have advantages and disadvantages." This suggests the impact of the form domestic robots take remains an unsettled domain. However, robot form (humanoid or not) links back to discussion of robot rights, and more broadly, robot personhood, which is an ongoing debate in the EU (European Parliament, 2017) and wider academic circles (Darling, 2016; Schafer, 2016). As such personhood would also enable responsibility to be passed from roboticist to robot, this remains a contested point (Delcker, 2018).
Trust and Interaction: Over 95 per cent of all respondents either do not implicitly trust (51.2 per cent) or don't know whether they trust (44.2 per cent) domestic robots. Some felt that machines can fail, that errors can be made in programming and that they are susceptible to hacking, which accurately reflects the lack of trust highlighted earlier. Surprisingly, despite the lack of trust and recognition that domestic robots are only machines, 74.4 per cent of all respondents felt that robots could help with feelings of social isolation.
This exploratory survey, whilst not large scale, provides indicative topics for further reflection, particularly around the issue of trust, which we unpack further below.
7.1 Unpacking the themes
Trust is key because, as Holder et al. argue, "user acceptance will be critical to uptake [of robots] and acceptance will be based on trust" (Holder et al., 2016, p. 384). Various aspects of trust are considered below, before turning to safety, privacy, transparency and control.
Human Robot Interaction and Trust: To address the shift from industrial robots to domestic robots that can "communicate with environment, follow human social norms, and mimic human abilities" (Haidegger et al., 2013, p. 1216), a better understanding of how users live is needed. The field of Human-Robot Interaction (HRI) has emerged, "dedicated to understanding, designing, and evaluating robotic systems for use by or with humans" (Goodrich and Schultz, 2007, p. 204). Mataric (2007) set out a comprehensive list of human robot interaction orientated challenges, similar to those outlined above, around safety, privacy, attachment and trust. Attachment is interesting for the domestic environment, as users become attached to their robots. "Roomba users already refuse to have their Roombas replaced when they need repair, insisting on getting the same one back. What happens when the robot is much more interesting, intelligent, and engaging than the Roomba?" (Mataric, 2007, pp. 285-286).
Not all users are so attached, and from an interactional perspective, the line between trustworthiness and distrust can be tenuous (Mataric, 2007). Whilst Wagner (2009) shows that studies indicate humans tend to trust and confide in robots, Pagallo, in contrast, argues that "personal and/or domestic robots will raise a number of psychological issues concerning feelings of subordination, attachment, trustworthiness, etc." (Pagallo, 2013, p. 502). Similarly, Holder et al. (2016) found that people have become more sceptical of robots as the technology has advanced and capabilities have increased. Hence, trust in human-robot interactions has to deal with the legacy that trust is normally formed between humans; as humans and robots co-exist, metrics for trust need to adapt to "the change in a user's perception of a robot from simply being a technology to being a social actor" (Moran et al., 2015, p. 2).
Trust and Robot Form: One basis for trust is the form of the robot, ranging from non-humanoid (e.g. Roomba) to humanoid (e.g. Aeolus) or ethereal interactive AI (e.g. Alexa), as highlighted above. Some robots may utilise more human attributes in their relationships with users, "which can help increase the perceptions of anthropomorphism, including facial features, physical expressiveness, emotions and personality" (Moran et al., 2015, p. 1). Similarly, affective robots have abilities to "[recognize] and [synthesize] emotional cues and response but are still largely incapable of emotional reasoning" (Sullins, 2012, p. 399). However, given the possible emotional connection between human and robot, human psychology can be exploited and user behaviour manipulated (Darling, 2016). Hence, legal and design-based protections are necessary for vulnerable users who could be adversely influenced.
Law and Trust: The law may be able to address public reservations about trusting robots, for example by creating frameworks for ensuring they are safe and respect the privacy of users. A legal approach could also help set a level playing field in the market while regulating and protecting consumers by supporting "trust in brands, trust in functions, trust in privacy, trust in a fair market" (Holder et al., 2016, p. 384). We explore the legal frameworks around safety and data privacy below.
Safety: Currently, there is a lack of coherent legislation governing service robot safety. For example, Directive 93/42/EEC concerning medical devices (as amended by Directive 2007/47/EC) ("Medical Device Directive") and Directive 90/385/EEC on active implantable medical devices ("AIMDD") only apply to care robots dealing with medicine, not to care robots with other functions. Standards, such as ISO 13482, plug this gap. As care robots inherently deal with vulnerable populations, appropriate regulation is necessary (Holder et al., 2016), especially given the multitude of contexts domestic service robots may live in. Accordingly, whilst design can address some challenges, ensuring that the legal frameworks that do exist are applicable is vital to protecting user interests.
Privacy and Data Protection: As Finding 3 states, privacy is a significant concern. With domestic robots, privacy risks are amplified, as they sit within the intimate setting of the home, collecting sensitive data from users longitudinally and profiling their behaviour over time to provide contextually appropriate services. New European data protection frameworks, such as the General Data Protection Regulation 2016 (GDPR) and the proposed ePrivacy Regulation, provide compliance requirements. These include problematic requirements around data portability [Article 20 GDPR; (Urquhart et al., 2017)], accountability [Article 5(2) GDPR; (Urquhart et al., 2018)] and the right to be forgotten (Article 17 GDPR). As in many areas of IT regulation, the fast pace of technological change and the slow legal landscape mean there is an increasing turn to design as a regulatory tool (Lessig, 2006; Urquhart, 2017). Law and policy concepts like privacy by design and default (PbD; Article 25 GDPR) and security by design (Article 32 GDPR) provide the mandate for ensuring personal-data-driven technologies embed safeguards from the beginning, not just after a harm occurs. Supporting how best roboticists can do PbD in practice requires extra thought, as it does for other developers (Luger et al., 2015; Hadar et al., 2018). As Mataric recognises, "Privacy has to be taken seriously when the robot is designed, not after the fact, as a result of users' complaints" (Mataric, 2007, pp. 285-286). Navigating the interface between HRI practitioners and researchers and the law will be critical, as it already is for HCI and law (Urquhart and Rodden, 2017).
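One concrete expression of PbD is data minimisation: sensor records are stripped down to only the fields a given service purpose needs before anything leaves the device. The sketch below illustrates the idea; all field names, purposes and data are invented for illustration and are not drawn from any real robot platform.

```python
# Illustrative privacy-by-design sketch: minimise a hypothetical domestic
# robot's sensor record to only the fields a stated purpose requires,
# before any data leaves the device. All names here are invented.

# Each purpose is allowed only an explicit whitelist of fields.
PURPOSE_FIELDS = {
    "navigation": {"room", "obstacle_map"},
    "cleaning_schedule": {"room", "last_cleaned"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return a copy of the record containing only fields permitted for
    the stated purpose; unknown purposes yield an empty record."""
    allowed = PURPOSE_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "room": "kitchen",
    "obstacle_map": [[0, 1], [1, 0]],
    "last_cleaned": "2019-01-12",
    "audio_snippet": b"...",  # sensitive: never needed for these purposes
}

# Only 'room' and 'last_cleaned' survive; the audio is dropped at source.
print(minimise(record, "cleaning_schedule"))
```

The design point is that the safeguard sits in the data path itself, not in a policy document, so a new service cannot receive a field it was never whitelisted for.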
Transparency and Control: Linked to data protection are questions of transparency and control. The degree of agency a robot has is a significant concern, as this impacts the degree of uncertainty about, and the ability to control, its actions. Oversight of autonomous decisions, and how these are made accountable to users, is as much a design issue as it is a legal one (Edwards and Veale, 2017). It is predicted that robots will eventually achieve a level of autonomy at which "they themselves become the data controller and responsible for compliance with data privacy legislation" (Holder et al., 2016, p. 395), a prediction also supported by Pagallo (2013). However, for now, the focus should be on establishing and operationalising the responsibility of roboticists to their users and, in particular, protecting users' legal rights. Translation between legal frameworks and design guidelines is important for this (Urquhart, 2014).
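Making autonomous decisions accountable to users can start with something as simple as a decision log that records each action together with its inputs and stated reason, so the robot can later explain itself. The sketch below is a minimal illustration under that assumption; the class, action and reason names are invented, not part of any real robot API.

```python
# Illustrative accountability sketch: a hypothetical robot records each
# autonomous decision with its inputs and stated reason, so decisions
# can later be explained to the user. All names are invented.

import time

class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, action: str, inputs: dict, reason: str) -> None:
        """Append one decision with a timestamp, its inputs and reason."""
        self.entries.append({
            "timestamp": time.time(),
            "action": action,
            "inputs": inputs,
            "reason": reason,
        })

    def explain(self, index: int) -> str:
        """Produce a user-readable explanation of a past decision."""
        e = self.entries[index]
        return f"Action '{e['action']}' taken because: {e['reason']}"

log = DecisionLog()
log.record(
    action="postpone_cleaning",
    inputs={"occupancy": True, "noise_limit_db": 40},
    reason="room occupied and cleaning would exceed the noise limit",
)
print(log.explain(0))
```

A real system would need secure, tamper-evident storage and retention limits of its own, but even this minimal structure shows how oversight can be designed in rather than bolted on.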
The growth of smart homes is paving the way for domestic robots, and a multitude of existing challenges around robotics need to be dealt with. Findings from the pilot survey were numerous but highlight the relationships between robots and users around form, privacy and trust. Given the pitfalls currently being experienced with emergent smart homes, there is a responsibility on roboticists to learn from these mistakes and design such robots in legally, socially and ethically responsible ways. A key dimension of this is the need to design technologies after engaging with, understanding and respecting the needs of users. Whilst commitments to many high-level ethical principles are emerging in new codes of conduct for roboticists, these need to be situated and operationalised. The current focus in HCI on values in design is one approach to doing this. Similarly, the turn in law to design for regulation means there is a similar drive to consider end-user interests and rights within the design process. If the roboticists creating domestic robots engage with end-user interests, there is a chance these robots can emerge in a more responsible manner.
Appendix 1. Example domestic robots
Domestic Chores:
A. Vacuum cleaner: iRobot Roomba – robot vacuum that uses intelligent sensors to move through the home, adapting to its surroundings and cleaning floors.
B. Mopping: iRobot Braava – similar to the above, but offering wet mopping, damp sweeping or dry sweeping.
C. Window cleaning: Ecovacs Winbot – cordless vertical window cleaning for the majority of window types.
D. Home butler: Aeolus – humanoid home assistant that can learn and develop in situ, perform basic cleaning and tidying tasks, and integrate with personal assistant technology and smart devices.

Gardening:
A. Lawnmower: Robomow – automatic sensor lawnmower (www.robomow.com).

Pets and Care:
A. Litter tray: Litterbot – automatic self-cleaning litter box.
B. Pets: Sony Aibo dog, resurrected in 2018 after its original 1999 launch; the more threatening Boston Dynamics dog, SpotMini, is also emerging.

Food and Drink:
A. Moley Robotics – robot kitchens (humanoid arms behind glass) that can cook different selected recipes.
B. Miso Robotics kitchen assistants – burger-flipping robots that detect when burgers reach the desired temperature.
C. Starship – food delivery robots tested in San Francisco, though they received a mixed reception from pedestrians.
D. Robot barista – automated processing of drinks orders in a known environment, trialled in Japan.

Care and Companionship:
A. Care robots: "Stevie the Robot" is a research-project robot that can perform autonomously or be remotely operated by human operators, e.g. for tasks such as reminding the elderly to eat (taking photos as evidence) or to clean up after themselves (detecting when plates are dirty).
B. Elli-Q – keeps older adults active and engaged; the robot suggests activities and can make phone calls for elderly users.
C. Paro robot seal – animal therapy has proven benefits for the elderly, and this robot allows such therapy to be administered in places where a live animal may be inappropriate.
D. Pepper robot – able to assess and react to the perceived emotional state of the individual it interacts with.

Home Management (e.g. Security, Energy):
A. UBTech Walker – can perform basic roaming security duties and other tasks such as calendar and email management.
B. Robotex Avatar III security robot – fully functioning, all-terrain, stair-climbing security robot linked via Wi-Fi for continual monitoring.
C. Appbot Riley – similar to the above; Riley uses Wi-Fi connectivity to stream live video and audio.

Existing Interactive AI (e.g. Home Personal Assistant Devices):
A. Amazon Echo – smart speakers developed by Amazon, using the voice-controlled intelligent personal assistant Alexa; capable of playing audiobooks and music, setting alarms, making to-do lists, streaming podcasts and providing real-time information such as weather and rail times. It can also control several other smart home devices such as heating and lighting.
B. Google Home – similar to the Echo; Google Home speakers enable users to speak voice commands to interact with services through Google's intelligent personal assistant, Google Assistant, offering the same breadth of services as above.
C. HomePod – a smart speaker developed by Apple Inc. that uses Apple's own smart assistant, Siri, to control the speaker and other HomeKit devices; it can connect to Apple products such as iPhones and to services such as iTunes.
D. Invoke – a further iteration of the smart speaker, this time utilising Microsoft's intelligent personal assistant, Cortana, effectively providing the same services to Microsoft users.

Toy Robots:
A. Luka the Owl – reads bedtime stories to children.
B. Pleo robot dinosaur – Pleo uses basic AI to grow and develop from a baby to an adult dinosaur, taking cues from the user to develop a unique personality (within a set of predefined algorithms).
C. CHiP the smart dog – artificial intelligence robot dog with an adaptive personality; responds to an app and wearables.
As is the nature of surveys, participants sometimes omit to answer questions. We have accounted for this in any descriptive statistics presented: any result based on fewer than the full 43 participants is labelled as such, and the percentage is recalculated over the number who answered. Again, this is a small-scale, exploratory pilot study, but it provides useful insights, which we present here on their own, limited terms.
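The recalculation described above can be sketched as follows: when some of the 43 participants skip a question, percentages are reported over the number who actually answered. The answer counts in this sketch are invented for illustration and are not the survey's actual data.

```python
# Illustrative sketch of the recalculation used in the descriptive
# statistics: the denominator is the number of respondents who answered
# a question, not the full sample. The counts below are invented.

def percentages(counts: dict) -> dict:
    """Percentage of each answer among those who answered,
    rounded to one decimal place (as in the reported results)."""
    answered = sum(counts.values())
    return {k: round(100 * v / answered, 1) for k, v in counts.items()}

# Hypothetical question answered by 42 of the 43 participants.
counts = {"yes": 25, "no": 17}
print(percentages(counts))  # denominator is 42, not 43
```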
In addition to the statistics, we also present qualitative quotes and other feedback from our respondents. These provide further context and are sourced from free-form boxes in the survey where respondents could further explain their answers.
Appendix 2. Key survey results
Aarts, E. and Marzano, S. (2003), The New Everyday: Views on Ambient Intelligence, 010 Publishers, Rotterdam.
ACM (1992), “Code of ethics and professional conduct”, available at: https://ethics.acm.org/code-of-ethics/previous-versions/1992-acm-code/
Bartneck, C. and Forlizzi, J. (2004), “A design-centered framework for social human-robot interaction”, International Workshop on Robot and Human Interactive Communication, pp. 591-594.
Bell, G. and Dourish, P. (2006), “Yesterday’s tomorrow’s: notes on ubiquitous computing’s dominant vision”, Personal and Ubiquitous Computing, Vol. 11 No. 2, pp. 133-143.
Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Parry, V., Pegman, G., Rodden, T., Sorrell, T., Wallis, M., Whitby, B. and Winfield, A. (2017), “Principles of robotics: regulating robots in the real world”, Connection Science, Vol. 29 No. 2, pp. 124-129.
Bødker, S. (2015), “Third-wave HCI, 10 years later-participation and sharing”, interactions, Vol. 22 No. 5, pp. 24-31, doi: 10.1145/2804405.
Brown, I. (2015), GSR Discussion Paper – Regulation of the Internet of Things, International Telecommunications Union, Geneva.
Brush, A.J., Lee, B., Mahajan, R., Agarwal, S., Saroiu, S. and Dixon, C. (2011), "Home automation in the wild", Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems – CHI '11, ACM Press, New York, NY, p. 2115, doi: 10.1145/1978942.1979249.
Bryson, J. and Winfield, A. (2017), “Standardizing ethical design for artificial intelligence and autonomous systems”, Computer, Vol. 50 No. 5, pp. 116-119, doi: 10.1109/MC.2017.154.
Calo, R. (2010), “Robots and privacy”, in, Lin, P., Bekey, G. and Abney, K. (Eds), Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, Cambridge.
CES (2018), "LG robot Cloi repeatedly fails on stage at its unveil", BBC News, available at: www.bbc.co.uk/news/technology-42614281 (accessed 23 May 2018).
Cisco (2013), The Internet of Everything, San Jose, CA, CISCO, available at: www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoE_Economy_FAQ.pdf (accessed 24 May 2018).
Clark Thompson, A. (2017), “Bosch made a countertop robot with recipe smarts”, available at: www.cnet.com/products/mykie/preview/ (accessed 24 May 2018).
Coeckelbergh, M. (2012), “Can we trust robots?”, Ethics and Information Technology, Vol. 14 No. 1, pp. 53-60, doi: 10.1007/s10676-011-9279-1.
Coskun, A., Kaner, G. and Bostan, I. (2018), “Is smart home a necessity or a fantasy for the mainstream user? A study on users’ expectations of smart household appliances”, International Journal of Design, Vol. 12 No. 1.
Crabtree, A. and Rodden, T. (2004), “Domestic routines and design for the home”, Computer Supported Cooperative Work: CSCW: An International Journal, Vol. 13 No. 2, pp. 191-220, doi: 10.1023/B:COSU.0000045712.26840.a4.
Crabtree, A., Tolmie, P. and Rouncefield, M. (2012), Doing Design Ethnography, Springer Verlag, London.
Le Dantec, C., Poole, E.A. and Wyche, S. (2009), “Values as lived experience: evolving value sensitive design in support of value discovery”, Proceedings of the SIGCHI conference on human factors in computing systems, ACM, New York, NY, pp. 1141-1150.
Darling, K. (2016), “Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behaviour towards robotic objects”, We Robot Conference 2012, Vol. 3, pp. 60-74, doi: 10.2139/ssrn.2044797.
Dautenhahn, K. (2018), “Human robot interaction”, in The Encyclopedia of Human-Computer Interaction, 2nd ed., Interaction Design Foundation.
Delcker, J. (2018), “Europe divided over robot ‘personhood’ – POLITICO”, available at: www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/ (accessed 24 May 2018).
Edwards, L. and Veale, M. (2017), "Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for", SSRN Electronic Journal, doi: 10.2139/ssrn.2972855.
EU (2015), Eurobarometer 427 on Autonomous Systems, EU, Brussels.
EU (2017), “Special Eurobarometer 460: attitudes towards the impact of digitisation and automation on daily life – datasets”, available at: https://data.europa.eu/euodp/data/dataset/S2160_87_1_460_ENG (accessed 22 November 2018).
European Parliament (2017), Civil Law Rules on Robotics, European Parliament, Brussels.
Flanagan, M., Howe, D. and Nissenbaum, H. (2008), “Embodying values in technology: theory and practice”, in Van Den Hoven, J. and Weckert, J. (Eds), Information Technology and Moral Philosophy, Cambridge University Press, Cambridge.
Friedman, B., Hendry, D.G. and Borning, A. (2017), “A survey of value sensitive design methods”, Foundations and Trends® in Human–Computer Interaction, Vol. 11 No. 2, pp. 63-125, doi: 10.1561/1100000015.
Friedman, B., Kahn, P.H. and Borning, A. (2008), “Value sensitive design and information systems”, in Himma, K. and Tavani, H. (Eds), The Handbook of Information and Computer Ethics, Wiley and Sons, New York, NY.
Gebhard, A. (2018), “Ubtech Lynx review: Alexa in a yogi bot is surprisingly boring”, available at: www.cnet.com/products/ubtech-robotics-lynx/review/ (accessed 24 May 2018).
Gibbs, S. (2017), “The future of funerals? Robot priest launched to undercut human-led rites”, available at: www.theguardian.com/technology/2017/aug/23/robot-funerals-priest-launched-softbank-humanoid-robot-pepper-live-streaming (accessed 24 May 2018).
Goodrich, M.A. and Schultz, A.C. (2007), “Human-robot interaction: a survey”, Foundations and Trends® in Human-Computer Interaction, Vol. 1 No. 3, pp. 203-275, doi: 10.1561/1100000005.
Goulden, M., Tolmie, P., Mortier, R., Lodge, T., Pietilainen, A.-K. and Teixeira, R. (2018), “Living with interpersonal data: observability and accountability in the age of pervasive ICT”, New Media & Society, Vol. 20 No. 4, pp. 1580-1599, available at: https://doi.org/10.1177/1461444817700154
Grimpe, B., Hartswood, M. and Jirotka, M. (2014), “Towards a closer dialogue between policy and practice: responsible design in HCI”, Proceedings of ACM SIGCHI Conference on Human Factors in Computer Systems, (CHI '14.), ACM Press, New York, NY.
Gurman, M. and Stone, B. (2018), “Amazon has a top-secret plan to build home robots”, available at: www.bloomberg.com/news/articles/2018-04-23/amazon-is-said-to-be-working-on-another-big-bet-home-robots (accessed 24 May 2018).
Hadar, I., Hasson, T., Ayalon, O., Toch, E., Birnhack, M., Sherman, S. and Balissa, A. (2018), “Privacy by designers: software developers’ privacy mindset”, Empirical Software Engineering, Vol. 23 No. 1, pp. 259-289, doi: 10.1007/s10664-017-9517-1.
Haggerty, K. and Ericson, R. (2000), “The surveillant assemblage”, The British Journal of Sociology, Vol. 51 No. 4, pp. 605-622, doi: 10.1080/00071310020015280.
Haidegger, T., Barreto, M., Gonçalves, P., Habib, M.K., Ragavan, S.K.V., Li, H., Vaccarella, A., Perrone, R. and Prestes, E. (2013), "Applied ontologies and standards for service robots", Robotics and Autonomous Systems, Vol. 61 No. 11, pp. 1215-1223, doi: 10.1016/j.robot.2013.05.008.
Hern, A. (2017), “Kids should not be guinea pigs’: mattel pulls AI babysitter”, available at: www.theguardian.com/technology/2017/oct/06/mattel-aristotle-ai-babysitter-children-campaign (accessed 24 May 2018).
Higbie, T. (2013), “Why do robots rebel? The labor history of a cultural icon”, Labor Studies in Working-Class History of the Americas, Vol. 10 No. 1, pp. 99-121, doi: 10.1215/15476715-1899057.
Holder, C., Khurana, V., Harrison, F. and Jacobs, L. (2016), “Robotics and law: key legal and regulatory implications of the robotics age (part I of II)”, Computer Law and Security Review, Vol. 32 No. 3, pp. 383-402, doi: 10.1016/J.CLSR.2016.03.001.
Honda (2018), “Honda 3E robot series press release CES 2018”, available at: http://world.honda.com/CES/2018/ (accessed 24 May 2018).
IEEE (1963), “Code of ethics”, available at: www.ieee.org/about/corporate/governance/p7-8.html
International Federation of Robotics (2017), “Executive summary – world robotics (service robots) 2017”, World Robotic Report – Executive Summary, International Federation of Robotics.
Jones, R. (2017), “Roomba’s next big step is selling maps of your home to the highest bidder”, available at: https://gizmodo.com/roombas-next-big-step-is-selling-maps-of-your-home-to-t-1797187829 (accessed 27 July 2018).
Kelion, L. (2018), “CES 2018: LG robot cloi repeatedly fails on stage at its unveil”, BBC Technology, available at: www.bbc.co.uk/news/technology-42614281
Kuri (2018), “Kuri: your companion, assistant, photographer and so much more”, available at: www.heykuri.com/living-with-a-personal-robot (accessed 24 May 2018).
Lee, S.-E., Choi, M. and Kim, S. (2017), “How and what to study about IoT: research trends and future directions from the perspective of social science”, Telecommunications Policy, Vol. 41 No. 10, pp. 1056-1067, doi: 10.1016/J.TELPOL.2017.09.007.
Leenes, R. and Lucivero, F. (2014), "Laws on robots, laws by robots, laws in robots: regulating robot behaviour by design", Law, Innovation and Technology, Vol. 6 No. 2, pp. 193-220, doi: 10.5235/17579961.6.2.193.
Leppänen, S. and Jokinen, M. (2003), "Daily routines and means of communication in a smart home", Inside the Smart Home, Springer-Verlag, London, pp. 207-225, doi: 10.1007/1-85233-854-7_11.
Lessig, L. (2006), Code: Version 2.0, Basic Books, New York, NY.
Luger, E., Urquhart, L., Rodden, T. and Golembewski, M. (2015), “Playing the legal card: using ideation cards to raise data protection issues within the design process”, Proceedings of the ACM CHI’15 Conference on Human Factors in Computing Systems, pp. 457-466, doi: 10.1145/2702123.2702142.
Lyon, D. (2003), Surveillance as Social Sorting: privacy, Risk, and Digital Discrimination, Routledge, New York, NY.
Mäkinen, L.A. (2016), “Surveillance on/off: examining home surveillance systems from the user’s perspective”, Surveillance and Society, Vol. 14 No. 1, pp. 59-77.
Mataric, M.J. (2007), The Robotics Primer, MIT Press, Cambridge, MA.
Millar, J. (2008), “Blind visionaries: a case for broadening engineers’ ethical duties”, 2008 IEEE International Symposium on Technology and Society, IEEE, pp. 1-4, doi: 10.1109/ISTAS.2008.4559780.
Moran, S., Bachour, K. and Nishida, T. (2015), “User perceptions of anthropomorphic robots as monitoring devices”, AI and Society, Vol. 30 No. 1, pp. 1-21, doi: 10.1007/s00146-013-0515-6.
Nissenbaum, H. (2005), “Values in technical design”, in Mitcham, C. (Ed.) Encyclopaedia of Science, Technology and Ethics, MacMillan, New York, NY.
Nissenbaum, H. (2009), Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford Law Books, Stanford University Press, Stanford, CA.
OECD (2013), “Building blocks for smart networks”, OECD Digital Economy Papers, No. 215, OECD Publishing, Paris, available at: https://doi.org/10.1787/5k4dkhvnzv35-en.
Osborne, C. (2016), “Shodan: the IoT search engine for watching sleeping kids and bedroom antics | ZDNet”, available at: www.zdnet.com/article/shodan-the-iot-search-engine-which-shows-us-sleeping-kids-and-how-we-throw-away-our-privacy/ (accessed 24 May 2018).
Panetta, K. (2017), “Top trends in the gartner hype cycle for emerging technologies, 2017 – smarter with gartner”, available at: www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/ (accessed 3 October 2018).
Pagallo, U. (2013), “Robots in the cloud with privacy: a new threat to data protection?”, Computer Law and Security Review, Vol. 29, pp. 501-508, doi: 10.1016/j.clsr.2013.07.012.
Panasonic (2017), “Panasonic demonstrates desktop ‘companion’ robot at CES 2017”, available at: https://news.panasonic.com/global/stories/2017/45856.html (accessed 24 May 2018).
Reeves, S. (2012), “Envisioning ubiquitous computing”, Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems – CHI’ 12, ACM Press, New York, NY, p. 1573, doi: 10.1145/2207676.2208278.
Rodden, T. and Benford, S. (2003), "The evolution of buildings and implications for the design of ubiquitous domestic environments", Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 9-16.
Schafer, B. (2016), “Closing Pandora’s box? The EU proposal on the regulation of robots”, Pandora’s Box, Vol. 2016, pp. 55-68.
Schafer, B. and Edwards, L. (2017), “‘I spy, with my little sensor’: fair data handling practices for robots between privacy, copyright and security”, Connection Science, Vol. 29 No. 3, pp. 200-209, doi: 10.1080/09540091.2017.1318356.
Sellen, A., Rogers, Y., Harper, R. and Rodden, T. (2009), “Reflecting human values in the digital age”, Communications of the ACM, Vol. 52 No. 3, pp. 58-66.
Sharkey, A. and Sharkey, N. (2012), “Granny and the robots: ethical issues in robot care for the elderly”, Ethics and Information Technology, Vol. 14 No. 1, pp. 27-40, doi: 10.1007/s10676-010-9234-6.
Shneiderman, B. (1990), "Human values and the future of technology: a declaration of empowerment", Proceedings of the Conference on Computers and the Quality of Life (CQL '90), pp. 1-6.
Steen, M. (2011), “Tensions in human-centred design”, CoDesign, Vol. 7 No. 1, pp. 45-60, doi: 10.1080/15710882.2011.563314.
Sullins, J.P. (2012), “Robots, love, and sex: the ethics of building a love machine”, IEEE Transactions on Affective Computing, Vol. 3 No. 4, pp. 398-409, doi: 10.1109/T-AFFC.2012.31.
Tolmie, P. and Crabtree, A. (2018), “The practical politics of sharing personal data”, Personal and Ubiquitous Computing, Vol. 22 No. 2, pp. 293-315.
Tolmie, P., Pycock, J., Diggins, T., MacLean, A. and Karsenty, A. (2002), “Unremarkable computing”, Proceedings of the SIGCHI conference on Human factors in computing systems Changing our world, changing ourselves – CHI’ 02, ACM Press, New York, NY, p. 399, doi: 10.1145/503376.503448.
Urquhart, L. (2014), “Bridging the gap between law and HCI: Designing effective regulation of human autonomy in everyday ubicomp systems”, UbiComp 2014 – Adjunct Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, doi: 10.1145/2638728.2638844.
Urquhart, L. (2017), “Ethical dimensions of user centric regulation”, ORBIT Journal, Vol. 1 No. 1, p. 17, doi: 10.29297/orbit.v1i1.14.
Urquhart, L., Lodge, T. and Crabtree, A. (2018), “Demonstrably doing accountability in the internet of things”, International Journal of Law and Information Technology, Vol. 27 No. 1, pp. 1-27.
Urquhart, L. and Rodden, T. (2017), “New directions in information technology law: learning from human-computer interaction”, International Review of Law, Computers and Technology, Vol. 31 No. 2, pp. 150-169, doi: 10.1080/13600869.2017.1298501.
Urquhart, L., Sailaja, N. and McAuley, D. (2017), “Realising the right to data portability for the domestic internet of things”, Personal and Ubiquitous Computing, pp. 1-16, doi: 10.1007/s00779-017-1069-2.
Van den Hoven, J. (2006), "ICT and value sensitive design", in Goujon, P., Lavelle, S., Duquenoy, P., Kimppa, K. and Laurent, V. (Eds), The Information Society: Innovations, Legitimacy, Ethics and Democracy, IFIP International Federation for Information Processing, Springer, Dordrecht.
Verbeek, P.-P. (2006), “Materializing morality: design ethics and technological mediation”, Science, Technology and Human Values, Vol. 31 No. 3, pp. 361-380, doi: 10.1177/0162243905285847.
Von Schomberg, R. (2011), "Prospects for technology assessment in a framework of responsible research and innovation", in Dusseldorp, M. and Beecroft, R. (Eds), Technikfolgen Abschätzen Lehren: Bildungspotenziale Transdisziplinärer Methoden, VS Verlag, Wiesbaden.
Wagner, A.R. (2009), The Role of Trust and Relationships in Human-Robot Social Interaction, Georgia Institute of Technology, Atlanta, GA, p. 283.
Weiser, M. (1993), “Some computer science issues in ubiquitous computing”, Communications of the ACM, Vol. 36 No. 7, pp. 75-84, doi: 10.1145/159544.159617.
Wetmore, J. (2018), "What can we learn about vacuum cleaners from vampires?", IEEE Consumer Electronics Magazine, Vol. 7 No. 2, pp. 103-105.
Williams, M., Nurse, J.R.C. and Creese, S. (2017), "'Privacy is the boring bit': user perceptions and behaviour in the internet-of-things", 15th Annual Conference on Privacy, Security and Trust (PST), Calgary, Alberta, pp. 181-189.
Wilson, C. (2015), “Smart homes and their users: analysis and key challenges”, Personal and Ubiquitous Computing, Vol. 19 No. 2, pp. 463-476.
Winfield, A. (2017), “A round up of robotics and AI ethics, Alan Winfield’s web log”, available at: http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html (accessed 24 May 2018).
van Wynsberghe, A. (2013a), “A method for integrating ethics into the design of robots”, Industrial Robot: An International Journal, Vol. 40 No. 5, pp. 433-440.
van Wynsberghe, A. (2013b), “Designing robots for care: care centered value-sensitive design”, Science and Engineering Ethics, Vol. 19 No. 2, pp. 407-433.
Bartlett, J. (2018), “Will 2018 be the year of the neo-luddite?”, available at: www.theguardian.com/technology/2018/mar/04/will-2018-be-the-year-of-the-neo-luddite (accessed 25 May 2018).
Urquhart, L., Goulden, M., Flintham, M. and Price, D. (2019), "Domesticating data: socio-legal perspectives on smart homes & good data design", in Daly, A., Devitt, S.K. and Mann, M. (Eds), Good Data, handle: 20.500.11820/21892048-c023-403a-8025-62c15d73a48d.
The authors would like to acknowledge the contribution of all project participants and all project activities to the ideas that underpin this paper. The paper was presented at ETHICOMP 2018, and the authors thank participants for their comments.
Funding: The research benefitted from the activities undertaken in: the “Moral-IT: Enabling Design of Ethical and Legal IT Systems” project as part of the Horizon Digital Economy Research Institute (EPSRC Grant EP/M02315X/1); RCUK Horizon Centre for Doctoral Training (EPSRC Grant EP/G037574/1).
Copyright: Copyright remains with the authors. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.