Robotopias: mapping utopian perspectives on new industrial technology

Purpose – This paper maps utopian theories of technological change. The focus is on debates surrounding emerging industrial technologies which contribute to making the relationship between humans and machines more symbiotic and entangled, such as robotics, automation and artificial intelligence. The aim is to provide a map to navigate complex debates on the potential for technology to be used for emancipatory purposes and to plot the grounds for tactical engagements. Design/methodology/approach – The paper proposes a two-way axis to map theories into a six-category typology. Axis one contains the parameters humanist–assemblage. Humanists draw on the idea of a human essence of creative labour-power, and treat machines as an alienated and exploitative form of this essence. Assemblage theorists draw on posthumanism and poststructuralism, maintaining that humans always exist within assemblages which also contain non-human forces. Axis two contains the parameters utopian/optimist; tactical/processual; and dystopian/pessimist, depending on the construed potential for using new technologies for empowering ends. Findings – The growing social role of robots portends unknown, and maybe radical, changes, but there is no single human perspective from which this shift is conceived. Approaches cluster in six distinct sets, each with different paradigmatic assumptions. Practical implications – Mapping the categories is useful pedagogically, and makes other political interventions possible, for example interventions between groups and social movements whose practice-based ontologies differ vastly. Originality/value – Bringing different approaches into contact, and mapping differences in ways which make them more comparable, can help to identify the points of disagreement and the empirical or axiomatic grounds for these. It might facilitate the future identification of criteria to choose among the approaches.


Introduction
Capitalism is punctuated by crises, most recently the 2008 financial crisis. Modernisation discourses and practices, portending a historical recomposition of capital in labour-saving/enhancing technologies and environmental techno-capitalism, abound in scientific and policy literatures and mainstream culture, under the terminology of the "fourth industrial revolution" (Schwab, 2017). The technical emphasis in this "new" paradigm appears to be on automation technologies, for example robots encompassing cognitive functions (e.g. natural language processing), artificial intelligence and machine learning, and algorithms mimicking management functions (Wang et al., 2017) and even social and emotional labour (Breazeal, 2002; Kerruish, 2016). However, some aspects emphasize communication and digital connectivity between humans, technology and nature through big data, wearable devices, "smart cities," "smart factories," and "the Internet of things", and human augmentation through nano- and bio-technology (Zanella et al., 2014). There is apparent conflict between desires to replicate/replace human labour through technology, and desires to render the relationship between humans, machines and nature more symbiotic through hybrid connections (Romero et al., 2016). Academic social theories address shifts in the sociotechnical. They mobilise empirical (based on observation/analysis of reality) and axiomatic (based on normative assumptions about the nature of reality) claims. We contend that divides in these literatures are often cast as hegemonic ontologies or two-way splits, binarizing difference into opposition. For example, Marxists have accused critical posthumanists, who seek to empirically understand entangled human-technology relationships, of normative complicity in reproducing their specific forms under capitalism (Rikowski, 2003, pp. 121-123), whilst posthumanists have bundled Marxists together with (neo)liberals, proclaiming their mutual complicity in the modernist project of patriarchal essentialism and human exceptionalism, despite incompatible views of what constitutes the human (Braidotti, 2013, p. 19). This divisiveness reflects polarisation in mainstream culture and radical social movements and oversimplifies discussions of accelerating capitalist sociotechnical change and ecological destruction (Dale, 2019).
This article crafts a six-cluster typology of perspectives on shifting human-technology relationships along two axes. Along the first axis, we complexify the debate through attention to utopianism, understood as encompassing investments and articulations of affect and desire in the present and/or future intentionality (Garforth, 2009). Optimists invest new technologies with utopian, miraculous or revolutionary potential, with dangers or costs seen as manageable. Pessimists invest them with fears of a dystopian future, portraying a trend towards greater control, alienation, ecocide, and other unwanted outcomes. Between these are strategic or tactical authors, who emphasise the socioeconomic systems or assemblages within which technologies are deployed as what renders them wonderful or harmful. The second axis divides humanist and assemblage theories, distinguished by the ontological primacy attached to humans, or else to the assemblages or relations within which actors are situated. We take "humanist" to encompass a variety of positions, from belief in an essential human nature or special human creative power, to teleological ideas of a human calling. Assemblage theories are associated with posthumanists, transhumanists and others who critique human essence and view "technology as a trait of the human outfit" (Ferrando, 2013, p. 28). Humans are embedded in wider assemblages containing nonhuman components (e.g. machines), and ideas of "Man," "the human" or "the individual" are results of contingent assemblages. Assemblage theorists may still judge assemblages in normative and imaginary terms, in relation to the affects or social effects they produce.
The clusters do not express unified political positions, nor will we critique them from a singular position, aside from noting our anti-authoritarianism. All clusters can produce authoritarian theories, but some are more prone to it than others. Nor do we associate the clusters with particular historical periods, though they tend to be temporally clustered. Political and historical mappings are important (see Ferrando, 2013, 2014) but beyond our scope. Our purpose is closer to Deleuze's "problem-field" (Deleuze, 1994). Each cluster is an abstract machine, with concepts as components, arranging percepts and sense in particular ways, making some things visible and thinkable, and obscuring others. When a problem field establishes axioms, it tends to ignore or anathematise positions which deny the axiom (Deleuze, 1994, pp. 108-112, p. 268), but many of these are in principle empirical-type claims and are treated empirically in other clusters. Particular problems arise from a cluster which are not solved within its concepts, leading to lines of interest, or what the theory is trying to do.
It is possible (even desirable) to have attachments to more than one problem-field. Mapping the categories is thus useful pedagogically, and makes other political interventions possible, for example the "insurgent training" that Nold associates with "ontological interventions" (Nold, 2020) between groups whose practice-based ontologies differ vastly. Policy-oriented research is dominated by humanist-optimists, whereas Science and Technology Studies (STS) is dominated by assemblage-optimists and assemblage-tacticians. Humanist-strategists are strong in social sciences, while humanist-pessimist and assemblage-tactical approaches are common in technology-related activism. These different approaches often ignore or speak past one another, leading to a lack of interperspectival learning. Bringing different approaches into contact and mapping their differences can identify disagreements and (empirical or axiomatic) grounds for these (see Figure 1).

Humanist-optimist
Humanist-optimists invest emerging technologies with hopes for desired utopian futures. Most are enthusiastic about existing economic institutions, conflating these with technological progress. Writings are replete with metaphysical talk of "miraculous" changes (Weise et al., 2018), humans obtaining god-like powers, or the attainment of immortality (Kelly, 1994). Historically, they often emerge during periods of accelerated technological innovation, believing currently fashionable sciences (e.g. cybernetics) index foundational levels of life and matter, as in Dennett's (2004) view that humans, animals and robots are basically similar. Since living creatures are machine-like, there are few ethical or practical barriers to AI, artificial life, biological manipulation, or Human-Robot Interaction (HRI). Neither mechanising humans nor anthropomorphising machines is necessarily fallacious.
There is a long tradition of imagining machines as sources of unlimited wealth and/or ways around the messy relationship between capital and labour (Wendling, 2009, pp. 68-69; Dyer-Witheford, 1999, p. 3), ranging from Babbage in the nineteenth century, through Bell, Brzezinski, Drucker, and Wiener in the postwar era, to the "Californian Ideology" (Barbrook and Cameron, 1995) of the 1980s-1990s. Opposition to technological change is dismissed as resistance to progress. Fundamental economic shifts are held to make knowledge, innovation and information the main sources of wealth (Dyer-Witheford, 1999, pp. 23-26). Humanist-optimists downplay risks of unemployment and machinic enslavement, suggesting work will become more creative, cognitive and autonomous (Reich, 2000; Buterin, 2013; Licklider, 1990; Beer, 1959). Many humanist-optimists use an actor-tool model whereby technologies are basically neutral: negative consequences stem from human misuse. This is compatible with concerns about "technical developments with great possibilities for good and evil" (Wiener, 1948, p. 28). There is often substantial faith in unknown futures, or invocations for leaders to step up and realise their utopian potential.
Within humanist-optimism, there is a division between the speculative utopias of transhumanism and the more mundane, problem-solving research of "policy relevant" humanist-optimists. Much (para-)academic research falls into the latter subset (for critical discussion see Plows and Reinsborough, 2011; Gupta et al., 2019). Robots, AIs and HRI provide human benefits including efficiency, reduced drudgery, and even inclusivity and sustainability. Dangers are largely technical, subject to techno- or edu-fixes within a neoliberal framework. Hostile AI, for instance, is a matter of avoiding programming errors and human malice (Sotala and Yampolskiy, 2015), and for Kurzweil (2005, p. 420), capitalist markets provide optimal conditions for friendly AI.
At the more utopian end, transhumanists and extropians promote "the belief that we can, and should ... overcome our biological limits by means of reason, science and technology", augmenting humans using new technologies (Ouroboros, 1999, p. 4). Transhumanists also "favour reason, progress, and values centered on [human] well being"; however, they see humanity as a "transitory stage" towards a "transhuman or posthuman condition" (More, 1994, p. 1). One branch focuses on the "Singularity": a point at which AI surpasses human intelligence. While this is recognised as posing existential dangers to humans, it is a means by which humanity transcends itself, or fulfils humanity's destiny on an evolutionary ladder. Posthumanists generally characterise transhumanism as liberal humanist, hubristic, dualistic, and "classist and technocentric" (Ferrando, 2013, p. 28; cf. Graham, 2004).

Humanist-strategic
Humanist-strategists maintain that robots and machines can harm or benefit humans, depending on the socioeconomic assemblages they are embedded in. Most are Marxists or other leftists, and socioeconomic rather than technological determinists. Many embrace assemblage theories like situatedness and human-machine interchange. However, human labour retains a special place as the source of creativity, progress or value. There is no fixed human nature, but humanity has an autopoietic power of labour/creation (Wendling, 2009, p. 140). Humanist-strategists seek to use (alienated, but not ontologically autonomous) machines for human-directed goals. Machines are congealed human labour or knowledge, employed as "fixed capital" owned by capitalists, only seeming like an autonomous force (Marx, 1973 [1857]; Marx, 1990 [1867], p. 508; Wendling, 2009, p. 67).
Negative effects of automation within capitalism include unemployment, subordination of workers to machines, and a range of psychological and physical harms (e.g. Marx, 1990 [1867], pp. 544-545). Potential positive effects include reduced drudgery, increased social wealth, and satisfaction of human needs. The potential of machines will be redeemed in communism (Marx, 1973 [1857], p. 706). Marxists generally oppose humanist-pessimist "Luddism" as well as optimists' desocialised technophilia (e.g. Rikowski, 2003, pp. 159-160; Wark, 2004, p. s246). Although humanist-strategists agree that technology is socially mediated and ambivalent in its effects, they disagree about which current technologies are reappropriable for strategic or postcapitalist use. Some technologies (e.g. the blockchain, filesharing) are evaluated positively. Technology ownership is also a recurring issue.
Humanist-strategists range from relatively optimistic to relatively pessimistic. Optimistic advocates of accelerationism and Fully Automated Luxury Communism see the possibility of total automation and postcapitalism. Accelerating technological tendencies will destroy capitalism and produce socialism through full automation and a universal basic income (UBI) (Srnicek and Williams, 2015; Mason, 2015). This perspective embraces most emerging technologies, including cyborg augmentations, artificial life, biotechnology, automation, and econometric modelling (Srnicek and Williams, 2015, pp. 82, 144; Reed, 2014, p. 529).
Post-autonomists see more extensive changes in capitalism than other Marxists, e.g. information society as a new type of exploitation of specifically cognitive labour (Dyer-Witheford, 1999, p. 94). New technologies under capitalism are generally harmful, but contain progressive potential, as sources of abundance in a future liberated society, and as means of recomposition and social struggle. For Berardi, "semiocapitalism" reduces individuals to fragments plugged into automatic systems (2016, pp. 214-218). Terranova (2004, pp. 100, 112-115, 118) sees cybernetic systems as systems of soft control (rather than dispersed networks), altering initial conditions and Darwinian selection among outcomes, bringing distributed systems under control. People are fragmented into emotionally reactive units, then managed through aggregate statistical probability (Terranova, 2004, pp. 20, 123). Terranova encourages resistance within and against this field, including refusals and strategic uses (Terranova, 2004, p. 128).
Open Marxists are often more pessimistic, seeing technology as a means through which capital encloses people and enforces their reduction to abstract value. Kleiner (2016, pp. 63-68) theorises a strategic battle between systemic forces of enclosure (which impose work) and practices of escape, which today focuses on intermediation in virtual networks. Some (e.g. Pitts and Dinerstein, 2017) favour DIY initiatives to create a concrete instead of abstract utopia. Cooperativists generally seek a lower-technology socialism focused on direct worker control and management of production; some see platforms and the gig economy as an opportunity to expand cooperativism, which "has the potential to be wildly democratizing" (Sipp, 2016, p. 60; cf. Rushkoff, 2016, p. 33) despite current exploitative tendencies.

Humanist-pessimist
Humanist-pessimists see emerging technologies as threats to vital human values. Most are radical ecologists, with some romantic conservatives, anarchists, and craft socialists also in this cluster. There is a longstanding critique, found in Heidegger (1977 [1954]), Marcuse (1962 [1941]) and Adorno (2002 [1977]), that because science and technology are purely instrumental, they are corrosive of qualitative, subjective, immanent or expressive meaning. In a technologically alienated world, people become disconnected from others and nature, dependent on technology and social hierarchies, and lose autonomy. Technology is addictive, making users dependent. Psychological welfare and life-goals are threatened, along with ecosystems which matter inherently and as bases for human survival. Humanist-pessimists generally advocate degrowth, human-scale communities, and engagement in meaningful life-activity and self-actualisation (Kallis, 2017; Chamberlin, 2009). Technology as such, or its malevolent subclass, is considered part of a general system or "megamachine" (Mumford, 1986, p. 321). Technologies are not simply tools. Such accounts embed a strong technological determinism. Either technology and tools are radically differentiated (e.g. Zerzan, 1997; Gorrion, 2012), or technology is bisected. At some point, technology develops inhuman agency and harms humans' autonomy, or else expresses humanity's self-alienation. Zerzan (1997, p. 1) views technology as essentially bad, and as underpinning hierarchies like gender and division of labour. Computers are the latest stage in making people "dependent on the machine for everything" (Zerzan, 2012, p. 92), while AI and robotics will render humans unnecessary (Zerzan, 2012, p. 101). Virtual reality "takes representation to new levels of self-enclosure and self-domestication" (Zerzan, 2008, p. 3). For Perlman (1983, p. 46) civilisation is a force of death, reducing "human beings to things".
For Winner, the benefits of technology come at the cost of vulnerability to high-cost disruptions, and the likely unbearable policing demands that result (Winner, 1986, p. 319). Kingsnorth (2015) distinguishes between addictive technologies which require the entire industrial economy to function, and simple tools which do not.
Referencing robotics, Illich depicts even simple machines as "energy slaves" (Illich, 1973, p. 14) dangerously substituting or supplementing human energy inputs, introducing unequal power (Illich, 1973, p. 26; Illich, 1974). Machines stem from an earlier desire for a "laboratory-made homunculus [that] could do our labor instead of slaves" (Illich, 1973, p. 20). This fails to overcome the master-slave relation (Illich, 1973, p. 20). Humans must then be educated to work alongside homunculi, and thus, subordinated to tools (Illich, 1973, p. 30). Illich's followers provide criteria and typologies for convivial technology (Kostakis et al., 2016; Prieur, 2011; Gordon, 2009), generally focused on avoiding ecological harms, encouraging user autonomy and egalitarian and participatory societies, and providing "meaningful" work. Voinea (2018, p. 76) provides criteria of flexibility, transparency, simplicity/usability, sharedness, creativity and sociality. This raises questions about whether advanced, anthropomorphised technologies (e.g. robotics) can be convivial, with some arguing that robots are likely to become dependency-forming and substitute for human skills in a wider "attack on autonomy" (Anon, 2018, p. 10). Humanist-pessimists rely on many of the same empirical-type claims as humanist-optimists, for example the idea that the "technium" has grown beyond human control. However, the affective investments and normative imperatives are opposed: it is "rearranging itself round us now like a prison" (Kingsnorth, 2015, p. 39), threatening "the abolition of human nature" (Illich, 1973, p. 41). "Human nature" here is axiomatic: Harvey (2015) argues that humans are becoming robotised through media exposure, breaking down complex selves and capacities like concentration and empathy.

Assemblage-optimist
While humanist-optimists value technological development as empowering or augmenting humans, assemblage-optimists value becoming-other and the decentring of supposedly immanent binaries in technological/cybernetic assemblages. Assemblage-optimists tend to scorn modernity, within which "humanism", "the human", "reason" and "the subject" are necessarily enmeshed. Oppressions and violence are rooted in presubjective linguistic binaries which elevate humanity over Derridean Others. On a Derridean view, technology is the subordinate, supplementary term in a binary co-constituted with the "human", and can therefore be mobilised to disrupt the privileged term. In posthumanism, "[t]he cyborg, the monster, the animal... are... emancipated from the category of pejorative difference" (Braidotti, 2011, p. 68).
New materialism is sometimes treated as a posthumanist subtype and sometimes as a distinct approach. It places greater emphasis on embodiment (rather than textual constructivism), generally conceiving matter as a process of "materialization" (Ferrando, 2013, p. 30), or becoming within a monistic-holistic field. New materialism encompasses authors like Braidotti; Karen Barad (2003), who argues reality is produced through a necessary splitting of the holistic field by observers; and Jane Bennett (2010), who holds that inorganic entities exhibit vital force and should be ethically valued as agents. Like posthumanists, new materialists place strong emphasis on human embeddedness in the world (Barad, 2007, p. 185). Metahumanism effectively rebrands posthumanism, sharing a focus on "an unquantifiable field of relational bodies", technogenesis, critique of humanism and binaries, and inbetweenness as subversion of modernity/capitalism (del Val and Sorgner, 2011).
Posthumanists value emerging technologies for their disruption of "the human". Humans are effects of assemblages: becoming "posthumans" or "cyborgs" when placed in different assemblages (e.g. Haraway, 2015 [1985]; Kaloski, 1997). Resistance to this is taken as a reactionary clinging to order and an attempt to keep collapsing binaries intact. Hence, for Carroll (2003) objections to eBooks are reactions to "a threat or disruption" to a "habituated behavior" of reading which is itself socially conditioned. Becoming posthuman may entail renouncing human agency, reducing it to a moment of reflection, or simply making it more situated. When becoming posthumans, we undergo changes perhaps including a more relational worldview, constant awareness of connectedness, an other-centric ethics of accountability, loss of fears of new technologies and social changes, and in some cases a passive rather than active stance towards the world. Donna Haraway rejects boundaries among humans, animals and machines on an assemblage-theoretic basis. People are always-already cyborgs, enmeshed in "messy" interactions inside nonhuman systems (Haraway, 2003, p. 181). While Haraway now renounces the label "posthumanist" (2015, p. 161), she still writes of people as cyborgs. In her earlier work, she defined the human as an effect of binaries splitting "Man" from nature, resting on "othering" of animals, robots, nature, women, etc. (Haraway, 2015 [1985], pp. 28-30). Cybernetics leads outside such representational hierarchies. Monsters and cyborgs are valued for their transgression of binaries. Haraway's early work is driven by an ethicopolitical duty to cyborgise as part of a Manichean battle against "modernity". In her more recent work, "modernity" is taken to have collapsed in ecological crisis, the main task now being to recreate refuges for biodiversity and "make kin" (2015, p. 162).
Posthumanist-optimists generally endorse or assume cybernetics or close relatives (e.g. complexity theory and systems theory). Braidotti insists on the "primacy of intelligent and self-organizing matter" over human agency (2019, p. 31). Machines subvert binaries, producing direct relations (2013, p. 57); cybernetics is post-representational (Braidotti, 2013, p. 59). Hayles embraces a vision of complexity as recursive reapplication of simple rules to simple nodes (1999, p. 285). Humans are to relinquish control to the distributed cognition of automated systems (Hayles, 1999, p. 288). Wolfe seeks a "self-referential autopoiesis" in which self-referential closure of individuals enables social determination and thus systemic complexity (Wolfe, 2010, p. xxi). Some posthumanists emphasise the distinctness of human embodiment, which is both like and unlike machines (Hayles, 1999, pp. 283-284; Wolfe, 2010, p. xxiii; Ferrando, 2013, p. 32). However, their "body" tends to be a relational node in cybernetic networks, not an inner self or material substance.
An exception is Land (1993), who values cybernetic systems because they slip out of human control. Cybernetics began as a control project, but has spun out of control (Plant and Land, 2014, p. 305). Today's political landscape therefore pits viral machinic forces against the "phobic resistance" and immunopolitics of reactionary humans (Land, 2014, p. 256). As viral forces triumph, "Humanity recedes like a loathsome dream" (Land, 2014, p. 261). People become hyper-diverse, fragmentary individuals, exceeding human limits and crossing identity-boundaries in a kind of cyberpunk utopia (Land, 1997, p. 456). Critics object that even ethicised posthumanist-optimism eliminates human agency (Fuller, 2000, p. 26; Nold, 2020). Similarly to humanist-optimists, assemblage-optimists (including some posthumanists) give technology agency in itself, channelling their desires through it. However, some posthumanists take a more critical and political stance, as outlined below.

Assemblage-tactical
Assemblage-tactical approaches use assemblage models but seek to maintain human agency. Most term themselves tactical rather than strategic, aligning with de Certeau's (1988 [1980]) association of strategy with rigid hierarchies and tactics with micropolitical everyday resistance. Tactical media is based on de Certeau's theory and involves bricolage and detournement of technologies outside their usual assemblages. It refers to bottom-up, flexible, hybrid and provisional approaches (Garcia and Lovink, 2008). Assemblages are assessed in terms of how far they provide joyous experiences, empower individuals/collectives, equalise power, produce cooperation, etc. Technology is not neutral, but its impact depends on the assemblage it is part of. So some technologies can be reclaimed, hacked and repurposed, track-jumping into more emancipatory assemblages. This selectivity distinguishes them from assemblage-optimists. They usually prefer an active "hacker" over a passive "consumer" role and emphasise participation (Richardson, 2003, pp. 347-350; Westerkamp, 2003, p. 261).
Poststructuralism underpins assemblage-tactical positions. For Deleuze and Foucault, "there is no need to uphold man in order to resist" (Deleuze, 1988, p. 92). Everything is part of assemblages, but resistance is possible based on desires, lines of flight, or forces of life entrapped within assemblages. Certain non-humanist machines are preferable (schizorevolutionary or active power for Deleuze; aesthetic self-constitution and self-care for Foucault). "The question concerns the forces that make up man: with what other forces do they combine, and what is the compound that emerges?" (Deleuze, 1988, p. 73). Cybernetics is treated as a disempowering economic machine fragmenting and recomposing labour (Deleuze, 1988, p. 131) or as channelling flows (Deleuze and Guattari, 1987, pp. 510-512), sometimes leading to machinic enslavement (see below). DeLanda (1991, p. 3) writes as a "robot historian", focusing on machinic agency. He conceives the human-robot relation as symbiotic, with machines using humans for propagation. Preciado both opposes the disciplinary control and manipulation of techno-bodies, and celebrates the gender-bending, experimentation, and "molecular revolution" generated by the current "pharmacopornographic biocapitalism" (Preciado, 2013 [2008], pp. 325, 166).
"Contestational robotics" applies tactical media theory to robot design and HRI, encouraging tinkerers to create simple robots or drones for purposes such as infiltrating securitised sterile zones inaccessible to human protesters, and spreading pamphlets. Its proponents call for "continuous development of tactics to reestablish a means of expression and a space of temporary autonomy within the ... social" in the context of "advanced surveillance capabilities" (Critical Art Ensemble and Institute for Applied Autonomy, 2001, p. 115). Hacker accounts suggest empowering affects within certain technosocial assemblages, relative to everyday life. The Hacker Manifesto (The Mentor, 1986) depicts everyday life as a hellhole of status-competition, apathy, authoritarianism and sadism, with computers freeing the hacker. Hacker culture is often portrayed as a subversive counterculture involving gift economy and an ethos of sharing, non-instrumentality, and information freedom (Levy, 1984; Stallman, 2015; Reimens, 2002; Dasgupta, 2003, pp. 335-356), though some suggest it is recuperated in neoliberal cyberculture (Turner, 2006). Pirate radio, meshnets, FLOSS, anonymity software, hacklabs, and hacktivism fit into this model. STS scholars today are influenced by poststructuralism, and use assemblage approaches bridging the optimist and tactical clusters. Examples include the Anglo-Foucauldian school, Actor Network Theory (ANT) and Object-Oriented Ontology (OOO). ANT explores relations in a network, foregoing explanation (Law, 2004, p. 157; Latour, 2005). It sees knowledge production and practice as world-changing interventions, and rejects strong social constructivism, technological determinism, and essentialism (Law, 1991, p. 8; Nold, 2017, p. 24). Robots, like humans, are "actants".
ANT's flat ontology tends towards assemblage-optimism and cybernetic soft power because of the difficulty of motivating choices among assemblages, its embrace of manipulative power, and its de-emphasis of human agency (Harman, 2018, pp. 136-139). However, ANT is critical of rendering the conditions of scientific/technological production invisible ("blackboxing") (Latour, 1999, p. 314). De-blackboxing technology is consistent with assemblage-tactical ideas of hacking and power within networks.
OOO (Harman, 2018) emphasises human humility before objects, particularly "hyperobjects" which are too big to know or control (Morton, 2013). While Harman's politics is reformist or quietistic, Bryant's discussion of eruptive rogue objects, including new technologies (Bryant, 2012), is more clearly assemblage-strategic. Anglo-Foucauldians like Rose et al. both argue for an assemblage ontology (2016, pp. 1, 3, 9) and call for humans to be "at the centre" of new technologies, and augmented rather than replaced (Rose et al., 2016, p. 3).
There is also a group of critical/radical posthumanists in the assemblage-tactical cluster, combining posthumanist, new materialist and ANT ideas with radical politics; embracing criticisms of mainstream posthumanism while salvaging the name and ontology, and endorsing selection among technologies and assemblages. Nold criticises authors like Latour and Bennett, arguing that some objects should be excluded from social collectives, e.g. nuclear reactors and surveillance systems (Nold, 2020). He calls for a "pragmatic coalition of human and nonhuman agencies" making targeted interventions (Nold, 2020). Cudworth and Hobden endorse posthumanism in the sense of assemblage enmeshment and antihumanism (Cudworth and Hobden, 2018, pp. 5, 8), an ethics of responsibility, rejection of purity (Cudworth and Hobden, 2018, p. 156), and an affinity for complexity theory (Cudworth and Hobden, 2018, pp. 13-15). However, they criticise the emphasis on entanglement rather than power (Cudworth and Hobden, 2018, p. 14). They posit selection as an ethical imperative to equalise and reduce harm. This involves lower-impact living, vegetarianism, everyday conviviality, anti-capitalism, and bottom-up community (Cudworth and Hobden, 2018, pp. 137, 146-147, 151-152). Anarchists using cybernetics similarly try to unpack decentralising and self-organising aspects from the focus on control (Swann, 2018; Beer, 1959; Duda, 2013; Goodman, 2010).
Assemblage-tacticians are more prepared than assemblage-optimists to denounce cybernetic power. Can robots be used within empowering assemblages? Contestational robotics suggests they can, as demonstrated by the robotic distribution of illegal abortion pills at a recent Irish protest (Press Association, 2018). Assemblage-tacticians face a similar problem to humanist-pessimists: determining which technologies are tactically progressive. By their implied criteria, a good use disrupts dominant systems, empowers disempowered people, and feels empowering. One problem in HRI is how to tactically "use" strong AI without being used back, in the manner Galloway (2004) suggests already happens with ostensibly empowering protocols.

Assemblage-pessimist
Assemblage-pessimists are a smaller cluster of poststructuralists and anarchists who do not object to assemblages in principle, but believe present dominant assemblages are broadly disempowering, alienating, immiserating, and fragmenting. Assemblage-pessimists worry about elite control of new technologies, heavy military and surveillance use, the emergence of alien machinic perspectives, and a cluster of social, ecological, psychological and physical harms.
Assemblage-pessimists focus on "machinic enslavement": humans become cogs within social machines, or subordinate nodes in computer networks. Hence, "human beings... are constituent pieces of a machine that they compose among themselves and with other things (animals, tools), under the control and direction of a higher unity" (Deleuze and Guattari, 1987, pp. 456-457). Flows of desire are presubjective and not distinctly human. Social assemblages are assessed in terms of whether they subordinate, or are subordinate to, flows of desiring-production (Deleuze and Guattari, 2004, p. 8). The "higher unity" distinguishes enslaving systems from free systems. Machinic enslavement "makes desubjectivized flows and fragments" and then "turns those subjects into component parts of machines (slave units in the cybernetic sense)" (Wark, 2017, p. 51). Personal aspects of life are thus rendered irrelevant. "Whether you are happy, whether you stutter, whether you are afraid of death or of old age – all this counts for nothing [. . .] On the contrary, it inconveniences. It makes too much 'noise' in the sense of information theory" (Guattari, 1996, p. 137). Galloway (2004, 2012) understands the Internet through Deleuze's control society theory, with interfaces operating as mechanisms of control. Media are metonymical rather than indexical, with objects reduced to classes and thereby blackboxed (Galloway, 2012, p. 9). Objects are manipulated to form a world (Galloway, 2012, p. 23). Baudrillard (1994, p. 81) sees the cybernetic order – the "code" – as seeking "absolute control". Reality is fragmented into "simple elements" rearranged into binary oppositions and segmented performances, on the model of surveys (Galloway, 2012, pp. 83-84). Feedback systems are closed and tautological, lacking affective force, meaning, and reversibility (Galloway, 2012, p. 78), causing human suffering, and taking to its implosive conclusion the denial of symbolic exchange.
Cybernetics is seen as a conservative system, designed to suppress chaotic flows or capture data. The Invisible Committee see cybernetics as a conservative strategy to 'impede the spontaneously entropic, chaotic movement of the world' (Invisible Committee, 2014, p. 3). Gorrion (2012) sees apparatuses domesticating and containing underlying ontological chaos, through mechanisms of capture and machinic enslavement. Apparatuses are used strategically by human controllers, but tend to condition strategisers as well as users, making humans increasingly robot-like.

Conclusion
The growing social role of robots and human-technology symbiosis portends unknown, potentially radical, changes, but there is no single human perspective on this shift. Approaches largely cluster in six distinct sets, each with different paradigmatic assumptions. It is important to develop ways to test the claims made by different approaches, instead of remaining within group-specific tautologies, and to disembed the axioms masquerading as facts which do much of the work in several clusters. The purpose of this typology has been to elucidate a diversity of problem fields, explore paradigmatic assumptions and explanatory strength, and to signal some of the philosophical and political work these perspectives can do.
Humanists are prone to essentialism and human exceptionalism, which can exclude non-ideal humans, non-human beings and nature. Assemblage theories are prone to relativism. In combination with utopian optimism or dystopian pessimism, perspectives can suggest deterministic or nihilistic attitudes to emerging technologies. The strategic and tactical clusters open up more possibilities for political agency. There is also an element of futurology in mapping the field, because epistemologies prefigure technology creation. It seems likely that we are approaching a new reality in which old categories fail, as in an experiment described by Kuhn (1962, pp. 62-64; Bruner and Postman, 1949), where viewers briefly shown anomalous playing cards depicting black hearts could not discern them, instead labelling them red hearts or black clubs. Only with familiarity and longer exposure did the black hearts become visible. Human-robot relations may be the black hearts of our time, with traits from different models – the benevolent tools of humanist-optimists, the dangerous inhuman systems of humanist-pessimists, the congealed labour of humanist-strategists, and the extimate others of assemblage theory – combined in as yet unknowable ways.