Abstract
Purpose
This paper aims to highlight the ethical implications of the adoption of Fourth Industrial Revolution (4IR) technologies, particularly artificial intelligence (AI), for humanity. It proposes a virtues approach to resolving ethical dilemmas.
Design/methodology/approach
The research is based on a review of the relevant literature and empirical evidence for how AI is impacting individuals and society. It uses a taxonomy of human attributes against which potential harms are evaluated.
Findings
The technologies of the 4IR are being adopted at a fast pace, posing numerous ethical dilemmas. This study finds that the adoption of these technologies, driven by an Enlightenment view of progress, is diminishing key aspects of humanity – moral agency, human relationships, cognitive acuity, freedom and privacy and the dignity of work. The impact of AI algorithms, in particular, is shown to be distorting our view of reality and threatening democracy, in part due to the asymmetry of power between Big Tech and users. To enable humanity to be masters of technology, rather than controlled by it, a virtues-based approach should be used to resolve ethical dilemmas, rather than utilitarian ethics.
Research limitations/implications
Further investigation is required to provide more empirical evidence of the harms to humanity of some of the 4IR technologies cited, such as the effects of virtual and augmented reality, manipulative algorithms and toy robots on children and adults, and the reality of re-skilling where jobs are lost through automation.
Practical implications
This paper provides a framework for evaluating the impact of some 4IR technologies on humanity and an approach to resolving ethical dilemmas.
Social implications
Most of the concerns surrounding 4IR technologies, and in particular AI, tend to focus on human rights issues. This paper shows that there are other significant harms to what it means to be a human being from 4IR technologies that will have a profound impact on society if not adequately addressed.
Originality/value
The author is not aware of any other work that uses a taxonomy of AI applications and their different impacts on humanity. The proposal to use virtues as a means to resolve ethical dilemmas is also novel in regard to AI.
Citation
Peckham, J.B. (2021), "The ethical implications of 4IR", Journal of Ethics in Entrepreneurship and Technology, Vol. 1 No. 1, pp. 30-42. https://doi.org/10.1108/JEET-04-2021-0016
Publisher
Emerald Publishing Limited
Copyright © 2021, Jeremy Burford Peckham.
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
The term Fourth Industrial Revolution (4IR) was coined by Klaus Schwab and popularized in his book of the same title, published in 2016. In that same year, the World Economic Forum, founded by Schwab, chose "Mastering the Fourth Industrial Revolution" as the theme of its Davos gathering. The term, often abbreviated to 4IR or referred to as Industry 4.0, is now in widespread use in business and political institutions. In his original article, published in Foreign Affairs, Klaus Schwab observes that:
We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.
It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres (Schwab, 2016).
The first Industrial Revolution started in Britain around 1760 with the development of water and, later, steam power, used to drive mills and eventually to create railways with steam-powered engines. This brought about a revolution in manufacturing by increasing productivity, and it also improved the transportation of both goods and coal, the essential fuel behind steam. The development of electricity around 100 years later brought about a second revolution in industry, eventually allowing more convenient sources of power to automate production. Electricity also crept into homes, becoming widespread as a means of heating and lighting. The third Industrial Revolution was heralded by the rise of the computer in the 1960s. From it arose a whole new industry of electronics and Information Technology, whose usefulness and adoption accelerated with the arrival of the World Wide Web.
The Fourth Industrial Revolution, hereinafter referred to as 4IR, is less about the invention of some new technology, as happened with steam power and electricity. Rather, it is about the unprecedented speed with which a wave of technologies is coming together, disrupting established business and manufacturing practices. It is also about the disruption that the adoption of new business models has already brought about, what Zuboff has called "surveillance capitalism" (Zuboff, 2019). These disruptions have far-reaching consequences, not just for business and commerce but for society as a whole. It is for this reason, along with a desire to remain competitive, that national governments have taken a keen interest in 4IR.
In this paper, we will outline the main technologies that are contributing to 4IR and the ethical issues that are arising from their deployment, especially the impact that they are having on society and what it means to be human.
Key technologies and developments
A cluster of technologies, such as AI, sensors and communications infrastructure like 5G, has converged to allow the creation of new ways of doing things. Smart cities are an example of how such technologies can be used to control traffic flow, alert authorities when rubbish bins need emptying and spot potential criminal activity through facial recognition and gait analysis.
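To make the sensor-to-action idea concrete, here is a minimal sketch of a smart-city alerting loop in Python; the bin identifiers, fill-level threshold and alert function are purely illustrative assumptions, not taken from any real deployment.

```python
# Minimal sketch of a smart-city alerting loop: hypothetical fill-level
# sensors are polled and an alert is raised when a bin is nearly full.
from dataclasses import dataclass

@dataclass
class BinReading:
    bin_id: str
    fill_level: float  # 0.0 (empty) to 1.0 (full)

FILL_THRESHOLD = 0.8  # illustrative threshold, not from any real deployment

def collect_readings() -> list[BinReading]:
    # Stand-in for real sensor telemetry (e.g. delivered over an IoT network).
    return [BinReading("bin-001", 0.35), BinReading("bin-002", 0.92)]

def alert_authority(reading: BinReading) -> None:
    # In practice this would call a council's work-order system; here we just print.
    print(f"Collection requested for {reading.bin_id} ({reading.fill_level:.0%} full)")

for reading in collect_readings():
    if reading.fill_level >= FILL_THRESHOLD:
        alert_authority(reading)
```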
In the area of manufacturing, 3D printers can be used to create spare parts on demand or to fabricate a new product, tailored to the individual, like the insole of a shoe.
Biotechnology is also making great strides to improve health and lifespan. Visualisation of the body and internal organs in 3D, with an augmented reality overlay, offers scope for more precise and less invasive surgery. The Human Genome Project, started in 1990, has in just a few decades provided insight into the genes that cause diseases, while stem cell research holds out the prospect of growing cells in vitro to replace damaged cells in our bodies.
These are just a snapshot of some of the ways in which clusters of technologies are transforming our world. The main technologies that are contributing to 4IR along with example applications are shown in Table 1.
Although there are a number of different technologies contributing to 4IR, many regard AI as being at the heart of the disruption to business practice and society as a whole, notwithstanding the fact that AI has been around for several decades (Figure 1). The key to its current usefulness and rapid uptake lies not so much in new insights or discoveries in AI, or in how to model the brain, but in the convergence, over the past decade, of greater computing power and memory with the availability of huge amounts of free data. The development by Google, in the early 2000s, of a new business model that could turn a profit from the digital exhaust of our browsing activity heralded the age of free data, along with the angst that many now have about data privacy [1].
Data is now regarded by many as the “new oil” and the backbone of the digital economy. It is also crucial for training AI algorithms, but the performance and usefulness of AI applications in the past was hampered by a lack of such training data. Business models that collect vast amounts of data, without it being paid for, together with massive data farms to collect and store private data such as medical records, have been a boon for AI developers.
While Moore’s prediction about the doubling of computing power and memory roughly every 18 months has largely held true to date, we are now reaching the limits of what can be achieved with silicon chips. Quantum computing may be the next revolution in computer technology, but probably only in areas where it massively outperforms conventional computers at certain types of computation and problem-solving. For the foreseeable future, quantum computing, so far the preserve of only a few research labs around the world, is unlikely to have a significant impact on the 4IR.
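To give a sense of the compounding behind this claim, a doubling every 18 months (the popularly quoted form of Moore's law used above) implies roughly a hundred-fold increase in a decade and a ten-thousand-fold increase over two; the short calculation below is a sketch of that arithmetic only.

```python
# Compounding implied by a doubling every 18 months (the popularly quoted
# form of Moore's law); the 18-month period is the figure used in the text.
doubling_period_months = 18

def growth_factor(years: float) -> float:
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

for years in (5, 10, 20):
    print(f"{years} years -> ~{growth_factor(years):,.0f}x")
# 5 years  -> ~10x
# 10 years -> ~102x
# 20 years -> ~10,321x
```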
Of much greater immediate impact is the fusion of current technologies and computing capabilities that has allowed technology to encroach on civilisation at a speed unknown in our history. This has raised a number of ethical issues and concerns.
Ethical challenges
Previous Industrial Revolutions raised their own ethical challenges at the time, particularly the replacement of skilled work, like weaving, with more efficient mechanical looms, and the resulting exploitation of women and children in the unskilled labour force required to operate the looms. It took some time before these ethical issues were addressed, but automation increased productivity and created a whole new sphere of skilled jobs, such as accounting and management.
The technologies of the 4IR, along with the power of Big Tech behind their deployment, raise ethical issues that go beyond the future of employment; they strike at the heart of what it means to be human. As Klaus Schwab puts it:
I am a great enthusiast and early adopter of technology, but sometimes I wonder whether the inexorable integration of technology in our lives could diminish some of our quintessential human capacities, such as compassion and cooperation. Our relationship with our smartphones is a case in point. Constant connection may deprive us of one of life’s most important assets: the time to pause, reflect, and engage in meaningful conversation.
Similarly, the revolutions occurring in biotechnology and AI, which are redefining what it means to be human by pushing back the current thresholds of life span, health, cognition, and capabilities, will compel us to redefine our moral and ethical boundaries (Schwab, 2016).
There are several areas of our humanness that are being impacted and disrupted by 4IR technologies and their use. These are (Peckham, 2021):
Cognitive acuity
When AI learns and carries out skilled tasks that humans perform, reliance on automation leads to a loss of reasoning power, decision-making acuity and creativity.
Ability to relate to others
Over-engagement with digital assistants, robot toys, health-care robots and sex robots fosters the personification of artefacts and the development of non-human relationships, altering our ability to maintain or form true relationships with other humans.
Our children’s emotional and social growth is stunted and their ability to empathise is diminished along with the emotional maturity needed in normal human relationships and social interactions.
Personification of artefacts leads to feelings of ethical obligation and the desire to assign rights to them, amounting to idolatry.
Freedom and privacy
Freedom and privacy are eroded by the state’s surveillance of its citizens, whether through facial recognition and other biometric traits or the amassing of private data for running smart cities.
They are lost to an even greater degree through the amassing and processing of personal data by Big Tech for profit, without any real choice for consumers.
The free product or service offering model is an abuse of power, because consumers are seduced by Big Tech’s offerings without informed consent to the use of their data, consent that would in any event be impractical.
Moral agency
When we assign moral agency to a robot, such as a self-drive vehicle, to make moral decisions on our behalf, we effectively delegate a responsibility that is uniquely human.
Loss of work
The dignity of work is taken away as jobs are partially or completely replaced by AI and robots, except where the work is hazardous.
What is real
A loss of a sense of what is real through the blending of physical and virtual worlds. Immersion in virtual and augmented reality could lead to addiction, a loss of self-discipline, self-determination and control with a resulting loss of true community from isolation and virtual relationships.
The value of life, life expectancy, mortality
A loss of the sense of our mortality as age is extended and health improved. Genetic Engineering questions the value of life and seeks to play God.
Equality of access
One of the key concerns surrounding 4IR is the inequality that is likely to result from the disruption in the labour markets and the inability of poorer nations to access and deploy 4IR technologies. The Covid-19 pandemic brought this into sharp focus, as western countries deployed tracking apps and robots to carry out potentially dangerous tasks.
Robots were able to handle hazardous tasks such as disinfecting areas with ultraviolet light. The Wuchang field hospital in China used robots donated by CloudMinds to carry out tasks such as taking temperatures and delivering meals, as well as collecting old bedsheets and disposing of medical waste (Cooney, 2020).
Although people were allowed to visit stores during the pandemic for essential items such as food, online shopping increased dramatically. This spurred on the deployment of robots for tasks such as stock picking, so that humans could avoid working in the same space. Some have argued that the pandemic will accelerate the take-up of robots in many different industries, adding further pressure to the employment prospects of furloughed workers as countries return to normal working (Nichols, 2020).
Disruption of labour markets
In 4IR, the impact on work broadly splits between the effects on manual and skilled work, often referred to as blue-collar and white-collar work. Mechanical automation today tends to replace manual work rather than the skilled work displaced in the first Industrial Revolution, although those boundaries are diminishing. AI is now impacting human acquired skills, such as driving a vehicle or piloting a drone. Medical robots, still operated by experts, are assisting in delicate operations. At what point might a machine be able to totally replace a highly skilled surgeon for such tasks?
Many cognitive or skilled tasks, previously carried out by humans, are now being performed by AI algorithms, from data analysis in accountancy to medical image interpretation. Even areas that we would have thought of as creative, such as journalism, are impacted by software that can compile news reports from basic facts. Other creative areas, such as music composition or art, are not untouched by developers’ aspirations to stretch the boundaries of what can be done.
Oxford Economics estimates that some 20 million manufacturing jobs will be lost by the end of 2030 due to displacement by robots (Lambert and Cone, 2019). A global study conducted by the McKinsey Global Institute puts the number of jobs affected by automation, not just in manufacturing, at between 75 million and 375 million by 2030 – around 3%–14% of the global workforce (Manyika et al., 2017).
These scenarios are based on an analysis of what jobs could be automated by known technology, although, as Brynjolfsson has commented, many jobs carried out by skilled workers may be preserved, with automation improving their productivity (Brynjolfsson et al., 2018). In their analysis, Manyika et al. show that while new jobs will be created as automation progresses, they will typically require both technical and soft skills, with soft skills particularly needed in the service and care industries.
In the Oxford Economics study, the value created by robots, in terms of efficiency, increased gross domestic product (GDP) and profit, is expected to offset the impact on employment, an effect referred to as the “robotics dividend”. This, it is argued, will create more jobs as demand and spending increase, due to falling prices, rising incomes and higher tax revenues.
Looking at automation generally, rather than robotics alone, the McKinsey study also predicts growth in employment, with up to 250 million new jobs being created by 2030 from the new and additional work required to service demand for products and services. The McKinsey models show that rising incomes will, by a significant margin, contribute the most to creating these new jobs, followed by health care for ageing populations. These jobs are spread widely, with some manual and low-skilled work in construction, catering and hospitality, but many require higher educational qualifications and skills (Brynjolfsson et al., 2018, p. 64).
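As a simple sanity check on the ranges quoted above, the arithmetic below back-solves the global workforce size implied by the McKinsey figures (75 million to 375 million jobs said to represent roughly 3% to 14% of the global workforce); only the cited numbers are used, everything else is plain arithmetic.

```python
# Back-solving the global workforce size implied by the McKinsey ranges cited
# above: 75-375 million jobs affected, said to be ~3%-14% of the workforce.
low_jobs, high_jobs = 75e6, 375e6
low_share, high_share = 0.03, 0.14

print(f"Implied workforce (lower bound): {low_jobs / low_share / 1e9:.1f} billion")
print(f"Implied workforce (upper bound): {high_jobs / high_share / 1e9:.2f} billion")
# Both work out to roughly 2.5-2.7 billion, so the cited figures are mutually consistent.
```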
This is not a happy outcome for those who might lose their jobs and find it hard or impossible to re-skill. Many in low-skill jobs may not possess the soft skills required in the service industry. In times past, workers migrated from agriculture to manufacturing, and those displaced from manufacturing have had to find work in the service sector. While it is true that the service sector has grown, this growth masks the fact that many who have to make the transition may not have the ability to re-skill for the jobs created. Most studies agree that the real challenge will be in re-skilling the workforce and helping it to transition from one job to another. Despite this, spending on training in most Organisation for Economic Co-operation and Development countries has declined over the past 20 years. Even with education provided for most, up to pre-degree or vocational training level, will those left without work be able to re-train or improve their education to the level needed by these new jobs?
Democracy under threat
The riots on Capitol Hill in Washington, DC, on 6 January 2021, surrounding the fiercely contested election of the 46th President of the USA, were the culmination of years of our obsession with digital platforms such as social media. How have we got to such a place, where a country is so divided against itself and each side is convinced that it holds the truth? It is not just America that has the problem; communities around the world are increasingly divided and views polarised by multiple versions of the “truth”.
Social media promises to be our friend, to connect us to the world – what is not to like? Yet it all too easily ends up sucking us into a virtual world that separates us from reality. How is that possible? How can a civilised society be so gullible? It all comes down to the machine learning algorithms at the heart of social media and other digital platforms, which learn from our every word and click online. They are designed to draw us in by nudging, suggesting and filtering our news feeds, all to the end of increasing what marketeers call “engagement”. If Capitol Hill teaches us one thing, it is that these platforms are not really our friends!
On the surface, the free connectivity that social media platforms provide seems benign, but there is no such thing as a free lunch. The underlying business model that pays for our connecting to others is advertising. This might seem irritating rather than corrupting, but the truth is that profits stem from increased user engagement, and the algorithms used to achieve it do not care if we are viewing fake news or being radicalised in the process. This is the dark side of AI use.
Based upon an understanding of human psychology, these algorithms learn from our online activity how to nudge us in directions that are likely to achieve greater engagement – leading us down rabbit holes, drawing us into someone else’s view of the truth. The more we engage, the more we are pulled into a group with similar views, isolating us from other views – ultimately cutting us off from objective reality. This social bubble, and the news feeds it generates, becomes our reality.
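The mechanism described here can be illustrated with a toy ranking loop: a minimal sketch, not any platform's actual code, in which candidate posts are scored purely by predicted engagement learned from past clicks, so the feed inevitably narrows toward whatever the user already engages with.

```python
# Toy illustration of an engagement-maximising feed ranker: items similar to
# what the user has already clicked score higher, so the feed narrows toward
# the user's existing interests. A sketch, not any platform's real algorithm.
from collections import Counter

def predicted_engagement(item_topics: list[str], click_history: Counter) -> float:
    # Simple proxy: how often has the user clicked items sharing these topics?
    return sum(click_history[topic] for topic in item_topics)

def rank_feed(candidates: dict[str, list[str]], click_history: Counter) -> list[str]:
    return sorted(candidates,
                  key=lambda item: predicted_engagement(candidates[item], click_history),
                  reverse=True)

clicks = Counter({"conspiracy": 5, "sport": 1})  # hypothetical user history
candidates = {
    "post_a": ["sport"],
    "post_b": ["conspiracy"],
    "post_c": ["local_news"],
}
print(rank_feed(candidates, clicks))  # ['post_b', 'post_a', 'post_c']
```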
We have lost the art of debate and negotiation, which can only genuinely take place when we are physically in the same place, in ones and twos or small groups. The virtual world has taken this away while deluding us into thinking that we are more connected to each other than ever before.
Social media insulates us from the patience and commitment that are required of real and authentic relationships. It insulates us from the messiness that is a natural part of genuine relationships forged over time, with the inevitable disagreements and misunderstandings that are part of our being human. Social media feeds our egos and exploits our darker sides. These virtual worlds that we inhabit are gradually destroying our soul and, ultimately, our humanity.
An asymmetry of power
As in previous Industrial Revolutions, commerce is largely in the driving seat, but this revolution is driven mostly by a small number of Big Tech companies, such as Google, Amazon and Apple in the USA and Tencent, Baidu and Alibaba in China. A host of smaller specialist companies, spanning technologies such as AI, biotechnology and cryptocurrency, are also players but are often bought out by Big Tech.
These Big Tech companies have market values greater than the GDP of smaller countries, and this, together with their global reach, affords them unusual power and an ability to be in control. This is marketed to consumers as bettering their lives and their world. Yet, in reality, the business is about driving profits and market share, often at the expense of consumers, by creating addiction to the technology. Ask yourself or a colleague whether you would be willing to give up Facebook or Google search! Some people are reckoned to spend more time interacting with Alexa than with their spouse (Levy, 2016).
This asymmetry of power between Big Tech and consumers has effectively deprived consumers of freedom and privacy, leading to a situation where some companies know more about them than they know about themselves. This is likely to get worse as 4IR progresses without government intervention, because companies are unlikely to self-regulate. The convergence of increased computing power, vast memory and data has led to the rapid development of personalised products and services, and of artefacts that simulate increasingly human-like behaviour. All of this is designed to draw the consumer into the services and products on offer.
John Havens in his book, Heartificial Intelligence, makes an important point when he suggests that:
A majority of AI today is driven at an accelerated pace because it can be built before we decide if it should be (Havens, 2016).
Need for agile governance and leadership
Governments have been left struggling to keep up, with regulation falling far behind where it needs to be to protect humanity. The incessant lobbying of powerful vested interests seeks to keep legislation light, to enable the unfettered access to markets that Big Tech needs to thrive. With most nations now scrambling to develop regulatory frameworks for this tech revolution, Mark Zuckerberg pointed out, in his interview with EU Commissioner Thierry Breton, that unless the West develops a framework for the internet and digital world, then China will do so. He reminded the audience that we have very different value systems.
Most governance institutions are bureaucratic, but given the pace of change in 4IR, driven by Big Tech, they need to become much more agile. Even companies, normally more responsive to trends than public institutions, are having to learn to adapt to rapid changes in manufacturing, new ways of servicing existing needs and shifting consumer demand. Along with all of this come increasingly sophisticated threats to cybersecurity and a host of allied problems such as fake news and hacking. The stochastic nature of mainstream AI algorithms, in applications such as facial recognition and decision support, is open to bias and a lack of transparency, creating problems for fairness and justice, particularly for some ethnic groups.
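One common way such bias is surfaced is by comparing error rates across demographic groups. The sketch below computes per-group false-positive rates for a hypothetical decision system; the records and group labels are invented purely for illustration.

```python
# Comparing false-positive rates across groups for a hypothetical classifier,
# a common first check for the kind of bias described above. All data invented.
from collections import defaultdict

# (group, true_label, predicted_label) for a hypothetical decision system
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, truth, pred in records:
    if truth == 0:
        counts[group]["negatives"] += 1
        if pred == 1:
            counts[group]["fp"] += 1

for group, c in counts.items():
    rate = c["fp"] / c["negatives"]
    print(f"{group}: false-positive rate = {rate:.0%}")
# group_a: ~33%, group_b: ~67%, a disparity that would warrant scrutiny.
```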
This new Industrial Revolution will tax the leadership of industry and business itself, requiring new skill sets, courage and flexibility to ensure that companies protect privacy, ensure fairness and yet remain competitive and alert to change.
Perhaps even bigger than the impact on labour markets and business itself is the challenge that 4IR represents to our identity, what it means, fundamentally, to be a human being.
Challenging what it means to be human
As technologies simulate more and more human capabilities, the danger is that we come to rely on them and in so doing, dumb down our true humanity. Authentic relationships are diminished as we lose capacity to empathise, cognitive acuity is lost the more we look to machines to make decisions, and ultimately, as with self-drive vehicles, we hand over moral agency, a trait unique to humans.
As data is the new currency of 4IR, freedom and privacy have been lost, even as the EU seeks to lead the world with tougher data privacy laws. Without a fundamental challenge to the business model that provides free services and products in exchange for data, the battle is likely to remain lost.
The rapid ascendancy of this model has left public institutions and other private corporations believing that they now have a right to our data and the right to use it as they see fit. The rise of cryptocurrency on the back of blockchain technology provides a further challenge to our privacy and freedom, as central banks look to control the sector by insisting on their own compulsory digital currencies. Were such a development to take place, all currency activities would be recorded centrally and linked to our identity, effectively making it impossible to engage in any economic activity without the state knowing, a situation no different from China’s state control of its citizens now.
Lure of progress
Behind the seduction of digital technology and AI is the Enlightenment idea that progress is good and that progress is driven by science and technology. The Age of Enlightenment began in the 18th century in Europe and gradually spread around the world, fuelling the Industrial Revolution and the free market economies of the West. Human reason was seen as the source of knowledge, and advancement and progress would be achieved through scientific discovery and empiricism. French philosophers championed the idea of individual liberty and the separation of the state from religion.
Today, science and technology are widely seen as the drivers of progress, progress that will allow humanity to flourish. These ideas are embedded in much of our thinking and behaviour towards new technology. New is better than the old – we have all watched the queues for the latest iPhone, fan-fared as “the best iPhone we have produced”.
It is not surprising, therefore, that there is an implicit assumption that the technologies behind 4IR are good, that they will make our lives easier and more comfortable, and that they will enable humanity to flourish. Businesses strive for greater efficiencies, and we become people driven by what is convenient, without ever asking what we are losing and what this technology is doing to us.
Taken to its extreme, the Transhumanist philosophy that many leaders of high-tech companies subscribe to is nothing less than the transformation of the human condition through technology, including AI. Followers of this philosophy see the potential for humanity to be transformed into different beings, Posthumans, with greater abilities than mere humans, even potentially defying death through genetic engineering, drug therapy or uploading one’s brain.
Losing consciousness
An assumption that technology represents progress and that progress must be good has dulled our consciousness of whether it is right. We engage with social media, the internet, online shopping, smart cities and the latest gadgets without ever pausing to think about what they might be doing to our humanity, or how they might be changing our behaviour and relationships.
The fast pace of change is making us breathless and restless for the next new thing, so that we expect to move from job to job and even relationship to relationship, looking for something new, something better, something that will leave us more fulfilled.
Whether we like it or not, digital technology, in its various guises, is forming us and shaping who we are, especially the more human-like it becomes. Applications such as digital assistants become habit-forming without us really being aware of it. Ultimately, digital technology is alienating us from some part of our lives, our real humanness. It is shaping our sentiments and what we love, almost without our being aware, because everyone else is caught up in it. It has become a mediator between us and others, between us and our world; it has become a digital priesthood.
The more human-like and convenient technology becomes, the more it erases the distinction between online and offline, between embodied presence and the virtual. At the same time, it creates an illusion of greater control over our lives and our digital world. Yet the evidence is that this technology is already beginning to control us: children find it hard to take off the “lens” through which they see and interact with the world. Digital technology, and increasingly AI, is their world. This technology has become another priesthood, a mediator through which we interact with other people and through which we understand our world. Many have become reliant on this technology and are uncomfortable when it is taken away, finding themselves insecure and struggling emotionally to deal with people face to face. We have slipped into a digital bondage and become slaves to our digital world. The role of Big Tech and the state in depriving us of freedom and privacy amounts to no less than digital totalitarianism.
How then should we respond to these challenges?
How to respond?
We need to step back, pause and regain consciousness of what is happening around us; not necessarily to discard new technology, but rather to engage with it with informed minds, minds that have a clear view of whether it is helping or hindering our humanity.
It is time for us to realise that digital technology, and AI in particular, is having a profound effect on our souls and is leading us into captivity to the artefacts that we have made. The multitude of ethical guidelines being produced around the world is not going to provide the answer. Rather, we need to reclaim our souls by setting boundaries for our engagement with technology. Key questions that we must ask ourselves are: what is technology doing for us, what is it doing to us, what is gained and what is lost, why this and not that?
If we want to preserve the uniqueness of our humanity, we need a clear idea of what it means to be a human being. Those from a Judeo-Christian tradition will point to the ancient scriptures where God announces – “Let us make man in our image, after our likeness” [2] – what theologians call the “Imago Dei”. What this really means has intrigued philosophers and theologians from ancient times. Tertullian saw free will as the essential mark or stamp of the divine image. Augustine suggested that God and humans share some ontological component, trait or quality that essentially defines us: memory, intelligence and will. In the 16th century, the Reformers added our natural affections as an attribute of the God who made us.
The traditions of Western democracies were founded on these ideas and enshrined in various aspects of current law, from human rights to privacy. The concept of fairness and justice for all owes its origins to the teaching of the bible and the concept of a just God. These are all themes that are picked up by many activists around the world, of various faiths or none, when dealing with 4IR technologies and their impacts on individuals, marginalised groups and society as a whole.
Yet not everyone agrees about what a human being is; some see us as functionally no more than sophisticated computers, akin to the algorithms (software programs) that comprise AI. The bleak view of historian and popular author Yuval Harari suggests that:
Over the last century, as scientists opened up the Sapiens black box, they discovered there neither soul, nor free will, nor ‘self’ – but only genes, hormones and neurons that obey the same physical and chemical laws governing the rest of reality (Harari, 2016b).
In this “cause and effect” view of Harari, decisions are determined by prior events and there is no “free will” to choose. If this view is right, then computers can emulate humans, because we are deterministic physical entities that respond programmatically to external stimuli.
Neuroscientist and philosopher Sam Harris also argues against free will in his book Free Will, stating:
Free will is an illusion. Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we’re unaware and over which we exert no conscious control. (Harris, 2012).
Harris’s dismissal of free will, and his resulting conclusions about morality, are illogical when he suggests that the absence of free will “need not” entail the end of morality and that “[w]hat we condemn most in another person is the conscious intention to do harm” (Harris, 2012, p. 52). Really! If we have no free will, then we have no ability to decide to do what is right: we are merely agents of chance. If we are no more than genes, hormones and neurons, why would we care what 4IR technologies do to us, or what we might lose?
The problem with this debate is that many neuroscientists and others start with the premise that free will, if it exists, cannot be metaphysical, and in order to debunk this idea they try to show from neuroscience that the brain is only material. This is simply a straw man, designed to prove that materialism is all there is. Free will is recognised by many to be foundational to moral responsibility, regardless of whether one thinks morality has external agency or not.
The debate about what we may or may not be able to create in AI is influenced by our view of what it means to be human. What we think about free will has a significant bearing on how we regard the prospects for AI. As Harari points out:
Doubting free will is not just a philosophical exercise. It has practical implications. If organisms indeed lack free will, it implies that we can manipulate and even control their desires using drugs, genetic engineering or direct brain stimulation (Harari, 2016a).
Giving up on free will means that some will be happy to transfer authority to a machine or computer and let it make the decisions – as AI becomes more intelligent than us, the argument goes, we should let it make the decisions. This is an argument made in support of self-drive vehicles: as machines operate far more safely than humans do, these vehicles will significantly reduce accidents caused by human error or fatigue. Effectively, when we get into a self-drive taxi or our own self-drive vehicle, we are handing over authority to that vehicle. Is it too much of a stretch to suggest that even participating in a dating application, where algorithms provide a best-match partner for us, is a move in the direction Harari is envisioning?
For those of us who take a different view of what it means to be human, we must wrestle with, and act upon, the implications of 4IR technologies for us, if we are to preserve these unique aspects of our humanity. The public debate stimulated by documentaries like The Social Dilemma [3] and Coded Bias [4] suggests that a good number of people do care about the impact of technology, and of AI in particular, on humanity.
When thinking about what to do when AI applications throw up ethical dilemmas, I propose that virtue should shape our response. The idea that good societies are virtuous has a long history, stretching back millennia to the ancient scriptures of the bible. Plato proposed that there were four cardinal virtues on which the character of a good city hinges – prudence (or wisdom), justice, temperance (or self-control) and courage (Plato) [5]. To these, Christians added love – itself a key characteristic of what it means to be a human being. Although framed slightly differently, the idea of preserving a virtuous society is highlighted by the Institute of Electrical and Electronics Engineers (IEEE) Standards Committee for Ethical Design in AI (IEEE, 2019).
The virtue-based process I am advocating, shown in Figure 2, provides a framework for resolving tensions between preserving aspects of human nature such as freedom, and other laudable goals such as protecting citizens. It is a process that builds on our analysis of the impact of technology on personhood and then allows us to determine what to do.
Let us look at how this might work out in practice. Predictive policing, using machine learning and other AI applications, is clearly something that police forces, and even governments, might deem to be cost-effective policing. Such technology amplifies the resources available for tracking terrorists or criminals and potentially provides safer societies, as no doubt the Chinese Government would argue. Yet the downside is wrongful arrests and loss of privacy and freedom of movement, as the Uyghurs in China have found. What is the virtuous way forward?
Justice and wisdom would suggest that the downsides outweigh the benefits, because it is better to have a free society that does not fear the state and the potential for wrongful arrest. Millions of Uyghurs are persecuted in China as a result of mass surveillance, all in the name of preventing terrorism. Is it wise to allow ourselves to slip down that path?
Courage would be required to follow the path of wisdom and justice and decide not to adopt such technology. These are ultimately decisions of the state, but citizens need to express their views, and in many Western countries these are being solicited. Ultimately, we may be treading a path that results in more, rather than less, terrorist or criminal activity, but that is a price to pay, and one that many societies may actually prefer to increasing state surveillance and control.
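The reasoning just walked through can be summarised in a simple, illustrative encoding of the process in Figure 2: list the impacts on the personhood attributes identified earlier, weigh them through the virtues, and only then decide. The attribute names, harm scores and decision rule below are illustrative assumptions, not the detail of the published framework.

```python
# A simplified, illustrative encoding of the virtue-based evaluation process:
# the attributes, scores and decision rule are assumptions for this sketch,
# not the detail of the framework in Figure 2.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    application: str
    benefits: list[str]
    # Harm to each personhood attribute on a 0 (none) to 3 (severe) scale.
    harms: dict[str, int] = field(default_factory=dict)

    def virtue_review(self) -> str:
        total_harm = sum(self.harms.values())
        severe = [a for a, h in self.harms.items() if h >= 3]
        # Wisdom/justice: any severe harm to personhood outweighs efficiency gains;
        # courage is then needed to decline adoption despite the benefits.
        if severe or total_harm > len(self.harms):
            return f"decline or constrain ({', '.join(severe) or 'cumulative harm'})"
        return "adopt with safeguards"

predictive_policing = Assessment(
    application="predictive policing with facial recognition",
    benefits=["amplified police resources", "potentially safer streets"],
    harms={"freedom and privacy": 3, "moral agency": 1, "human relationships": 1},
)
print(predictive_policing.virtue_review())  # decline or constrain (freedom and privacy)
```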
The Fourth Industrial Revolution is well under way, and so far we have been slow to deal with the ethical challenges that it is presenting. This is in part due to the inertia of governments and regulators, seeking to balance public harms against economic progress. Perhaps to a larger degree, the lobbying and power of large corporations with vested interests create the biggest stumbling block to putting our humanity first. As Shoshana Zuboff has commented, “We can have democracy, or we can have a surveillance society, but we cannot have both” (Zuboff, 2021). It is my hope that this paper will contribute constructively to the ongoing debate about the ethical issues surrounding the adoption of 4IR technologies and business models.
Figures

Figure 2.
A framework for evaluating the impact of AI applications on personhood and what to do (from Peckham, 2021). While the focus is on AI, this approach has application to any of the 4IR technologies
Table 1. Main technologies that are contributing to 4IR
Technology | Example applications
---|---
3D printing | Adidas scans your gait and styles a shoe just for you
AI | Facial recognition used to unlock your smartphone or for mass surveillance
IoT | Fridge connected to the internet to reorder contents as they are used
Robotics | Autonomous vacuum cleaners, stock or fruit pickers
Biotechnology | Growing replacement organs
Materials science | Lighter and stronger materials
Quantum computing | Modelling the human brain
Energy storage | Electric cars
Blockchain | Cryptocurrency
Notes
1. For an extensive analysis of this thesis, see: Zuboff, S. (2019).
2. Genesis 1:26, English Standard Version of the bible.
5. Plato, The Republic, Book IV.
References
Brynjolfsson, E., Mitchell, T. and Rock, D. (2018), “What can machines learn, and what does it mean for occupations and the economy?”, AEA Papers and Proceedings, Vol. 108, pp. 43-47.
Cooney, C. (2020), “Coronavirus hospital ward staffed by robots opens in Wuhan to protect medics”, New York Post, 10 March 2020, available at: https://nypost.com/2020/03/10/coronavirus-hospital-ward-staffed-by-robots-opens-in-wuhan-to-protect-medics (accessed 1 May 2020).
Harari, Y. (2016a), Homo Deus, Random House, p. 286.
Harari, Y. (2016b), Homo Deus: A Brief History of Tomorrow, Harvill Secker, London, p. 329.
Harris, S. (2012), Free Will, Free Press, New York, NY, p. 5.
Havens, J. (2016), Heartificial Intelligence – Embracing Our Humanity to Maximise Machines, Penguin, New York, NY, p. 72.
IEEE (2019), “The IEEE global initiative on ethics of autonomous and intelligent systems”, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, available at: https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html (accessed 1 August 2019).
Lambert, J. and Cone, E. (2019), “How robots change the world”, Oxford Economics, June 2019, p. 21.
Levy, H. (2016), “Gartner predicts a virtual world of exponential change”, Smarter with Gartner, 18 October 2016.
Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R. and Sanghvi, S. (2017), “Jobs lost, jobs gained: workforce transitions in a time of automation”, McKinsey Global Institute, December 2017, pp. vi, 2, 11 and 77.
Nichols, G. (2020), “Robots are taking over during Covid-19 (and there’s no going back)”, ZDNet, 29 April 2020, available at: https://www.zdnet.com/article/robotics-firms-seeing-strong-backing-during-covid-19-pandemic/ (accessed 4 May 2020).
Peckham, J. (2021), Masters or Slaves? – AI and the Future of Humanity, IVP, London.
Schwab, K. (2016), “The fourth industrial revolution: what it means, how to respond”, World Economic Forum, available at: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/ (accessed 28 April 2020).
Zuboff, S. (2019), The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, London.
Zuboff, S. (2021), “The coup we are not talking about”, The New York Times, 29 January 2021.