Search results

1 – 10 of over 2000
Article
Publication date: 30 October 2018

Phil Torres

Abstract

Purpose

This paper provides a detailed survey of the greatest dangers facing humanity this century. It argues that there are three broad classes of risks – the “Great Challenges” – that deserve our immediate attention, namely, environmental degradation, which includes climate change and global biodiversity loss; the distribution of unprecedented destructive capabilities across society by dual-use emerging technologies; and value-misaligned algorithms that exceed human-level intelligence in every cognitive domain. After examining each of these challenges, the paper then outlines a handful of additional issues that are relevant to understanding our existential predicament and could complicate attempts to overcome the Great Challenges. The central aim of this paper is to constitute an authoritative resource, insofar as this is possible in a scholarly journal, for scholars who are working on or interested in existential risks. In the author’s view, this is precisely the sort of big-picture analysis that humanity needs more of, if we wish to navigate the obstacle course of existential dangers before us.

Design/methodology/approach

Comprehensive literature survey that culminates in a novel theoretical framework for thinking about global-scale risks.

Findings

If humanity wishes to survive and prosper in the coming centuries, then we must overcome three Great Challenges, each of which is sufficient to cause a significant loss of expected value in the future.

Originality/value

The Great Challenges framework offers a novel scheme that highlights the most pressing global-scale risks to human survival and prosperity. The author argues that the “big-picture” approach of this paper exemplifies the sort of scholarship that humanity needs more of to properly understand the various existential hazards that are unique to the twenty-first century.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 29 November 2018

Karin Kuhlemann

Abstract

Purpose

This paper aims to consider a few cognitive and conceptual obstacles to engagement with global catastrophic risks (GCRs).

Design/methodology/approach

The paper starts by considering cognitive biases that affect general thinking about GCRs, before questioning whether existential risks really are dramatically more pressing than other GCRs. It then sets out a novel typology of GCRs – sexy vs unsexy risks – before considering a particularly unsexy risk, overpopulation.

Findings

It is proposed that many risks commonly regarded as existential are “sexy” risks, while certain other GCRs are comparatively “unsexy.” In addition, it is suggested that a combination of complexity, cognitive biases and a hubris-laden failure of imagination leads us to neglect the most unsexy and pervasive of all GCRs: human overpopulation. The paper concludes with a tentative conceptualisation of overpopulation as a pattern of risking.

Originality/value

The paper proposes and conceptualises two new concepts, sexy and unsexy catastrophic risks, as well as a new conceptualisation of overpopulation as a pattern of risking.

Article
Publication date: 11 October 2018

James Daniel Miller

Abstract

Purpose

The great filter and an unfriendly artificial general intelligence might pose existential risks to humanity, but these two risks are anti-correlated. The purpose of this paper is to consider the implications of having evidence that mankind is at significant peril from both these risks.

Design/methodology/approach

This paper creates Bayesian models under which one might get evidence for being at risk for two perils when we know that we are at risk for at most one of these perils.
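The abstract does not reproduce the models themselves, so the following is only a minimal sketch of the kind of Bayesian update described: two mutually exclusive perils plus the possibility that neither is real, updated on evidence suggesting both. Every prior and likelihood below is an invented assumption, and the toy numbers illustrate the structure of the update rather than the paper's results.

```python
# Toy Bayesian update (all numbers are hypothetical assumptions, not taken
# from the paper): two mutually exclusive perils, the great filter (F) and an
# unfriendly AGI (A), plus the possibility that neither is real, updated on
# noisy evidence suggesting that *both* perils are real.

# Prior over the mutually exclusive hypotheses (at most one peril is real).
priors = {"filter_only": 0.3, "agi_only": 0.3, "neither": 0.4}

# Assumed chance of observing "evidence for F" / "evidence for A" under each
# hypothesis; evidence can be misleading, so it can appear under any of them.
p_ev_filter = {"filter_only": 0.8, "agi_only": 0.2, "neither": 0.2}
p_ev_agi    = {"filter_only": 0.2, "agi_only": 0.8, "neither": 0.2}

# Likelihood of seeing evidence for BOTH perils, treating the two observations
# as conditionally independent given the hypothesis.
likelihood = {h: p_ev_filter[h] * p_ev_agi[h] for h in priors}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalised = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalised.values())
posterior = {h: weight / total for h, weight in unnormalised.items()}

for hypothesis, prob in posterior.items():
    print(f"P({hypothesis} | evidence for both perils) = {prob:.3f}")
```

Because the hypotheses are mutually exclusive, observing evidence for both perils forces the posterior to treat at least one body of evidence as misleading; that tension is what the paper's models explore.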

Findings

Humanity should possibly be more optimistic about its long-term survival if we have convincing evidence for believing that both these risks are real than if we have such evidence for thinking that only one of these perils would likely strike us.

Originality/value

The paper derives the implications of being greatly concerned about both an unfriendly artificial general intelligence and the great filter.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 9 April 2019

Olle Häggström and Catherine Rhodes

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Book part
Publication date: 6 September 2021

Christian Fuchs

Abstract

In 2020, the coronavirus crisis ruptured societies and their everyday life around the globe. This chapter is a contribution to critically theorising the changes societies have undergone in the light of the coronavirus crisis. It asks: How have everyday life and everyday communication changed in the coronavirus crisis? How does capitalism shape everyday life and everyday communication during this crisis?

This chapter focuses on how social space, everyday life and everyday communication have changed in the coronavirus crisis.

The coronavirus crisis is an existential crisis of humanity and society. It radically confronts humans with death and the fear of death. This collective experience can on the one hand result in new forms of solidarity and socialism or can on the other hand, if ideology and the far-right prevail, advance war and fascism. Political action and political economy are decisive factors in such a profound crisis that shatters society and everyday life.

Expert briefing
Publication date: 30 April 2021

Plaintiffs range from users, customers, app developers, investors, competitors and employees (current and former) to law enforcement and tax agencies. They are seeking redress…

Details

DOI: 10.1108/OXAN-DB261220

ISSN: 2633-304X

Article
Publication date: 8 October 2018

Karim Jebari and Joakim Lundborg

Abstract

Purpose

The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions that are of importance to the argument that AI could pose an existential risk and, if so, the character of that risk.

Design/methodology/approach

This paper distinguishes between “intelligence”, the cognitive capacity of an individual, and “techne”, a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne varies more across individuals than human intelligence suggests that if machine techne were to surpass human techne, the transition is likely to be prolonged rather than explosive.
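The variance claim lends itself to a small numerical illustration. The sketch below is not from the paper: the distributions, the machine's growth rate and the thresholds are all hypothetical choices, made only to show why a wider spread of individual ability turns the crossover into a drawn-out transition.

```python
# Hypothetical illustration of the variance argument (all numbers are invented
# assumptions): when individual "techne" is widely dispersed, a steadily
# improving machine overtakes people one by one over a long stretch of time;
# when it is narrowly dispersed, the crossover happens almost at once.
import random

random.seed(0)

def fraction_surpassed(human_levels, machine_level):
    """Share of the population whose techne the machine has exceeded."""
    return sum(level < machine_level for level in human_levels) / len(human_levels)

# Two hypothetical populations with the same mean techne but different spread.
low_variance_humans = [random.gauss(100, 5) for _ in range(10_000)]
high_variance_humans = [random.gauss(100, 30) for _ in range(10_000)]

# A machine whose techne improves by a fixed (assumed) amount each year.
for population, label in [(low_variance_humans, "low-variance population"),
                          (high_variance_humans, "high-variance population")]:
    transition_years = 0
    for year in range(100):
        machine_level = 20 + 2 * year  # starts well below the human mean
        share = fraction_surpassed(population, machine_level)
        if 0.01 < share < 0.99:  # machine beats some, but not yet nearly all
            transition_years += 1
    print(f"{label}: transition spans about {transition_years} years")
```

With these assumed numbers, the overtaking of the high-variance population stretches across several decades of the simulated century, while for the low-variance population it is compressed into roughly a decade, which is the prolonged-versus-explosive contrast the abstract draws.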

Findings

Some constraints for the intelligence explosion scenario are presented that imply that AI could be controlled by human organizations.

Originality/value

If true, this argument suggests that efforts should focus on devising strategies to control AI rather than strategies that assume that such control is impossible.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 5 September 2018

Alexey Turchin and Brian Patrick Green

Abstract

Purpose

Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on islands like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases and other risks. This paper aims to explore how to use the advantages of islands for survival during global catastrophes.

Design/methodology/approach

Preliminary horizon scanning based on the application of research principles established in the previous literature on global catastrophic risks.

Findings

The large number of islands on Earth, and their diverse conditions, increase the chance that one of them will provide protection from a catastrophe. Additionally, this protection could be increased if an island was used as a base for a nuclear submarine refuge combined with underground bunkers and/or extremely long-term data storage. The requirements for survival on islands, their vulnerabilities and ways to mitigate and adapt to risks are explored. Several existing islands, suitable for the survival of different types of risk, timing and budgets, are examined. Islands suitable for different types of refuges and other island-like options that could also provide protection are also discussed.

Originality/value

The possible use of islands as refuges from social collapse and existential risks has not been previously examined systematically. This paper contributes to the expanding research on survival scenarios.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 15 May 2021

Konrad Szocik and Rakhat Abylkasymova

Abstract

Purpose

The current COVID-19 pandemic challenges health-care ethics. Among the most important challenges are the allocation of medical resources and the duty to treat, often addressed to medical personnel. This paper suggests that there are good reasons to rethink our health-care ethics for future global catastrophic risks. The current pandemic shows how challenging resource allocation can be even in a relatively small catastrophic event such as the COVID-19 pandemic. The authors argue that any larger future catastrophe may require new guidelines for the allocation of medical resources. The idea of assisted dying is considered as a hypothetical scenario.

Design/methodology/approach

This is a conceptual work based on analysis at the intersection of risk studies, health-care ethics and future studies. The study builds its argument on the assumption that the COVID-19 pandemic should be treated as a sort of global catastrophic risk; a review of the currently published peer-reviewed academic literature found no such attempts, which is crucial for the meta-analysis. The study shows why and how the current pandemic can be interpreted in terms of global catastrophic risk even if, strictly speaking, COVID-19 does not meet all the criteria required in risk studies to be called a global catastrophe.

Findings

We can expect the emergence of discriminatory selection policies that will require actions from future patients, such as genetic engineering. Even then, it is inevitable that a large number of survivors will require medical assistance that they have no chance of receiving. This is why this study considers the concept of assisted dying, understood as an official protocol for health-care ethics and resource allocation policy in emergency situations. A possibly more controversial idea discussed in this paper is assisted dying for those who cannot receive the required medical help; such a procedure could be applied on a mass scale during a global catastrophic event.

Research limitations/implications

Philosophers and ethicists should identify and study all possible pros and cons of this discrimination rule. As the findings above suggest, a reliable point of reference is the concept of substantial human enhancement. Human enhancement as such, though widely debated, should be studied in the specific context of discrimination among patients in access to limited medical resources. Last but not least, the scientific community should study the concept of assisted dying as it could apply to those survivors who have no chance of obtaining medical care. Criteria and concepts such as cost-benefit analysis, the ethics of quality of life, patient autonomy and the duty of medical personnel should be considered.

Practical implications

Politicians and policymakers should prepare protocols for global catastrophes in which these discrimination criteria would have to be applied. The same applies to the development of medical robotics aimed at replacing human health-care personnel. We regard this as an important implication for practical health-care policy. Our prediction, however plausible, is not a good scenario for humanity; given this realistic development trajectory, we should do everything possible to prevent the need for the discriminatory rules in medical care described above.

Originality/value

This study offers the idea of assisted dying as a health-care policy in emergency situations. The authors expect that future global catastrophes – with the current pandemic only a mild prelude – will force a radical change in moral values and medical standards. New criteria of selection and discrimination will be perceived as much more exclusivist and unfair than the criteria applied today.

Details

International Journal of Human Rights in Healthcare, vol. 15 no. 4
Type: Research Article
ISSN: 2056-4902

Book part
Publication date: 15 July 2020

Keith A. Abney

Abstract

New technologies, including artificial intelligence (AI), have helped us begin to take our first steps off Earth and into outer space. But conflicts inevitably will arise and, in the absence of settled governance, may be resolved by force, as is typical for new frontiers. But the terrestrial assumptions behind the ethics of war will need to be rethought when the context radically changes, and both the environment of space and the advent of robotic warfighters with superhuman capabilities will constitute such a radical change. This essay examines how new autonomous technologies, especially dual-use technologies, and the challenges to human existence in space will force us to rethink the ethics of war, both from space to Earth, and in space itself.
