
When two existential risks are better than one

James Daniel Miller (Smith College, South Deerfield, Massachusetts, USA)

Foresight

ISSN: 1463-6689

Article publication date: 11 October 2018

Issue publication date: 11 March 2019


Abstract

Purpose

The great filter and an unfriendly artificial general intelligence might each pose an existential risk to humanity, but the two risks are anti-correlated. The purpose of this paper is to consider the implications of having evidence that mankind is at significant peril from both of these risks.

Design/methodology/approach

This paper constructs Bayesian models under which we might obtain evidence of being at risk from two perils even though we know we are at risk from at most one of them.

Findings

Humanity should arguably be more optimistic about its long-term survival if it has convincing evidence that both of these risks are real than if it has such evidence that only one of these perils is likely to strike.

Originality/value

The paper derives the implications of being greatly concerned about both an unfriendly artificial general intelligence and the great filter.


Acknowledgements

The author is grateful to the Gothenburg Chair Programme for Advanced Studies for holding a series of workshops during which the idea behind this paper was presented, to Stuart Armstrong for providing detailed comments on an earlier draft of this paper and to two anonymous referees and an editor for providing further feedback. All errors are the author's own.

Citation

Miller, J.D. (2019), "When two existential risks are better than one", Foresight, Vol. 21 No. 1, pp. 130-137. https://doi.org/10.1108/FS-04-2018-0038

Publisher

Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited
