The great filter and an unfriendly artificial general intelligence might each pose an existential risk to humanity, but these two risks are anti-correlated. The purpose of this paper is to consider the implications of having evidence that mankind faces significant peril from both of these risks.
This paper constructs Bayesian models under which we might obtain evidence of being at risk from two perils while knowing that at most one of them can actually strike us.
Humanity should perhaps be more optimistic about its long-term survival if we have convincing evidence that both of these risks are real than if we have equally convincing evidence that only one of the perils is likely to strike us.
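The optimism claim can be illustrated with a toy Bayesian calculation. The sketch below is not the paper's actual model; all hypotheses, priors, and likelihoods are illustrative assumptions. It supposes that at most one peril (the great filter or an unfriendly AGI) can be real, and that our risk-detection methods are either reliable or systematically alarmist. Seeing warning signals for *both* perils, when at most one can be real, is evidence that our methods are alarmist, which discounts both warnings:

```python
from itertools import product

# Which peril is real? By assumption, at most one can strike.
RISKS = ["filter", "agi", "none"]
PRIOR_RISK = {"filter": 0.3, "agi": 0.3, "none": 0.4}    # illustrative priors

# Our risk-detection methods may be systematically alarmist.
PRIOR_METHOD = {"reliable": 0.7, "alarmist": 0.3}        # illustrative prior

def p_signal(fires, signal_for, real_risk, method):
    """Probability that the warning signal for `signal_for` fires (or stays silent)."""
    if method == "alarmist":
        p_fire = 0.9                                     # alarmist methods cry wolf
    else:
        p_fire = 0.9 if real_risk == signal_for else 0.05
    return p_fire if fires else 1.0 - p_fire

def posterior_doom(sees_filter, sees_agi):
    """Posterior probability that *some* peril is real, given the observed signals."""
    joint = {}
    for risk, method in product(RISKS, PRIOR_METHOD):
        likelihood = (p_signal(sees_filter, "filter", risk, method)
                      * p_signal(sees_agi, "agi", risk, method))
        joint[(risk, method)] = PRIOR_RISK[risk] * PRIOR_METHOD[method] * likelihood
    total = sum(joint.values())
    doom = sum(v for (risk, _), v in joint.items() if risk != "none")
    return doom / total

both = posterior_doom(True, True)    # evidence of both perils
one = posterior_doom(True, False)    # evidence of only one peril
print(f"P(doom | both signals) = {both:.3f}")
print(f"P(doom | one signal)   = {one:.3f}")
```

Under these illustrative numbers, the posterior probability of doom given signals for both perils (about 0.63) is lower than given a signal for only one (about 0.89): the second signal, being necessarily partly false, undermines the credibility of the first.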
This paper derives the implications of being greatly concerned about both an unfriendly artificial general intelligence and the great filter.
The author is grateful to the Gothenburg Chair Programme for Advanced Studies for holding a series of workshops during which the idea behind this paper was presented, to Stuart Armstrong for providing detailed comments on an earlier draft of this paper and to two anonymous referees and an editor for providing further feedback. All errors are the author’s own.
Copyright © 2018, Emerald Publishing Limited