Search results

1 – 4 of 4
Article
Publication date: 8 October 2018

Karim Jebari and Joakim Lundborg

Abstract

Purpose

The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions underpinning the argument that AI could pose an existential risk and, if so, what the character of that risk would be.

Design/methodology/approach

This paper distinguishes between “intelligence”, the cognitive capacity of an individual, and “techne”, a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne varies more across individuals than human intelligence does suggests that if machine techne were to surpass human techne, the transition would likely be prolonged rather than explosive.

Findings

Some constraints for the intelligence explosion scenario are presented that imply that AI could be controlled by human organizations.

Originality/value

If true, this argument suggests that efforts should focus on devising strategies to control AI rather than on strategies that assume such control is impossible.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 5 September 2018

Alexey Turchin and Brian Patrick Green

Abstract

Purpose

Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases and other risks. This paper aims to explore how to use the advantages of islands for survival during global catastrophes.

Design/methodology/approach

Preliminary horizon scanning, applying research principles established in the prior literature on global catastrophic risk.

Findings

The large number of islands on Earth, and their diverse conditions, increase the chance that one of them will provide protection from a catastrophe. Additionally, this protection could be increased if an island were used as a base for a nuclear submarine refuge combined with underground bunkers and/or extremely long-term data storage. The requirements for survival on islands, their vulnerabilities and ways to mitigate and adapt to risks are explored. Several existing islands, suitable for surviving different types of risk and for different timings and budgets, are examined. Islands suitable for different types of refuges, and other island-like options that could also provide protection, are also discussed.

Originality/value

The possible use of islands as refuges from social collapse and existential risks has not been previously examined systematically. This paper contributes to the expanding research on survival scenarios.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 9 April 2019

Olle Häggström and Catherine Rhodes

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 25 October 2018

Olle Häggström

Abstract

Purpose

This paper aims to contribute to the futurology of a possible artificial intelligence (AI) breakthrough, by reexamining the Omohundro–Bostrom theory for instrumental vs final AI goals. Does that theory, along with its predictions for what a superintelligent AI would be motivated to do, hold water?

Design/methodology/approach

The standard tools of systematic reasoning and analytic philosophy are used to probe possible weaknesses of Omohundro–Bostrom theory from four different directions: self-referential contradictions, Tegmark’s physics challenge, moral realism and the messy case of human motivations.

Findings

The two cornerstones of Omohundro–Bostrom theory – the orthogonality thesis and the instrumental convergence thesis – are both open to various criticisms that question their validity and scope. These criticisms are, however, far from conclusive: while they do suggest that a reasonable amount of caution and epistemic humility should attach to predictions derived from the theory, further work will be needed to clarify its scope and to put it on more rigorous foundations.

Originality/value

The practical value of being able to predict AI goals and motivations under various circumstances cannot be overstated: the future of humanity may depend on it. Currently, the only framework available for making such predictions is Omohundro–Bostrom theory, and the value of the present paper is to demonstrate its tentative nature and the need for further scrutiny.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689
