The intelligence explosion revisited

Karim Jebari (Institute for Futures Studies, Stockholm, Sweden)
Joakim Lundborg (Wrapp, Stockholm, Sweden)

Foresight

ISSN: 1463-6689

Article publication date: 8 October 2018

Issue publication date: 11 March 2019

Abstract

Purpose

The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions that are of importance to the argument that AI could pose an existential risk and, if so, to characterize that risk.

Design/methodology/approach

This paper distinguishes between “intelligence”, the cognitive capacity of an individual, and “techne”, a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne varies more across individuals than human intelligence suggests that if machine techne were to surpass human techne, the transition is likely to be prolonged rather than explosive.

Findings

Some constraints for the intelligence explosion scenario are presented that imply that AI could be controlled by human organizations.

Originality/value

If true, this argument suggests that efforts should focus on devising strategies to control AI rather than on strategies that assume such control is impossible.

Citation

Jebari, K. and Lundborg, J. (2019), "The intelligence explosion revisited", Foresight, Vol. 21 No. 1, pp. 167-174. https://doi.org/10.1108/FS-04-2018-0042

Publisher

Emerald Publishing Limited

Copyright © 2018, Emerald Publishing Limited