Search results

1 – 10 of over 3000
Expert briefing
Publication date: 7 July 2023

A particularly sensitive strand of this debate focuses on ‘existential’ risks. This concern was voiced in a terse but influential recent statement by the Center for AI Safety (CAIS)…

Details

DOI: 10.1108/OXAN-DB280345

ISSN: 2633-304X

Keywords

Geographic
Topical
Article
Publication date: 30 October 2018

Phil Torres

This paper provides a detailed survey of the greatest dangers facing humanity this century. It argues that there are three broad classes of risks – the “Great Challenges” – that…

Abstract

Purpose

This paper provides a detailed survey of the greatest dangers facing humanity this century. It argues that there are three broad classes of risks – the “Great Challenges” – that deserve our immediate attention, namely, environmental degradation, which includes climate change and global biodiversity loss; the distribution of unprecedented destructive capabilities across society by dual-use emerging technologies; and value-misaligned algorithms that exceed human-level intelligence in every cognitive domain. After examining each of these challenges, the paper then outlines a handful of additional issues that are relevant to understanding our existential predicament and could complicate attempts to overcome the Great Challenges. The central aim of this paper is to constitute an authoritative resource, insofar as this is possible in a scholarly journal, for scholars who are working on or interested in existential risks. In the author’s view, this is precisely the sort of big-picture analysis that humanity needs more of, if we wish to navigate the obstacle course of existential dangers before us.

Design/methodology/approach

Comprehensive literature survey that culminates in a novel theoretical framework for thinking about global-scale risks.

Findings

If humanity wishes to survive and prosper in the coming centuries, then we must overcome three Great Challenges, each of which is sufficient to cause a significant loss of expected value in the future.

Originality/value

The Great Challenges framework offers a novel scheme that highlights the most pressing global-scale risks to human survival and prosperity. The author argues that the “big-picture” approach of this paper exemplifies the sort of scholarship that humanity needs more of to properly understand the various existential hazards that are unique to the twenty-first century.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Keywords

Article
Publication date: 29 November 2018

Karin Kuhlemann

This paper aims to consider a few cognitive and conceptual obstacles to engagement with global catastrophic risks (GCRs).

Abstract

Purpose

This paper aims to consider a few cognitive and conceptual obstacles to engagement with global catastrophic risks (GCRs).

Design/methodology/approach

The paper starts by considering cognitive biases that affect general thinking about GCRs, before questioning whether existential risks really are dramatically more pressing than other GCRs. It then sets out a novel typology of GCRs – sexy vs unsexy risks – before considering a particularly unsexy risk, overpopulation.

Findings

It is proposed that many risks commonly regarded as existential are “sexy” risks, while certain other GCRs are comparatively “unsexy.” In addition, it is suggested that a combination of complexity, cognitive biases and a hubris-laden failure of imagination leads us to neglect the most unsexy and pervasive of all GCRs: human overpopulation. The paper concludes with a tentative conceptualisation of overpopulation as a pattern of risking.

Originality/value

The paper proposes and conceptualises two new concepts, sexy and unsexy catastrophic risks, as well as a new conceptualisation of overpopulation as a pattern of risking.

Article
Publication date: 11 October 2018

James Daniel Miller

The great filter and an unfriendly artificial general intelligence might pose existential risks to humanity, but these two risks are anti-correlated. The purpose of this paper is…


Abstract

Purpose

The great filter and an unfriendly artificial general intelligence might pose existential risks to humanity, but these two risks are anti-correlated. The purpose of this paper is to consider the implications of having evidence that mankind is at significant peril from both these risks.

Design/methodology/approach

This paper creates Bayesian models under which one might get evidence for being at risk for two perils when we know that we are at risk for at most one of these perils.

Findings

Humanity should arguably be more optimistic about its long-term survival if we have convincing evidence that both of these risks are real than if we have such evidence for only one of these perils.

Originality/value

Deriving implications of being greatly concerned about both an unfriendly artificial general intelligence and the great filter.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Keywords

Content available
Article
Publication date: 9 April 2019

Olle Häggström and Catherine Rhodes



Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Book part
Publication date: 6 September 2021

Christian Fuchs

In 2020, the coronavirus crisis ruptured societies and their everyday life around the globe. This chapter is a contribution to critically theorising the changes societies have…

Abstract

In 2020, the coronavirus crisis ruptured societies and their everyday life around the globe. This chapter is a contribution to critically theorising the changes societies have undergone in the light of the coronavirus crisis. It asks: How have everyday life and everyday communication changed in the coronavirus crisis? How does capitalism shape everyday life and everyday communication during this crisis?

This chapter focuses on how social space, everyday life and everyday communication have changed in the coronavirus crisis.

The coronavirus crisis is an existential crisis of humanity and society. It radically confronts humans with death and the fear of death. This collective experience can on the one hand result in new forms of solidarity and socialism or can on the other hand, if ideology and the far-right prevail, advance war and fascism. Political action and political economy are decisive factors in such a profound crisis that shatters society and everyday life.

Book part
Publication date: 13 September 2023

Lukman Raimi and Fatima Mayowa Lukman

Beyond the rhetoric of Nigeria's policymakers, there are multifaceted challenges threatening sustainable development (SD) in Nigeria under climate change (CC). To strengthen…

Abstract

Beyond the rhetoric of Nigeria's policymakers, there are multifaceted challenges threatening sustainable development (SD) in Nigeria under climate change (CC). To strengthen theory and practice, this chapter discusses SD under CC in Nigeria using SWOT analysis. The exploratory focus of this chapter made the qualitative research method, an interpretivist research paradigm, most appropriate. Data sourced from scholarly articles and other secondary resources were reviewed, integrated and synthesised using SWOT analysis. At the end of the SWOT analysis, four insights emerged. The strengths and opportunities of SD under CC include increased awareness and growing access to climate-friendly technologies, sustainable finance, climate-friendly agriculture, solar technologies and renewable energy solutions, among others. The weaknesses and threats include deforestation, unabated gas flaring, rising carbon emissions and exorbitant cost of climate-friendly technologies, among others. The chapter explicates the need for policymakers and regulatory agencies in Nigeria to consolidate the strengths, correct all weaknesses, harness opportunities and avert the looming threats of CC. The chapter contributes to the three themes of SD by affirming that CC comes with devastating consequences that evidently pose existential risks and threats to people, profits and the planet. Consequently, policymakers need to mobilise sufficient resources and capabilities for CC adaptation and mitigation to achieve SD in Nigeria.

Expert briefing
Publication date: 30 April 2021

Plaintiffs range from users, customers, app developers, investors, competitors and employees (current and former) to law enforcement and tax agencies. They are seeking redress for…

Details

DOI: 10.1108/OXAN-DB261220

ISSN: 2633-304X

Keywords

Geographic
Topical
Article
Publication date: 8 October 2018

Karim Jebari and Joakim Lundborg

The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the…


Abstract

Purpose

The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom’s book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions that are of importance to the argument that AI could pose an existential risk and, if so, the character of that risk.

Design/methodology/approach

This paper distinguishes between “intelligence” or the cognitive capacity of an individual and “techne”, a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne has more variance across individuals than human intelligence suggests that if machine techne were to surpass human techne, the transition is likely going to be prolonged rather than explosive.

Findings

Some constraints for the intelligence explosion scenario are presented that imply that AI could be controlled by human organizations.

Originality/value

If true, this argument suggests that efforts should focus on devising strategies to control AI rather than on strategies that assume that such control is impossible.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Keywords

Article
Publication date: 5 September 2018

Alexey Turchin and Brian Patrick Green

Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as…

Abstract

Purpose

Islands have long been discussed as refuges from global catastrophes; this paper evaluates them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years in places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases and other risks. This paper aims to explore how to use the advantages of islands for survival during global catastrophes.

Design/methodology/approach

Preliminary horizon scanning based on the application of research principles established in the prior literature on global catastrophic risks.

Findings

The large number of islands on Earth, and their diverse conditions, increase the chance that one of them will provide protection from a catastrophe. Additionally, this protection could be increased if an island was used as a base for a nuclear submarine refuge combined with underground bunkers and/or extremely long-term data storage. The requirements for survival on islands, their vulnerabilities and ways to mitigate and adapt to risks are explored. Several existing islands, suitable for the survival of different types of risk, timing and budgets, are examined. Islands suitable for different types of refuges and other island-like options that could also provide protection are also discussed.

Originality/value

The possible use of islands as refuges from social collapse and existential risks has not been previously examined systematically. This paper contributes to the expanding research on survival scenarios.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Keywords
