Search results

1 – 3 of 3
Article
Publication date: 27 November 2018

Roman V. Yampolskiy

Abstract

Purpose

The purpose of this paper is to explain to readers how intelligent systems can fail and how artificial intelligence (AI) safety differs from cybersecurity. The goal of cybersecurity is to reduce the number of successful attacks on a system; the goal of AI safety is to ensure that no attack succeeds in bypassing the safety mechanisms. Unfortunately, such a level of performance is unachievable: every security system will eventually fail, and there is no such thing as a 100 per cent secure system.

Design/methodology/approach

AI safety can be improved by building on ideas developed by cybersecurity experts. For narrow AI, safety failures are at the same moderate level of criticality as failures in cybersecurity; for general AI, however, failures have a fundamentally different impact. A single failure of a superintelligent system may cause a catastrophic event with no chance of recovery.

Findings

In this paper, the authors present and analyze reported failures of artificially intelligent systems and extrapolate their analysis to future AIs. They suggest that both the frequency and the seriousness of future AI failures will steadily increase.

Originality/value

This is a first attempt to assemble a public data set of AI failures, one that is extremely valuable to AI safety researchers.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 11 February 2019

Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin and Roman V. Yampolskiy

Abstract

Purpose

This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that civilization takes during the entire future period in which it could continue to exist.

Design/methodology/approach

This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos.

Findings

Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation and astronomical trajectories appear possible.

Originality/value

Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.

Details

foresight, vol. 21 no. 1
Type: Research Article
ISSN: 1463-6689

Article
Publication date: 13 June 2023

Cristian Morosan and Aslıhan Dursun-Cengizci

Abstract

Purpose

This study aims to examine hotel guests’ acceptance of technology agency – the extent to which they would let artificial intelligence (AI)-based systems make decisions for them when staying in hotels. The examination was conducted through the lens of several antecedents of acceptance of technology agency, including perceived ethics, benefits, risks and convenience orientation.

Design/methodology/approach

A thorough literature review provided the foundation for the structural model. Data were collected from 400 US hotel guests, and the model was tested using confirmatory factor analysis followed by structural equation modeling.
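
The abstract does not name the software used for this analysis. As a minimal illustrative sketch only, the following shows what a CFA-then-SEM pipeline of this kind can look like in Python with the semopy library; the construct names follow the abstract, while the indicator variables (eth1, ben1, ...) and the input file are hypothetical stand-ins for the actual survey items and data.

import pandas as pd
import semopy

# Measurement model (the CFA step): each latent construct is measured by
# several observed survey items. Item names here are hypothetical.
MEASUREMENT = """
ethics      =~ eth1 + eth2 + eth3
benefits    =~ ben1 + ben2 + ben3
risks       =~ rsk1 + rsk2 + rsk3
convenience =~ con1 + con2 + con3
acceptance  =~ acc1 + acc2 + acc3
"""

# Structural model: regress acceptance of technology agency on its
# hypothesized antecedents.
STRUCTURAL = MEASUREMENT + """
acceptance ~ ethics + benefits + risks + convenience
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file: one row per respondent

# Step 1: fit the measurement model alone and check its fit (CFA).
cfa = semopy.Model(MEASUREMENT)
cfa.fit(data)
print(semopy.calc_stats(cfa))  # fit indices such as CFI and RMSEA

# Step 2: fit the full structural equation model and read the path estimates.
sem = semopy.Model(STRUCTURAL)
sem.fit(data)
print(sem.inspect())  # loadings, path coefficients and p-values

Fitting the CFA before adding the structural paths separates measurement problems from theory problems: if the indicators do not load cleanly on their constructs, the path estimates in step 2 are not interpretable.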

Findings

The most important determinant of acceptance of technology agency was perceived ethics, followed by benefits. Risks of using AI-based systems to make decisions for consumers had a negative impact on acceptance of technology agency. In addition, perceived loss of competence and unpredictability had relatively strong impacts on risks.

Research limitations/implications

The results provide a conceptual foundation for research on systems that make decisions for consumers. As AI is increasingly incorporated into hotel companies’ business models to make decisions, ensuring that those decisions are perceived as ethical and beneficial for consumers is critical to increasing the utilization of such systems.

Originality/value

Most research on AI in hospitality is either conceptual or focuses on consumers’ intentions to stay in hotels that may be equipped with AI technologies. Occupying a unique position within the literature, this study is the first to discuss AI-based systems that make decisions for consumers. Its value stems from the examination of technology agency, a concept that had not previously been examined in hospitality research.

Details

International Journal of Contemporary Hospitality Management, vol. 36 no. 3
Type: Research Article
ISSN: 0959-6119
