Responses to catastrophic AGI risk: a survey

Kaj Sotala, Roman V. Yampolskiy

Published 2014 in Physica Scripta

ABSTRACT

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale (‘catastrophic risk’). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to their internal design.

PUBLICATION RECORD

  • Publication year

    2014

  • Venue

    Physica Scripta

  • Fields of study

    Physics, Computer Science, Political Science, Philosophy, Psychology

  • Source metadata

    Semantic Scholar

REFERENCES

285 references

CITED BY

127 citing papers