Responses to catastrophic AGI risk: a survey
Published 2014 in Physica Scripta
ABSTRACT
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
PUBLICATION RECORD
- Publication year: 2014
- Venue: Physica Scripta
- Fields of study: Physics, Computer Science, Political Science, Philosophy, Psychology
- Source metadata: Semantic Scholar