Responses to Catastrophic AGI Risk: A Survey


MIRI is self-publishing another technical report that, at 60 pages, was too lengthy for journal publication: Responses to Catastrophic AGI Risk: A Survey.

The report, co-authored by former MIRI researcher Kaj Sotala and the University of Louisville’s Roman Yampolskiy, surveys the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.

Here is the abstract:

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

The preferred discussion page for the paper is here.

Update: This report has now been published in Physica Scripta, available here.