AGI Ruin: A List of Lethalities

Filed under Analysis.

Preamble: (If you’re already familiar with all the basics and don’t want any preamble, skip ahead to Section B for technical difficulties of alignment proper.) I have several times failed to write up a well-organized list of reasons why AGI will kill you.  People come in with different ideas about why AGI would be survivable, and want to… Read more »

Six Dimensions of Operational Adequacy in AGI Projects

Filed under Analysis.

Editor’s note:  The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017. A background note: It’s often the case that people are slow to abandon obsolete playbooks in… Read more »

Biology-Inspired AGI Timelines: The Trick That Never Works

Filed under Analysis.

– 1988 – Hans Moravec:  Behold my book Mind Children.  Within, I project that, in 2010 or thereabouts, we shall achieve strong AI.  I am not calling it “Artificial General Intelligence” because this term will not be coined for another 15 years or so. Eliezer (who is not actually on the record as saying this, because the real Eliezer… Read more »

The Rocket Alignment Problem

Filed under Analysis.

The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start.   (Somewhere in a not-very-near neighboring world, where science took a very different course…)   ALFONSO:  Hello, Beth. I’ve noticed a lot of speculations lately about “spaceplanes” being used to attack cities, or possibly becoming infused with malevolent… Read more »

A reply to Francois Chollet on intelligence explosion

Filed under Analysis.

This is a reply to Francois Chollet, the inventor of the Keras wrapper for the TensorFlow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.” In response to critics of his essay, Chollet tweeted:   If you post an argument online, and the only opposition you get is braindead arguments and… Read more »

Security Mindset and the Logistic Success Curve

Filed under Analysis.

Follow-up to:   Security Mindset and Ordinary Paranoia   (Two days later, Amber returns with another question.)   AMBER:  Uh, say, Coral. How important is security mindset when you’re building a whole new kind of system—say, one subject to potentially adverse optimization pressures, where you want it to have some sort of robustness property? CORAL:  How novel is the… Read more »