Blog

Author: Eliezer Yudkowsky

The Sun is big, but superintelligences will not spare Earth a little sunlight

Crossposted from Twitter with Eliezer’s permission. A common claim among e/accs is that, since the solar system is big, Earth will be left alone by superintelligences. A simple rejoinder is that just because Bernard Arnault has $170 billion, does...

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

(Published in TIME on March 29.) An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better...

AGI Ruin: A List of Lethalities

Preamble: (If you’re already familiar with all the basics and don’t want any preamble, skip ahead to Section B for technical difficulties of alignment proper.) I have several times failed to write up a well-organized list of reasons why AGI will...

Six Dimensions of Operational Adequacy in AGI Projects

Editor’s note: The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017....

Biology-Inspired AGI Timelines: The Trick That Never Works

– 1988 – Hans Moravec: Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it “Artificial General Intelligence” because this term will not be coined for another...

The Rocket Alignment Problem

The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (Somewhere in a not-very-near neighboring world, where science took a very different course…) ALFONSO: Hello, Beth. I’ve noticed a lot of...