Blog

Author: Eliezer Yudkowsky

Suppose that frontier AI development is centralized into a single project under tight international controls, with all other development banned internationally. By far the likeliest outcome of this is that we all die. A centralized group of international researchers —...

Crossposted from Twitter with Eliezer’s permission. A common claim among e/accs is that, since the solar system is big, Earth will be left alone by superintelligences. A simple rejoinder is that just because Bernard Arnault has $170 billion, does...

(Published in TIME on March 29.) An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better...

Preamble: (If you’re already familiar with all basics and don’t want any preamble, skip ahead to Section B for technical difficulties of alignment proper.) I have several times failed to write up a well-organized list of reasons why AGI will...

Editor’s note: The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017....

– 1988 – Hans Moravec: Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it “Artificial General Intelligence” because this term will not be coined for another...