Posts By: Eliezer Yudkowsky

Pausing AI Developments Isn't Enough. We Need to Shut it All Down
(Published in TIME on March 29.) An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement… Read more »
AGI Ruin: A List of Lethalities
Preamble: (If you’re already familiar with all basics and don’t want any preamble, skip ahead to Section B for technical difficulties of alignment proper.) I have several times failed to write up a well-organized list of reasons why AGI will kill you. People come in with different ideas about why AGI would be survivable, and want to… Read more »
Six Dimensions of Operational Adequacy in AGI Projects
Editor’s note: The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017. A background note: It’s often the case that people are slow to abandon obsolete playbooks in… Read more »
Biology-Inspired AGI Timelines: The Trick That Never Works
– 1988 – Hans Moravec: Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it “Artificial General Intelligence” because this term will not be coined for another 15 years or so. Eliezer (who is not actually on the record as saying this, because the real Eliezer… Read more »
The Rocket Alignment Problem
The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (Somewhere in a not-very-near neighboring world, where science took a very different course…) ALFONSO: Hello, Beth. I’ve noticed a lot of speculation lately about “spaceplanes” being used to attack cities, or possibly becoming infused with malevolent… Read more »
Challenges to Christiano’s capability amplification proposal
The following is a basically unedited summary I wrote up on March 16 of my take on Paul Christiano’s AGI alignment approach (described in “ALBA” and “Iterated Distillation and Amplification”). Where Paul had comments and replies, I’ve included them below. I see a lot of free variables with respect to what exactly Paul might have… Read more »
A reply to Francois Chollet on intelligence explosion
This is a reply to Francois Chollet, the inventor of the Keras wrapper for the TensorFlow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.” In response to critics of his essay, Chollet tweeted: If you post an argument online, and the only opposition you get is braindead arguments and… Read more »
Security Mindset and the Logistic Success Curve
Follow-up to: Security Mindset and Ordinary Paranoia (Two days later, Amber returns with another question.) AMBER: Uh, say, Coral. How important is security mindset when you’re building a whole new kind of system—say, one subject to potentially adverse optimization pressures, where you want it to have some sort of robustness property? CORAL: How novel is the… Read more »