Blog

Author: Rob Bensinger

Yudkowsky and Soares Announce Major New Book: “If Anyone Builds It, Everyone Dies”

A huge announcement today: Eliezer Yudkowsky and Nate Soares have written a book attempting to raise the alarm about superintelligent AI for the widest possible audience — If Anyone Builds It, Everyone Dies. The book comes out this September 16,...

The basic reasons I expect AGI ruin

I’ve been citing AGI Ruin: A List of Lethalities to explain why the situation with AI looks lethally dangerous to me. But that post is relatively long, and emphasizes specific open technical problems over “the basics”. Here are 10 things...

Yudkowsky on AGI risk on the Bankless podcast

Eliezer gave a very frank overview of his take on AI two weeks ago on the cryptocurrency show Bankless: I’ve posted a transcript of the show and a follow-up Q&A below. Thanks to Andrea_Miotti, remember, and vonk for help posting...

July 2022 Newsletter

MIRI has put out three major new posts: AGI Ruin: A List of Lethalities. Eliezer Yudkowsky lists reasons AGI appears likely to cause an existential catastrophe, and reasons why he thinks the current research community—MIRI included—isn't succeeding at preventing this from...

Shah and Yudkowsky on alignment failures

This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn. The discussion begins with summaries and comments...

Ngo and Yudkowsky on scientific reasoning and pivotal acts

This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the Late 2021 MIRI Conversations sequence, following Ngo’s view on alignment difficulty...