
Category: Conversations

Yudkowsky on AGI risk on the Bankless podcast

Eliezer gave a very frank overview of his take on AI two weeks ago on the cryptocurrency show Bankless. I’ve posted a transcript of the show and a follow-up Q&A below. Thanks to Andrea_Miotti, remember, and vonk for help posting...

Comments on OpenAI’s "Planning for AGI and beyond"

Sam Altman shared me on a draft of his OpenAI blog post Planning for AGI and beyond, and I left some comments, reproduced below with typos fixed and some added hyperlinks. Where the final version of the OpenAI post differs...

Shah and Yudkowsky on alignment failures

This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn. The discussion begins with summaries and comments...

Ngo and Yudkowsky on scientific reasoning and pivotal acts

This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the Late 2021 MIRI Conversations sequence, following Ngo’s view on alignment difficulty...

Christiano and Yudkowsky on AI predictions and human intelligence

This is a transcript of a conversation between Paul Christiano and Eliezer Yudkowsky, with comments by Rohin Shah, Beth Barnes, Richard Ngo, and Holden Karnofsky, continuing the Late 2021 MIRI Conversations. Color key: Chat by Paul and Eliezer; Other...

Ngo’s view on alignment difficulty

This post features a write-up by Richard Ngo on his views, with inline comments. Color key: Chat; Google Doc content; Inline comments. 13. Follow-ups to the Ngo/Yudkowsky conversation. 13.1. Alignment...