Category: Conversations

Ngo and Yudkowsky on AI capability gains

This is the second post in a series of transcribed conversations about AGI forecasting and alignment. See the first post for prefaces and more information about the format....

Ngo and Yudkowsky on alignment difficulty

This post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares. We’ve also added Richard and Nate’s running summaries of the conversation (and others’ replies) from Google Docs....

Discussion with Eliezer Yudkowsky on AGI interventions

The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as “Anonymous”. I think...

Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”

MIRI senior researcher Eliezer Yudkowsky was recently invited to be a guest on Sam Harris’ “Waking Up” podcast. Sam is a neuroscientist and popular author who writes on topics related to philosophy, religion, and public discourse. The following is a...

Decisions are for making bad outcomes inconsistent

Nate Soares’ recent decision theory paper with Ben Levinstein, “Cheating Death in Damascus,” prompted some valuable questions and comments from an acquaintance (anonymized here). I’ve put together edited excerpts from the commenter’s email below, with Nate’s responses. The discussion concerns...

John Horgan interviews Eliezer Yudkowsky

Scientific American writer John Horgan recently interviewed MIRI’s senior researcher and co-founder, Eliezer Yudkowsky. The email interview touched on a wide range of topics, from politics and religion to existential risk and Bayesian models of rationality. Although Eliezer isn’t speaking...