Discussion with Eliezer Yudkowsky on AGI interventions

Filed under Analysis, Conversations, MIRI Strategy.

The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as “Anonymous”. I think this Nate Soares quote (excerpted from Nate’s response to a report by Joe Carlsmith) is…

November 2021 Newsletter

Filed under Newsletters.

MIRI updates: MIRI won’t be running a formal fundraiser this year, though we’ll still be participating in Giving Tuesday and other matching opportunities. Visit intelligence.org/donate to donate and to get information on tax-advantaged donations, employer matching, etc. Giving Tuesday takes place on Nov. 30 at 5:00:00am PT. Facebook will 100%-match the first $2M donated — something that took less than…

October 2021 Newsletter

Filed under Newsletters.

Redwood Research is a new alignment research organization that has just launched its website and released an explainer about what it's currently working on. We're quite excited about Redwood's work, and encourage our supporters to consider applying to work there to help boost Redwood's alignment research. MIRI senior researcher Eliezer Yudkowsky writes: Redwood Research is investigating a toy problem in…

September 2021 Newsletter

Filed under Newsletters.

Scott Garrabrant has concluded the main section of his Finite Factored Sets sequence (“Details and Proofs”) with posts on inferring time and applications, future work, and speculation. Scott’s new frameworks are also now available as a pair of arXiv papers: “Cartesian Frames” (adapted from the Cartesian Frames sequence for a philosophy audience by Daniel Hermann and Josiah Lopez-Wild) and…

August 2021 Newsletter

Filed under Newsletters.

MIRI updates: Scott Garrabrant and Rohin Shah debate one of the central questions in AI alignment strategy: whether we should try to avoid human-modeling capabilities in the first AGI systems. Scott gives a proof of the fundamental theorem of finite factored sets. News and links: Redwood Research, a new AI alignment research organization, is seeking an operations lead. Led…

July 2021 Newsletter

Filed under Newsletters.

MIRI updates: MIRI researcher Evan Hubinger discusses learned optimization, interpretability, and homogeneity in takeoff speeds on the Inside View podcast. Scott Garrabrant releases part three of "Finite Factored Sets", on conditional orthogonality. UC Berkeley's Daniel Filan provides examples of conditional orthogonality in finite factored sets in two posts. Abram Demski proposes factoring the alignment problem into "outer alignment"…

June 2021 Newsletter

Filed under Newsletters.

Our big news this month is Scott Garrabrant's finite factored sets, one of MIRI's largest results to date. For most people, the best introductory resource on FFS is likely Scott’s Topos talk/transcript. Scott is also in the process of posting a longer, more mathematically dense introduction in multiple parts: part 1, part 2. Scott has also discussed…

May 2021 Newsletter

Filed under Newsletters.

MIRI senior researcher Scott Garrabrant has a major new result, “Finite Factored Sets,” that he’ll be unveiling in an online talk this Sunday at noon Pacific time. For context on the result, see Scott’s new post “Saving Time.” In other big news, MIRI has just received its two largest individual donations of all…