MIRI has a long history of thinking about how to make the arrival of smarter-than-human intelligence go well. With trends in this field increasingly pointing to disaster, in early 2024 MIRI announced a strategy shift: We would communicate our understanding...
As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy. The Objective: Shut it...
As we announced back in October, I have taken on the senior leadership role at MIRI as its CEO. These are big shoes to fill, and an awesome responsibility that I’m honored to take on. There have been...
Today, December 6th, 2023, I participated in the U.S. Senate’s eighth bipartisan AI Insight Forum, which focused on the topic of “Risk, Alignment, & Guarding Against Doomsday Scenarios.” I’d like to thank Leader Schumer, and Senators Rounds, Heinrich, and Young,...
The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as “Anonymous”. I think...
MIRI’s 2020 has been a year of experimentation and adjustment. In response to the COVID-19 pandemic, we largely moved our operations to more rural areas in March, and shifted to a greater emphasis on remote work. We took the opportunity...