MIRI Updates

Our all-time largest donation, and major crypto support from Vitalik Buterin

I’m thrilled to announce two major donations to MIRI! First, a long-time supporter has given MIRI by far our largest donation ever: $2.5 million per year over the next four years, and an additional ~$5.6 million in 2025. This...

April 2021 Newsletter

MIRI updates: MIRI researcher Abram Demski writes regarding counterfactuals: I’ve felt like the problem of counterfactuals is “mostly settled” (modulo some math working out) for about a year, but I don’t think I’ve really communicated this online. Partly, I’ve been waiting...

March 2021 Newsletter

MIRI updates: MIRI's Eliezer Yudkowsky and Evan Hubinger comment in some detail on Ajeya Cotra's The Case for Aligning Narrowly Superhuman Models. This conversation touches on some of the more important alignment research views at MIRI, such as the view that alignment requires...

February 2021 Newsletter

MIRI updates: Abram Demski distinguishes different versions of the problem of “pointing at” human values in AI alignment. Evan Hubinger discusses “Risks from Learned Optimization” on the AI X-Risk Research Podcast. Eliezer Yudkowsky comments on AI safety via debate and...

January 2021 Newsletter

MIRI updates: MIRI’s Evan Hubinger uses a notion of optimization power to define whether AI systems are compatible with the strategy-stealing assumption. MIRI’s Abram Demski discusses debate approaches to AI safety that don’t rely on factored cognition. Evan argues that...

December 2020 Newsletter

MIRI COO Malo Bourgon reviews our past year and discusses our future plans in 2020 Updates and Strategy. Our biggest update is that we’ve made less concrete progress than we expected on the new research we described in 2018 Update:...
