April 2021 Newsletter

Filed under Newsletters.

MIRI updates MIRI researcher Abram Demski writes regarding counterfactuals: I've felt like the problem of counterfactuals is "mostly settled" (modulo some math working out) for about a year, but I don't think I've really communicated this online. Partly, I've been waiting to write up more formal results. But other research has taken up most of my… Read more »

March 2021 Newsletter

MIRI updates MIRI's Eliezer Yudkowsky and Evan Hubinger comment in some detail on Ajeya Cotra's The Case for Aligning Narrowly Superhuman Models. This conversation touches on some of the more important alignment research views at MIRI, such as the view that alignment requires a thorough understanding of AGI systems' reasoning "under the hood", and the view that early AGI systems should most likely avoid human… Read more »

February 2021 Newsletter

MIRI updates Abram Demski distinguishes different versions of the problem of “pointing at” human values in AI alignment. Evan Hubinger discusses “Risks from Learned Optimization” on the AI X-Risk Research Podcast. Eliezer Yudkowsky comments on AI safety via debate and Goodhart’s law. MIRI supporters donated ~$135k on Giving Tuesday, of which ~26% was matched by Facebook and ~28% by employers… Read more »

January 2021 Newsletter

MIRI updates MIRI’s Evan Hubinger uses a notion of optimization power to define whether AI systems are compatible with the strategy-stealing assumption. MIRI’s Abram Demski discusses debate approaches to AI safety that don’t rely on factored cognition. Evan argues that the first AGI systems are likely to be very similar to each other, and discusses… Read more »

December 2020 Newsletter

MIRI COO Malo Bourgon reviews our past year and discusses our future plans in 2020 Updates and Strategy. Our biggest update is that we've made less concrete progress than we expected on the new research we described in 2018 Update: Our New Research Directions. As a consequence, we're scaling back our work on these research… Read more »

November 2020 Newsletter

MIRI researcher Scott Garrabrant has completed his Cartesian Frames sequence. Scott also covers the first two posts' contents in video form. Other MIRI updates Contrary to my previous announcement, MIRI won’t be running a formal fundraiser this year, though we’ll still be participating in Giving Tuesday and other matching opportunities. To donate and get information on tax-advantaged… Read more »

October 2020 Newsletter

Starting today, Scott Garrabrant has begun posting Cartesian Frames, a sequence introducing a new conceptual framework Scott has found valuable for thinking about agency. In Scott's words: Cartesian Frames are "applying reasoning like Pearl's to objects like game theory's, with a motivation like Hutter's". Scott will be giving an online talk introducing Cartesian Frames this… Read more »

September 2020 Newsletter

Abram Demski and Scott Garrabrant have made a major update to "Embedded Agency", with new discussions of ε-exploration, Newcomblike problems, reflective oracles, logical uncertainty, Goodhart's law, and predicting rare catastrophes, among other topics. Abram has also written an overview of what good reasoning looks like in the absence of Bayesian updating: Radical Probabilism. One recurring theme: [I]n general… Read more »