MIRI Updates
September 2019 Newsletter
Updates: We ran a very successful MIRI Summer Fellows Program, which included a day where participants publicly wrote up their thoughts on various AI safety topics. See Ben Pace’s first post in a series of roundups. A few highlights from the writing...
August 2019 Newsletter
Updates: MIRI research associate Stuart Armstrong is offering $1000 for good questions to ask an Oracle AI. Recent AI safety posts from Stuart: Indifference: Multiple Changes, Multiple Agents; Intertheoretic Utility Comparison: Examples; Normalising Utility as Willingness to Pay; and Partial Preferences...
July 2019 Newsletter
Hubinger et al.'s “Risks from Learned Optimization in Advanced Machine Learning Systems”, one of our new core resources on the alignment problem, is now available on arXiv, the AI Alignment Forum, and LessWrong. In other news, we received an Ethereum...
New paper: “Risks from learned optimization”
Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have a new paper out: “Risks from learned optimization in advanced machine learning systems.” The paper’s abstract: We analyze the type of learned optimization that occurs when a...
June 2019 Newsletter
Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant have released the first two (of five) posts on “mesa-optimization”: The goal of this sequence is to analyze the type of learned optimization that occurs when a learned...
2018 in review
Our primary focus at MIRI in 2018 was twofold: research—as always!—and growth. Thanks to the incredible support we received from donors the previous year, in 2018 we were able to aggressively pursue the plans detailed in our 2017 fundraiser post....