From now through the end of December, MIRI’s 2019 Fundraiser is live! See our fundraiser post for updates on our past year and future plans. One of our biggest updates, I’m happy to announce, is that we’ve hired five new...
I’m happy to announce that Nate Soares and Ben Levinstein’s “Cheating Death in Damascus” has been accepted for publication in The Journal of Philosophy (previously voted the second-highest-quality journal in philosophy). In other news, MIRI researcher Buck Shlegeris has written over...
Updates Ben Pace summarizes a second round of AI Alignment Writing Day posts. The Zettelkasten Method: MIRI researcher Abram Demski describes a note-taking system that’s had a large positive effect on his research productivity. Will MacAskill writes a detailed critique of functional...
Updates We ran a very successful MIRI Summer Fellows Program, which included a day where participants publicly wrote up their thoughts on various AI safety topics. See Ben Pace’s first post in a series of roundups. A few highlights from the writing...
Updates MIRI research associate Stuart Armstrong is offering $1000 for good questions to ask an Oracle AI. Recent AI safety posts from Stuart: Indifference: Multiple Changes, Multiple Agents; Intertheoretic Utility Comparison: Examples; Normalising Utility as Willingness to Pay; and Partial Preferences...
Hubinger et al.’s “Risks from Learned Optimization in Advanced Machine Learning Systems”, one of our new core resources on the alignment problem, is now available on arXiv, the AI Alignment Forum, and LessWrong. In other news, we received an Ethereum...