MIRI Updates
A new field guide for MIRIx
We’ve just released a field guide for MIRIx groups, and for other people who want to get involved in AI alignment research. MIRIx is a program where MIRI helps cover basic expenses for outside groups that want to work on...
February 2019 Newsletter
Ramana Kumar and Scott Garrabrant argue that the AGI safety community should begin prioritizing “approaches that work well in the absence of human models”: “[T]o the extent that human modelling is a good idea, it is important to do...
Thoughts on Human Models
This is a joint post by MIRI Research Associate and DeepMind Research Scientist Ramana Kumar and MIRI Research Fellow Scott Garrabrant, cross-posted from the AI Alignment Forum and LessWrong. Human values and preferences are hard to specify, especially in complex...
Our 2018 Fundraiser Review
Our 2018 Fundraiser ended on December 31, with the five-week campaign raising $951,817 from 348 donors to help advance MIRI’s mission. We surpassed our Mainline Target ($500k) and made it more than halfway again to our Accelerated Growth Target...
January 2019 Newsletter
Our December fundraiser was a success, with 348 donors contributing just over $950,000. Supporters leveraged a variety of matching opportunities, including employer matching programs, WeTrust Spring’s Ethereum-matching campaign, Facebook’s Giving Tuesday event, and professional poker players Dan Smith, Aaron Merchak,...
December 2018 Newsletter
Edward Kmett has joined the MIRI team! Edward is a prominent Haskell developer who popularized the use of lenses for functional programming, and he currently maintains many of the libraries built around Haskell's core libraries. I’m also happy to announce another...