MIRI Updates

Want to be in the reference class “people who solve the AI alignment problem”? We now have a guide on how to get started, based on our experience of what tends to make research groups successful. (Also on the AI...

CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019. MSFP is an extended retreat for mathematicians and programmers with a serious interest in...

We’ve just released a field guide for MIRIx groups, and for other people who want to get involved in AI alignment research. MIRIx is a program where MIRI helps cover basic expenses for outside groups that want to work on...

Ramana Kumar and Scott Garrabrant argue that the AGI safety community should begin prioritizing “approaches that work well in the absence of human models”: [T]o the extent that human modelling is a good idea, it is important to do...

This is a joint post by Ramana Kumar (MIRI Research Associate and DeepMind Research Scientist) and Scott Garrabrant (MIRI Research Fellow), cross-posted from the AI Alignment Forum and LessWrong. Human values and preferences are hard to specify, especially in complex...

Our 2018 Fundraiser ended on December 31, with the five-week campaign raising $951,817 from 348 donors to help advance MIRI’s mission. We surpassed our Mainline Target ($500k) and made it more than halfway again to our Accelerated Growth Target...
