MIRI Updates

April 2019 Newsletter

Updates: New research posts: Simplified Preferences Needed, Simplified Preferences Sufficient; Smoothmin and Personal Identity; Example Population Ethics: Ordered Discounted Utility; A Theory of Human Values; A Concrete Proposal for Adversarial IDA. MIRI has received a set of new grants from the Open Philanthropy Project and the Berkeley...

New grants from the Open Philanthropy Project and BERI

I’m happy to announce that MIRI has received two major new grants: a two-year grant totaling $2,112,500 from the Open Philanthropy Project, and a $600,000 grant from the Berkeley Existential Risk Initiative. The Open Philanthropy Project’s grant was awarded as part...

March 2019 Newsletter

Want to be in the reference class “people who solve the AI alignment problem”? We now have a guide on how to get started, based on our experience of what tends to make research groups successful. (Also on the AI...

Applications are open for the MIRI Summer Fellows Program!

CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019. MSFP is an extended retreat for mathematicians and programmers with a serious interest in...

A new field guide for MIRIx

We’ve just released a field guide for MIRIx groups, and for other people who want to get involved in AI alignment research. MIRIx is a program where MIRI helps cover basic expenses for outside groups that want to work on...

February 2019 Newsletter

Updates: Ramana Kumar and Scott Garrabrant argue that the AGI safety community should begin prioritizing “approaches that work well in the absence of human models”: [T]o the extent that human modelling is a good idea, it is important to do...
