I’m thrilled to announce that the Open Philanthropy Project has awarded MIRI a three-year $3.75 million general support grant ($1.25 million per year). This grant is, by far, the largest contribution MIRI has received to date, and will have a major effect on our plans going forward.
This grant follows a $500,000 grant we received from the Open Philanthropy Project in 2016. The Open Philanthropy Project’s announcement for the new grant notes that they are “now aiming to support about half of MIRI’s annual budget”.¹ The annual $1.25 million represents 50% of a conservative estimate we provided to the Open Philanthropy Project of the amount of funds we expect to be able to usefully spend in 2018–2020.
This expansion in support was also conditional on our ability to raise the other 50% from other supporters. For that reason, I sincerely thank all of the past and current supporters who have helped us get to this point.
The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding.
We’ll go into more detail on our future organizational plans in a follow-up post on December 1, where we’ll also discuss our end-of-year fundraising goals.
In their write-up, the Open Philanthropy Project notes that they have updated favorably about our technical output since 2016, following our logical induction paper:
We received a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of our close advisors, and (iii) is generally regarded as outstanding by the ML community. As mentioned above, we previously had difficulty evaluating the technical quality of MIRI’s research, and we previously could find no one meeting criteria (i)–(iii) to a comparable extent who was comparably excited about MIRI’s technical research. While we would not generally offer a comparable grant to any lab on the basis of this consideration alone, we consider this a significant update in the context of the original case for the grant (especially MIRI’s thoughtfulness on this set of issues, value alignment with us, distinctive perspectives, and history of work in this area). While the balance of our technical advisors’ opinions and arguments still leaves us skeptical of the value of MIRI’s research, the case for the statement “MIRI’s research has a nontrivial chance of turning out to be extremely valuable (when taking into account how different it is from other research on AI safety)” appears much more robust than it did before we received this review.
The announcement also states, “In the time since our initial grant to MIRI, we have made several more grants within this focus area, and are therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach.”
We’re enormously grateful for the Open Philanthropy Project’s support, and for their deep engagement with the AI safety field as a whole. To learn more about our discussions with the Open Philanthropy Project and their active work in this space, see the group’s previous AI safety grants, our conversation with Daniel Dewey on the Effective Altruism Forum, and the research problems outlined in the Open Philanthropy Project’s recent AI fellows program description.
1. The Open Philanthropy Project usually prefers not to provide more than half of an organization’s funding, to facilitate funder coordination and ensure that organizations it supports maintain their independence. From a March blog post: “We typically avoid situations in which we provide >50% of an organization’s funding, so as to avoid creating a situation in which an organization’s total funding is ‘fragile’ as a result of being overly dependent on us.”