Grants and fundraisers


Two big announcements today:

1. MIRI has won $299,310 from the Future of Life Institute’s grant program to jumpstart the field of long-term AI safety research.

  • $250,000 will go to our research program over the course of three years, funding workshops and several person-years of research on the open problems discussed in our technical agenda.
  • $49,310 will go towards AI Impacts, a project which aims to shed light on the implications of advanced artificial intelligence using empirical data and rigorous analysis.

MIRI will also collaborate with the principal investigators on two other large FLI grants:

  • $227,212 has been awarded to Owain Evans at the Future of Humanity Institute to develop algorithms that learn human preferences from data despite human irrationalities. This will be carried out in collaboration with Jessica Taylor, who will become a MIRI research fellow at the end of this summer.
  • $36,750 has been awarded to Ramana Kumar at Cambridge University to study self-reference in the HOL theorem prover. This will be done in collaboration with MIRI research fellow Benja Fallenstein.

The money comes from Elon Musk’s extraordinary donation of $10M to fund FLI’s first-of-its-kind grant competition for research aimed at keeping AI technologies beneficial as capabilities improve.

This funding, which comes on the heels of the recently concluded payments from our sale of the Singularity Summit and an extremely generous surprise donation from Jed McCaleb at the end of 2013, means we can continue to ramp up our research efforts. That doesn’t mean our job is done, of course. In January, shortly after the FLI conference, we came to the conclusion that the funding situation for our field was set to improve, and decided to start gearing up for growth. That prediction has turned out to be correct, which puts us in an excellent position.

We’re now, indeed, set to grow—the only question is, “How quickly?” Which brings me to announcement number two.

2. Our summer fundraiser is starting in mid-July, and we’re going to try something new.

Every summer for the past few years, MIRI has run a matching fundraiser, where we get some of our biggest donors to pledge their donations conditional upon your support. Conventional wisdom states that matching fundraisers make it easier to raise funds, and MIRI has had a lot of success with them in the past. They seem to be an excellent way to get donors excited, and the deadline helps create a sense of urgency.

However, a few different people, including the folks over at GiveWell and effective altruism writer Ben Kuhn, have voiced skepticism about the effectiveness of matching fundraisers. Most of our large donors are happy to donate regardless of whether we raise matching funds, and matching fundraisers tend to put the focus on interactions between small and large donors, rather than on the exciting projects that we could be running with sufficient funding.

Our experience has been that our donors are exceptionally thoughtful, and that many have already considered how (and how quickly) they want MIRI to grow. So for this fundraiser, we’d like to give you more resources to make an informed decision about where to send your money, with a clearer picture of how different levels of funding will affect our operations.

Details are forthcoming mid-July, along with a whole lot more information about what we’ve been up to and what we have planned.

As always, thanks for everything: it’s exciting to receive one of the very first grants in this burgeoning field, and we haven’t forgotten that it’s only thanks to your support that the field has made it this far in the first place.

  • Ryan Carey

    These funds and collaborations seem like ideal news to show that the leadership transition has gone smoothly 🙂

  • Kieran Brown

    Oosh. While every person who uses a computer ‘should’ be actively involved in this research, it just goes to show that there are people who think irrationality should not be part of a process, when that is how rational functions are derived. Some people more than others fail to understand people…

  • Mindey

    “funding a few person-years of research” — why can’t a person just give their person-years for a purpose they support? Why people don’t seem to own their time?