This post reviews MIRI’s activities in 2017, including research, recruiting, exposition, and fundraising.
2017 was a big transitional year for MIRI, as we took on new research projects that rely much more heavily on hands-on programming work and experimentation. We’ve continued these projects in 2018, and they’re described in more detail in our 2018 update. This shift meant a major focus on laying the groundwork for much faster growth than we’ve had in the past, including setting up infrastructure and changing how we recruit so that we reach more people with engineering backgrounds.
At the same time, 2017 was our best year to date for fundraising, as we saw a significant increase in support both from the Open Philanthropy Project and from the cryptocurrency community, which responded to the crypto boom with a great deal of generosity toward us. This put us in an excellent position to move ahead confidently with our plans, and to focus more of our effort on technical research and growth.
This year’s review is coming out far later than usual, for which I apologize. One of the main reasons is that a catalogue of our 2017 activities would have been much less informative if I couldn’t cite our 2018 update, which explains a lot of the reasoning behind our new work and how the things we’re doing relate to each other. I’m sorry for any inconvenience this has caused people trying to track what MIRI has been up to; I plan to have our next annual review out much earlier, in the first quarter of 2019.
2017 Research Progress
As described in our 2017 organizational update and elaborated on in much more detail in our recent 2018 update, 2017 saw a significant shift in where we’re putting our research efforts. Although an expanded version of the Agent Foundations agenda continues to be a major focus at MIRI, we’re also now tackling a new set of alignment research directions that lend themselves more to code experiments.
Since early 2017, we’ve increasingly adopted a policy of not disclosing many of our research results, which means that less of our new output is publicly available. Some of our work in 2017 (and 2018) has continued to be made public, however, including research posted on the AI Alignment Forum.
In 2017, Scott Garrabrant refactored our Agent Foundations agenda into four new categories: decision theory, embedded world-models, robust delegation, and subsystem alignment. Abram Demski and Scott have now co-written an introduction to these four problems, considered as different aspects of the larger problem of “Embedded Agency.”
Comparing our predictions (from March 2017[1]) to our progress over 2017, and using a 1–5 scale where 1 means “limited” progress, 3 means “modest” progress, and 5 means “sizable” progress, we get the following retrospective take on our public-facing research progress:
Decision theory
- 2015 progress: 3. (Predicted: 3.)
- 2016 progress: 3. (Predicted: 3.)
- 2017 progress: 3. (Predicted: 3.)
Our most significant 2017 results include posing and solving a version of the converse Lawvere problem;[2] developing cooperative oracles; and improving our understanding of how causal decision theory relates to evidential decision theory (e.g., in Smoking Lesion Steelman).
We also released a number of introductory resources on decision theory, including “Functional Decision Theory” and Decisions Are For Making Bad Outcomes Inconsistent.
Embedded world-models
- 2015 progress: 5. (Predicted: 3.)
- 2016 progress: 5. (Predicted: 3.)
- 2017 progress: 2. (Predicted: 2.)
Key 2017 results in this area include the finding that logical inductors that can see each other dominate each other and, as a corollary, that logical inductor limits dominate each other.
Beyond that, Scott Garrabrant reports that Hyperreal Brouwer shifted his thinking significantly with respect to probabilistic truth predicates, reflective oracles, and logical inductors. Additionally, Vanessa Kosoy’s “Forecasting Using Incomplete Models” built on our previous work on logical inductors to create a cleaner (purely learning-theoretic) formalism for modeling complex environments, showing that the methods developed in “Logical Induction” are useful for applications in classical sequence prediction unrelated to logic.
Robust delegation
- 2015 progress: 3. (Predicted: 3.)
- 2016 progress: 4. (Predicted: 3.)
- 2017 progress: 4. (Predicted: 1.)
We made significant progress on the tiling problem, and also clarified our thinking about Goodhart’s Law (see “Goodhart Taxonomy”). Other noteworthy work in this area includes Vanessa Kosoy’s Delegative Inverse Reinforcement Learning framework, Abram Demski’s articulation of “stable pointers to value” as a central desideratum for value loading, and Ryan Carey’s “Incorrigibility in the CIRL Framework.”
Subsystem alignment (new category)
One of the more significant research shifts at MIRI in 2017 was orienting toward the subsystem alignment problem in the first place, following discussions such as Eliezer Yudkowsky’s Optimization Daemons, Paul Christiano’s What Does the Universal Prior Actually Look Like?, and Jessica Taylor’s Some Problems with Making Induction Benign. Our high-level thoughts about this problem can be found in Scott Garrabrant and Abram Demski’s recent write-up.
2017 also saw a reduction in our focus on the Alignment for Advanced Machine Learning Systems (AAMLS) agenda. Although we view these problems as highly important, and continue to revisit them regularly, we’ve found AAMLS work to be less obviously tractable than our other research agendas thus far.
On the whole, we continue (as of the end of 2018) to be very excited about the avenues of attack on alignment that we started exploring in earnest in 2017, both with respect to embedded agency and with respect to our newer lines of research.
2017 Research Support Activities
As discussed in our 2018 Update, the new lines of research we’re tackling are much easier to hire for than has been the case for our Agent Foundations research:
This work seems to “give out its own guideposts” more than the Agent Foundations agenda does. While we used to require extremely close fit of our hires on research taste, we now think we have enough sense of the terrain that we can relax those requirements somewhat. We’re still looking for hires who are scientifically innovative and who are fairly close on research taste, but our work is now much more scalable with the number of good mathematicians and engineers working at MIRI.
For that reason, one of our top priorities in 2017 (continuing into 2018) was to set MIRI up to be able to undergo major, sustained growth. We’ve been helped substantially in ramping up our recruitment by Blake Borgeson, a Nature-published computational biologist (and now a MIRI board member) who previously co-founded Recursion Pharmaceuticals and led its machine learning work as CTO.
Concretely, in 2017 we:
- Hired research staff including Sam Eisenstat, Abram Demski, Tsvi Benson-Tilsen, Jesse Liptrap, and Nick Tarleton.
- Ran the AI Summer Fellows Program with CFAR.
- Ran three research workshops, on the Agent Foundations agenda, the AAMLS agenda, and Paul Christiano’s research agenda. We also ran a large number of internal research retreats and other events.
- Ran software engineer trials where participants spent the summer training up to become research engineers, resulting in a hire.
2017 Conversations and Exposition
One of our 2017 priorities was to sync up with other existential risk and AI safety groups and compare models of the strategic landscape. For snapshots of some of these discussions over the years, see Daniel Dewey’s thoughts on HRAD and Nate’s response; and, more recently, Eliezer Yudkowsky and Paul Christiano’s conversations about Paul’s research proposals.
We also did a fair amount of public dialoguing, exposition, and outreach in 2017. On that front we:
- Released Inadequate Equilibria, a book by Eliezer Yudkowsky on group- and system-level inefficiencies, and when individuals can hope to do better than the status quo.
- Produced research exposition: On Motivations for MIRI’s Highly Reliable Agent Design Research; Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome; Security Mindset and Ordinary Paranoia; and Security Mindset and the Logistic Success Curve.
- Produced strategy and forecasting exposition: Response to Cegłowski on Superintelligence; There’s No Fire Alarm for Artificial General Intelligence; AlphaGo Zero and the Foom Debate; Why We Should Be Concerned About Artificial Superintelligence; and A Reply to Francois Chollet on Intelligence Explosion.
- Received press coverage in The Huffington Post, Vanity Fair, Nautilus, and Wired. We were also interviewed for Mark O’Connell’s To Be A Machine and Richard Clarke and R.P. Eddy’s Warnings: Finding Cassandras to Stop Catastrophes.
- Spoke at the O’Reilly AI Conference, and on panels at the Beneficial AI conference and Effective Altruism Global (1, 2).
- Presented papers at TARK 2017, FEW 2017, and the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, and published the Agent Foundations agenda in The Technological Singularity: Managing the Journey.
- Participated in other events, including the “Envisioning and Addressing Adverse AI Outcomes” workshop at Arizona State and the UCLA Colloquium on Catastrophic and Existential Risk.
2017 Finances
Fundraising
2017 was by far our best fundraising year to date. We raised a total of $5,849,500, more than 2.5× what we raised in 2016.[3] During our annual fundraiser, we also raised double our highest target. We are very grateful for this incredible show of support. This unexpected fundraising success enabled us to move forward with our growth plans with a lot more confidence, and boosted our recruiting efforts in a variety of ways.[4]
The large increase in funding we saw in 2017 was significantly driven by:
- A large influx of cryptocurrency contributions, which made up ~42% of our total contributions in 2017. The largest of these were:
  - $1.01M in ETH from an anonymous donor.
  - $764,970 in ETH from Vitalik Buterin, the inventor and co-founder of Ethereum.
  - $367,575 in BTC from Christian Calderon.
  - $295,899 in BTC from professional poker players Dan Smith, Tom Crowley, and Martin Crowley as part of their Matching Challenge in partnership with Raising for Effective Giving.
- Other contributions, including:
  - A $1.25M grant disbursement from the Open Philanthropy Project, significantly increased from the $500k grant they awarded MIRI in 2016.
  - $200k in grants from the Berkeley Existential Risk Initiative.
As the graph below shows, although our fundraising has increased year over year since 2014, 2017 looks very much like an outlier relative to our previous growth rate. The difference was largely driven by the influx of cryptocurrency contributions, but even excluding those contributions, we raised ~$3.4M, 1.5× what we raised in 2016.[5]
(In this chart and those that follow, “Unlapsed” indicates contributions from past supporters who did not donate in the previous year.)
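For readers who want to check the arithmetic, here’s a minimal sketch using only the rounded figures quoted in this post (the ~42% crypto share is approximate, so the outputs are rough):

```python
# Sanity-check the 2017 fundraising figures quoted above.
# All inputs are the rounded numbers stated in this post.

total_2017 = 5_849_500        # total raised in 2017
crypto_share = 0.42           # crypto's approximate share of 2017 contributions

crypto_2017 = crypto_share * total_2017
non_crypto_2017 = total_2017 - crypto_2017
print(f"Crypto contributions: ~${crypto_2017:,.0f}")
print(f"Excluding crypto:     ~${non_crypto_2017:,.0f}")  # ~$3.4M, as stated

# The four large crypto gifts listed above account for most of the crypto total.
large_gifts = 1_010_000 + 764_970 + 367_575 + 295_899
print(f"Four largest gifts:   ${large_gifts:,}")          # ~$2.44M of ~$2.46M

# The two stated ratios (2.5x overall, 1.5x excluding crypto) imply roughly
# consistent 2016 totals, as a cross-check:
print(f"2016 implied by 2.5x: ~${total_2017 / 2.5:,.0f}")
print(f"2016 implied by 1.5x: ~${non_crypto_2017 / 1.5:,.0f}")
```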
While the largest contributions drove the overall trend, we saw growth in both the number of contributors and the amount contributed across all contribution sizes.
In 2017 we received contributions from 745 unique contributors, 38% more than in 2016 and nearly as many as in 2014, when we participated in SVGives.
The share of our support coming from international contributors increased from 20% in 2016 to 42% in 2017. This increase was largely driven by the $1.01M ETH donation, but support still increased from 20% to 25% if we ignore that donation. Since late 2016, we’ve been working hard to find ways for our international supporters to contribute in a tax-advantaged manner, and I expect this percentage to increase substantially in 2018 as a result.[6]
Spending
In our 2016 fundraiser post, we projected that we’d spend $2–2.2M in 2017. Later in 2017, we revised this estimate to $2.1–2.5M (with a point estimate of $2.25M), along with a breakdown across our major budget categories.
Overall, our projections were fairly accurate. Total spending came in at just below $2.1M. The graph below compares our actual spending with our projections.[7]
The largest deviation from our projected spending occurred because the researchers who had been working on our AAMLS agenda moved on (on good terms) to other projects.
For past annual reviews, see: 2016, 2015, 2014, and 2013; and for more recent information on what we’ve been up to following 2017, see our 2018 update and fundraiser posts.
1. These predictions have been edited to match Scott’s terminology changes, as described in 2018 research plans and predictions. ↩
2. Scott coined the name for this problem in his post The Ubiquitous Converse Lawvere Problem. ↩
3. Note that amounts in this section may vary slightly from our audited financial statements, due to small differences between how we track donations internally and how we are required to report them in our financial statements. ↩
4. See our 2018 fundraiser post for more information. ↩
5. This is similar to 2013, when 33% of our contributions came from a single Ripple donation from Jed McCaleb. ↩
6. A big thanks to Colm for all the work he’s put into this; have a look at our Tax-Advantaged Donations page for more information. ↩
7. Our subsequent budget projections have used a simpler set of major budget categories. To keep the comparison with actual spending consistent, I’ve translated our 2017 budget projections into this new scheme. ↩