Updates to the research team, and a major donation


We have several major announcements to make, covering new developments in the two months since our 2017 strategy update:

1. On May 30th, we received a surprise $1.01 million donation from an Ethereum cryptocurrency investor. This is the single largest contribution we have received to date by a large margin, and will have a substantial effect on our plans over the coming year.

2. Two new full-time researchers are joining MIRI: Tsvi Benson-Tilsen and Abram Demski. This comes in the wake of Sam Eisenstat and Marcello Herreshoff’s addition to the team in May. We’ve also begun working with engineers on a trial basis for our new slate of software engineer job openings.

3. Two of our researchers have recently left MIRI: Patrick LaVictoire and Jessica Taylor, who previously headed work on our “Alignment for Advanced Machine Learning Systems” (AAMLS) research agenda.

For more details, see below.


1. Fundraising

The major donation we received at the end of May, totaling $1,006,549, comes from a long-time supporter who had donated roughly $50k to our research programs over many years. This supporter has asked to remain anonymous.

The first half of this year has been the most successful in MIRI’s fundraising history, with other notable contributions including Ethereum donations from investor Eric Rogstad totaling ~$22k and a ~$67k donation from Octane AI co-founder Leif K-Brooks as part of a Facebook Reactions challenge. In total, we’ve raised about $1.45M in the first half of 2017.

We’re thrilled and extremely grateful for this show of support. This fundraising success has increased our runway to around 18–20 months, giving us more leeway to trial potential hires and focus on our research and outreach priorities this year.

Concretely, we have already made several plan adjustments as a consequence, including:

  • moving forward with more confidence on full-time researcher hires,
  • trialing more software engineers, and
  • deciding to run only one fundraiser this year, in the winter.[1]

This is likely a one-time outlier donation, similar to the $631k in cryptocurrency donations we received from Ripple developer Jed McCaleb in 2013–2014.[2] Looking ahead to our funding goals over the next two years:

  • While we still have some uncertainty about our 2018 budget, our current point estimate is roughly $2.5M.
  • This year, between support from the Open Philanthropy Project, the Future of Life Institute, and other sources, we expect to receive at least an additional $600k without spending significant time on fundraising.
  • Our tentative (ambitious) goal for the rest of the year is to raise an additional $950k, or $3M in total (see the arithmetic sketched below). This would be sufficient for our 2018 budget even if we expand our engineering team more quickly than expected, and would give us a bit of a buffer to account for uncertainty in our future fundraising (in particular, uncertainty about whether the Open Philanthropy Project will continue its support after 2017).
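
For concreteness, here is the arithmetic behind that $3M figure, using only the numbers quoted above (the ~$1.45M raised so far this year, the at-least-$600k we expect from existing sources, and the additional $950k goal):

\[
\underbrace{\$1.45\text{M}}_{\text{raised so far}} + \underbrace{\$0.60\text{M}}_{\text{expected}} + \underbrace{\$0.95\text{M}}_{\text{remaining goal}} = \$3.00\text{M},
\]

which would cover our roughly $2.5M point estimate for the 2018 budget with about a $500k buffer.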

On a five-year timescale, our broad funding goals are:[3]

  • On the low end, once we finish growing our team over the course of a few years, our default expectation is that our operational costs will be roughly $4M per year, mostly supporting researcher and engineer salaries. Our goal is therefore to reach that level in a sustainable, stable way.
  • On the high end, it’s possible to imagine scenarios involving an order-of-magnitude increase in our funding, in which case we would develop a qualitatively different set of funding goals reflecting the fact that we would most likely substantially restructure MIRI.
  • For funding levels in between (roughly $4M–$10M per year), we would likely not expand our current operations further. Instead, we might fund work outside of our current research, after considering how well-positioned we are to identify and fund various projects, including MIRI-external projects. We consider it reasonably likely that we are in a good position for this; if we instead concluded that our donors (or other organizations) were better positioned to respond to surprise funding opportunities in the AI alignment space, we would recommend that donors direct additional donations elsewhere.[4]

A major one-off donation at the $1M level like this one covers nearly half of our current annual budget, which makes a substantial difference to our one- and two-year plans. Our five-year plans, however, rest on assumptions about multi-year funding flows, so how aggressively we plan our growth in response to this donation depends largely on whether we can sustainably raise funds at the level of the above goals in future years (e.g., on whether and how other donors change their level of support in response).

To reduce the uncertainty going into our expansion decisions, we’re encouraging more of our regular donors to sign up for monthly donations or other recurring giving schedules; under 10% of our income currently comes from such donations, which limits our ability to plan.[5] We also encourage supporters to reach out to us about their future donation plans, so that we can answer questions and plan more concretely and ambitiously.


2. New hires

Meanwhile, two new full-time researchers are joining our team after having previously worked with us as associates while based at other institutions.

 

Abram Demski, who is joining MIRI as a research fellow this month, is completing a PhD in Computer Science at the University of Southern California. His research to date has focused on cognitive architectures and artificial general intelligence. He is interested in filling in the gaps in formal theories of rationality, especially those concerned with what humans are doing when reasoning about mathematics.

Abram made key contributions to the MIRIxLosAngeles work that produced precursor results to logical induction. His other past work with MIRI includes “Generalizing Foundations of Decision Theory” and “Computable Probability Distributions Which Converge on Believing True Π1 Sentences Will Disbelieve True Π2 Sentences.”

 

Tsvi Benson-Tilsen has joined MIRI as an assistant research fellow. Tsvi holds a BSc in Mathematics with honors from the University of Chicago, and is on leave from the UC Berkeley Group in Logic and the Methodology of Science PhD program.

Prior to joining MIRI’s research staff, Tsvi was a co-author on “Logical Induction” and “Formalizing Convergent Instrumental Goals,” and also authored “Updateless Decision Theory With Known Search Order” and “Existence of Distributions That Are Expectation-Reflective and Know It.” Tsvi’s research interests include logical uncertainty, logical counterfactuals, and reflectively stable decision-making.

 

We’ve also accepted our first six software engineers for 3-month visits. We are continuing to review applications, and in light of the generous support we recently received and the strong pool of candidates so far, we are likely to trial more engineers than we’d previously planned.

In other news, going forward, Scott Garrabrant will act as research lead for MIRI’s agent foundations research, handling more of the day-to-day work of coordinating and directing research team efforts.


3. The AAMLS agenda

Our AAMLS research was previously the focus of Jessica Taylor, Patrick LaVictoire, and Andrew Critch, all of whom joined MIRI in mid-2015. With Patrick and Jessica departing (on good terms) and Andrew on a two-year leave to work with the Center for Human-Compatible AI, we will be putting relatively little work into the AAMLS agenda over the coming year.

We continue to see the problems described in the AAMLS agenda as highly important, and expect to reallocate more attention to these problems in the future. Additionally, we see the AAMLS agenda as a good template for identifying safety desiderata and promising alignment problems. However, we did not see enough progress on AAMLS problems over the last year to conclude that we should currently prioritize this line of research over our other work (e.g., our agent foundations research on problems such as logical uncertainty and counterfactual reasoning). As a partial consequence, MIRI’s current research staff do not plan to make AAMLS research a high priority in the near future.

Jessica, the project lead, describes some of her takeaways from working on AAMLS:

[…] Why was little progress made?

[1.] Difficulty

I think the main reason is that the problems were very difficult. In particular, they were mostly selected on the basis of “this seems important and seems plausibly solvable”, rather than any strong intuition that it’s possible to make progress.

In comparison, problems in the agent foundations agenda have seen more progress:

  • Logical uncertainty (Definability of truth, reflective oracles, logical inductors)
  • Decision theory (Modal UDT, reflective oracles, logical inductors)
  • Vingean reflection (Model polymorphism, logical inductors)

One thing to note about these problems is that they were formulated on the basis of a strong intuition that they ought to be solvable. Before logical induction, it was possible to have the intuition that some sort of asymptotic approach could solve many logical uncertainty problems in the limit. It was also possible to have a strong intuition that some sort of self-trust is possible.

With problems in the AAMLS agenda, the plausibility argument was something like:

  • Here’s an existing, flawed approach to the problem (e.g. using a reinforcement signal for environmental goals, or modifications of this approach)
  • Here’s a vague intuition about why it’s possible to do better (e.g. humans do a different thing)

which, empirically, turned out not to make for tractable research problems.

[2.] Going for the throat

In an important sense, the AAMLS agenda is “going for the throat” to a greater extent than other agendas (e.g. the agent foundations agenda): it is attempting to solve the whole alignment problem (including goal specification) given access to resources such as powerful reinforcement learning. Thus, the difficulties of the whole alignment problem (e.g. specification of environmental goals) are more exposed in the problems.

[3.] Theory vs. empiricism

Personally, I strongly lean towards preferring theoretical rather than empirical approaches. I don’t know how much I endorse this bias overall for the set of people working on AI safety as a whole, but it is definitely a personal bias of mine.

Problems in the AAMLS agenda turned out not to be very amenable to purely-theoretical investigation. This is probably due to the fact that there is not a clear mathematical aesthetic for determining what counts as a solution (e.g. for the environmental goals problem, it’s not actually clear that there’s a recognizable mathematical statement for what the problem is).

With the agent foundations agenda, there’s a clearer aesthetic for recognizing good solutions. Most of the problems in the AAMLS agenda have a less-clear aesthetic. […]

For more details, see Jessica’s retrospective on the Intelligent Agent Foundations Forum.

More work would need to go into AAMLS before we reached confident conclusions about the tractability of these problems. However, the lack of initial progress provides some evidence that new tools or perspectives may be needed before significant progress is possible. Over the coming year, we will therefore continue to spend some time thinking about AAMLS, but will not make it a major focus.

We continue to actively collaborate with Andrew on MIRI research, and expect to work with Patrick and Jessica more in the future as well. Jessica and Andrew in particular intend to continue to focus on AI safety research, including work on AI strategy and coordination.

We’re grateful for everything Jessica and Patrick have done to advance our research program and our organizational mission over the past two years, and I’ll personally miss having both of them around.

 

In general, I’m feeling really good about MIRI’s position right now. From our increased financial security and ability to more ambitiously pursue our plans, to the new composition and focus of the research team, the new engineers who are spending time with us, and the growth of the research that they’ll support, things are moving forward quickly and with purpose. Thanks to everyone who has contributed, is contributing, and will contribute in the future to help us do the work here at MIRI.

 


  1. More generally, this will allow us to move forward confidently with the different research programs we consider high-priority, without needing to divert as many resources from other projects to support our top priorities. This should also allow us to make faster progress on the targeted outreach writing we mentioned in our 2017 update, since we won’t have to spend staff time on writing and outreach for a summer fundraiser. 
  2. Of course, we’d be happy if these large donations looked less like outliers in the long run. If readers are looking for something to do with digital currency they might be holding onto after the recent surges, know that we gratefully accept donations of many digital currencies! In total, MIRI has raised around $1.85M in cryptocurrency donations since mid-2013. 
  3. These plans are subject to substantial change. In particular, an important source of variance in our plans is how our new non-public-facing research progresses, where we’re likely to take on more ambitious growth goals if our new work looks like it’s going well. 
  4. We would also likely increase our reserves in this scenario, allowing us to better adapt to unexpected circumstances, and there is a smaller probability that we would use these funds to grow moderately more than currently planned without a significant change in strategy.
  5. Less frequent (e.g., quarterly) donations are also quite helpful from our perspective, if we know about them in advance and so can plan around them. In the case of donors who plan to give at least once per year, predictability is much more important from our perspective than frequency.