An Astounding Year


It’s safe to say that this past year exceeded a lot of people’s expectations.

Twelve months ago, Nick Bostrom’s Superintelligence had just been published. Long-term questions about smarter-than-human AI systems were simply not a part of mainstream discussions about the social impact of AI, and fewer than five people were working on the AI alignment challenge full-time.

Twelve months later, we live in a world where Elon Musk, Bill Gates, and Sam Altman readily cite Superintelligence as a guide to the questions the field should be asking about AI’s future. For Gates, the researchers who aren’t concerned about advanced AI systems are the ones who now need to explain their views:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

As far as I can tell, the turning point occurred in January 2015, when Max Tegmark and the newly formed Future of Life Institute organized a “Future of AI” conference in San Juan, Puerto Rico to bring together top AI academics, top research groups from industry, and representatives of the organizations studying long-term AI risk.

The atmosphere at the Puerto Rico conference was electric. I stepped off the plane expecting to field objections to the notion that superintelligent machines pose a serious risk. Instead, I was met with a rapidly formed consensus that many challenges lie ahead, and a shared desire to work together to develop a response.

 

Attendees of the January 2015 Puerto Rico conference included, among others, Stuart Russell (co-author of the leading textbook in AI), Thomas Dietterich (President of AAAI), Francesca Rossi (President of IJCAI), Bart Selman, Tom Mitchell, Murray Shanahan, Vernor Vinge, Elon Musk, and representatives from Google DeepMind, Vicarious, FHI, CSER, and MIRI.

This consensus resulted in a widely endorsed open letter, and an accompanying research priorities document that cites MIRI’s past work extensively. Impressed by the speed with which AI researchers were pivoting toward investigating the alignment problem, Elon Musk donated $10M to a grants program aimed at jump-starting this new paradigm in AI research.

Since then, the pace has been picking up. Nick Bostrom received $1.5M of the Elon Musk donation to start a new Strategic Research Center for Artificial Intelligence, which will focus on the geopolitical challenges posed by powerful AI. MIRI has received $300,000 in FLI grants directly to continue its technical and strategic research programs, and participated in a few other collaborative grants. The Cambridge Centre for the Study of Existential Risk has received a number of large grants that have allowed it to begin hiring. Stuart Russell and I recently visited Washington, D.C. to participate in a panel at a leading public policy think tank. We are currently in talks with the NSF about possibilities for extending their funding program to cover some of the concerns raised by the open letter.

The field of AI, too, is taking notice. AAAI, the leading scientific society in AI, hosted its first workshop on safety and ethics (I gave a presentation there), and two of the field’s major conferences, IJCAI and NIPS, will for the first time have sessions or workshops dedicated to the discussion of AI safety research.

Years down the line, I expect that some will look back on the Puerto Rico conference as the birthplace of the field of AI alignment. From the outside, 2015 will likely look like the year that AI researchers started seriously considering the massive hurdles that stand between us and the benefits that artificially intelligent systems could bring.

Our long-time backers, however, have seen the work that went into making these last few months possible. It’s thanks to your longstanding support that existential risk mitigation efforts have reached this tipping point. A sizable amount of our current momentum can plausibly be traced back, by one path or another, to exchanges at early summits or on blogs, and to a number of early research and outreach efforts. Thank you for beginning a conversation about these issues long before they began to filter into the mainstream, and thank you for helping us get to where we are now.

Progress at MIRI

Meanwhile at MIRI, the year has been a busy one.

In the wake of the Puerto Rico conference, we’ve been building relationships and continuing our conversations with many different industry groups, including DeepMind, Vicarious, and the newly formed GoodAI team. We’ve been thrilled to engage more with the academic community, via a number of collaborative papers that are in the works, two collaborative grants through the FLI grant program, and conversations with various academics about the content of our research program. During the last few weeks, Stuart Russell and Bart Selman have both come on as official MIRI research advisors.

We’ve also been hard at work on the research side. In March, we hired Patrick LaVictoire as a research fellow. We’ve attended a number of conferences, including AAAI’s safety and ethics workshop. We had a great time co-organizing a productive decision theory conference at Cambridge University, where I had the pleasure of introducing our unique take on decision theory (inspired by our need for runnable programs) to a number of academic decision theorists whom I respect and admire, and I’m happy to say that our ideas were very well received.

We’ve produced a number of new resources and results in recent months, including:

  • a series of overview papers describing our technical agenda, written in preparation for the Puerto Rico conference;
  • a number of tools that are useful for studying many of these open problems, available in our GitHub repository;
  • a theory of reflective oracle machines (in collaboration with Paul Christiano at U.C. Berkeley), which are a promising step towards both better models of logical uncertainty and better models of agents that reason about other agents as powerful as (or more powerful than) they are; and
  • a technique for implementing reflection in the HOL theorem prover (in collaboration with Ramana Kumar at Cambridge University), with accompanying code available online.

We have also launched the Intelligent Agent Foundations Forum to provide a location for publishing and discussing partial results with the broader community working on these problems.

That’s not all, though. We anticipated the momentum that the Puerto Rico conference would create, and we started gearing up for growth. We set up a series of six summer workshops to introduce interested researchers to open problems in AI alignment, and we worked with the Center for Applied Rationality to create a MIRI summer fellows program aimed at helping computer scientists and mathematicians effectively contribute to AI alignment research. We’re now one week into the summer fellows program, and we’ve run four of our six summer workshops.

Our goal with these projects is to ease our talent bottleneck and find more people who can do MIRI-style AI alignment research, and that effort has been paying off. Two new researchers have already signed on to start at MIRI in the late summer, and it is likely that we will get a few new hires out of the summer fellows program and the summer workshops as well.

Next steps

We now find ourselves in a wonderful position. The projects listed above have been a lot for a small research team of three, and there’s much more that we hope to take on as we grow the research team further. Where many other groups are just starting to think about how to approach the challenges of AI alignment, MIRI already has a host of ideas that we’re ready to execute on, as soon as we have the people and the funding.

The question now is: how quickly can we grow? We already have the funding to sign on an additional researcher (or possibly two) while retaining a twelve-month runway, and it looks like we could grow much faster than that given sufficient funding.

Tomorrow, we’re officially kicking off our summer fundraiser (though you’re welcome to give now at our Donation page). Upcoming posts will describe in more detail what we could do with more funding, but for now I wanted to make it clear why we’re so excited about the state of AI alignment research, and why we think this is a critical moment in the history of the field of AI.

Here’s hoping our next year is half as exciting as this last year was! Thank you again — and stay tuned for our announcement tomorrow.