An Astounding Year


It’s safe to say that this past year exceeded a lot of people’s expectations.

Twelve months ago, Nick Bostrom’s Superintelligence had just been published. Long-term questions about smarter-than-human AI systems were simply not a part of mainstream discussions about the social impact of AI, and fewer than five people were working on the AI alignment challenge full-time.

Twelve months later, we live in a world where Elon Musk, Bill Gates, and Sam Altman readily cite Superintelligence as a guide to the questions we, as a field, should be asking about AI’s future. For Gates, the researchers who aren’t concerned about advanced AI systems are the ones who now need to explain their views:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

As far as I can tell, the turning point occurred in January 2015, when Max Tegmark and the newly formed Future of Life Institute organized a “Future of AI” conference in San Juan, Puerto Rico, to bring together top AI academics, top research groups from industry, and representatives of the organizations studying long-term AI risk.

The atmosphere at the Puerto Rico conference was electric. I stepped off the plane expecting to field objections to the notion that superintelligent machines pose a serious risk. Instead, I was met with a rapidly formed consensus that many challenges lie ahead, and a shared desire to work together to develop a response.

 


Attendees of the Puerto Rico conference included, among others, Stuart Russell (co-author of the leading textbook in AI), Thomas Dietterich (President of AAAI), Francesca Rossi (President of IJCAI), Bart Selman, Tom Mitchell, Murray Shanahan, Vernor Vinge, Elon Musk, and representatives from Google DeepMind, Vicarious, FHI, CSER, and MIRI.

This consensus resulted in a widely endorsed open letter, and an accompanying research priorities document that cites MIRI’s past work extensively. Impressed by the speed with which AI researchers were pivoting toward investigating the alignment problem, Elon Musk donated $10M to a grants program aimed at jump-starting this new paradigm in AI research.

Since then, the pace has been picking up. Nick Bostrom received $1.5M of the Elon Musk donation to start a new Strategic Research Center for Artificial Intelligence, which will focus on the geopolitical challenges posed by powerful AI. MIRI has received $299,310 in FLI grants directly to continue its technical and strategic research programs, and has participated in a few other collaborative grants. The Cambridge Centre for the Study of Existential Risk has received a number of large grants that have allowed it to begin hiring. Stuart Russell and I recently visited Washington, D.C. to participate in a panel at a leading public policy think tank. We are currently in talks with the NSF about possibilities for extending their funding program to cover some of the concerns raised by the open letter.

The field of AI, too, is taking notice. AAAI, the leading scientific society in AI, hosted its first workshop on safety and ethics (I gave a presentation there), and two other major conferences, IJCAI and NIPS, will for the first time have sessions or workshops dedicated to the discussion of AI safety research.

Years down the line, I expect that some will look back on the Puerto Rico conference as the birthplace of the field of AI alignment. From the outside, 2015 will likely look like the year that AI researchers started seriously considering the massive hurdles that stand between us and the benefits that artificially intelligent systems could bring.

Our long-time backers, however, have seen the work that went into making these last few months possible. It’s thanks to your longstanding support that existential risk mitigation efforts have reached this tipping point. A sizable amount of our current momentum can plausibly be traced back, by one path or another, to exchanges at early summits or on blogs, and to a number of early research and outreach efforts. Thank you for beginning a conversation about these issues long before they began to filter into the mainstream, and thank you for helping us get to where we are now.

Progress at MIRI

Meanwhile at MIRI, the year has been a busy one.

In the wake of the Puerto Rico conference, we’ve been building relationships and continuing our conversations with many different industry groups, including DeepMind, Vicarious, and the newly formed Good AI team. We’ve been thrilled to engage more with the academic community, via a number of collaborative papers that are in the works, two collaborative grants through the FLI grant program, and conversations with various academics about the content of our research program. During the last few weeks, Stuart Russell and Bart Selman have both come on as official MIRI research advisors.

We’ve also been hard at work on the research side. In March, we hired Patrick LaVictoire as a research fellow. We’ve attended a number of conferences, including AAAI’s safety and ethics workshop. We also had a great time co-organizing a productive decision theory conference at Cambridge University, where I had the pleasure of introducing our unique take on decision theory (inspired by our need for runnable programs) to a number of academic decision theorists whom I both respect and admire. I’m happy to say that our ideas were very well received.

We’ve produced a number of new resources and results in recent months, including:

  • a series of overview papers describing our technical agenda, written in preparation for the Puerto Rico conference;
  • a number of tools that are useful for studying many of these open problems, available at our GitHub repository;
  • a theory of reflective oracle machines (in collaboration with Paul Christiano at U.C. Berkeley), which are a promising step towards both better models of logical uncertainty and better models of agents that reason about other agents that are as powerful as (or more powerful than) they are (see the sketch after this list); and
  • a technique for implementing reflection in the HOL theorem-prover (in collaboration with Ramana Kumar at Cambridge University): code here.
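
For readers curious about what a reflective oracle actually answers, here is a minimal Python sketch of the query contract only, under a deliberately simplifying assumption: each “machine” is summarized by a fixed probability of outputting 1 and makes no oracle calls of its own. The substance of the theory, consistently answering queries about machines that themselves consult the oracle (which requires a fixed-point construction), is not reproduced here, and the function name and setup are illustrative rather than taken from the paper.

```python
# Toy sketch of the reflective-oracle query contract (illustrative only).
# A query is a pair (M, p): "does machine M output 1 with probability
# greater than p?" Here M is summarized by a fixed number prob_output_1,
# so M never queries the oracle itself; handling self-referential queries
# is the actual content of the theory and is not shown.

import random

def oracle_query(prob_output_1: float, p: float) -> int:
    """Return 1 if Pr[M outputs 1] > p, and 0 if it is < p.

    When the two are exactly equal, either answer is permitted, so we flip
    a coin. This freedom at the boundary is part of what makes a consistent
    oracle possible once machines may query the oracle about themselves.
    """
    if prob_output_1 > p:
        return 1
    if prob_output_1 < p:
        return 0
    return random.randint(0, 1)  # boundary case: the contract allows either answer

# Example: a machine that outputs 1 with probability 0.7.
print(oracle_query(0.7, 0.5))  # prints 1, since 0.7 > 0.5
print(oracle_query(0.7, 0.9))  # prints 0, since 0.7 < 0.9
```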

We have also launched the Intelligent Agent Foundations Forum to provide a location for publishing and discussing partial results with the broader community working on these problems.

That’s not all, though. After the Puerto Rico conference, we anticipated the momentum that it would create, and we started gearing up for growth. We set up a series of six summer workshops to introduce interested researchers to open problems in AI alignment, and we worked with the Center for Applied Rationality to create a MIRI summer fellows program aimed at helping computer scientists and mathematicians effectively contribute to AI alignment research. We’re now one week into the summer fellows program, and we’ve run four of our six summer workshops.

Our goal with these projects is to loosen our talent bottleneck and find more people who can do MIRI-style AI alignment research, and that has been paying off. Two new researchers have already signed on to start at MIRI in the late summer, and it is likely that we will get a few new hires out of the summer fellows program and the summer workshops as well.

Next steps

We now find ourselves in a wonderful position. The projects listed above have been a lot for a small research team of three, and there’s much more that we hope to take on as we grow the research team further. Where many other groups are just starting to think about how to approach the challenges of AI alignment, MIRI already has a host of ideas that we’re ready to execute on, as soon as we get the personpower and the funding.

The question now is: how quickly can we grow? We already have the funding to sign on an additional researcher (or possibly two) while retaining a twelve-month runway, and it looks like we could grow much faster than that given sufficient funding.

Tomorrow, we’re officially kicking off our summer fundraiser (though you’re welcome to give now at our Donation page). Upcoming posts will describe in more detail what we could do with more funding, but for now I wanted to make it clear why we’re so excited about the state of AI alignment research, and why we think this is a critical moment in the history of the field of AI.

Here’s hoping our next year is half as exciting as this last year was! Thank you again — and stay tuned for our announcement tomorrow.

July 2015 Newsletter


Hello, all! I’m Rob Bensinger, MIRI’s Outreach Coordinator. I’ll be keeping you updated on MIRI’s activities and on relevant news items. If you have feedback or questions, you can get in touch with me by email.

Research updates

 
General updates

  • Our team is growing! If you are interested in joining us, click through to see our Office Manager job posting.
  • This was Nate Soares’ first month as our Executive Director. We have big plans in store for the next two months, which Nate has begun laying out here: fundraising thoughts.
  • MIRI has been awarded a $250,000 grant from the Future of Life Institute spanning three years to make headway on our research agenda. This will fund three workshops and several researcher-years of work on a number of open technical problems. MIRI has also been awarded a $49,310 FLI grant to fund strategy research at AI Impacts.
  • Owain Evans of the Future of Humanity Institute, in collaboration with new MIRI hire Jessica Taylor, has been awarded a $227,212 FLI grant to develop algorithms that learn human preferences from behavioral data in the presence of irrational and otherwise suboptimal behavior.
  • Cambridge computational logician Ramana Kumar and MIRI research fellow Benja Fallenstein have been awarded a $36,750 FLI grant to study self-referential reasoning in HOL (higher-order logic) proof assistants.
  • Stuart Russell, co-author of the standard textbook on artificial intelligence, has become a MIRI research advisor.
  • Nate and Stuart Russell participated in a panel discussion about AI risk at the Information Technology and Innovation Foundation, one of the world's leading public policy think tanks: video.
  • On the Effective Altruism Forum, Nate answered a large number of questions about MIRI's strategy and priorities. Excerpts here.

 
News and links

Grants and fundraisers


Two big announcements today:

1. MIRI has won $299,310 from the Future of Life Institute’s grant program to jumpstart the field of long-term AI safety research.

  • $250,000 will go to our research program over the course of three years, funding workshops and a few person-years of research on the open problems discussed in our technical agenda.
  • $49,310 will go towards AI Impacts, a project which aims to shed light on the implications of advanced artificial intelligence using empirical data and rigorous analysis.

MIRI will also collaborate with the principal investigators on two other large FLI grants:

  • $227,212 has been awarded to Owain Evans at the Future of Humanity Institute to develop algorithms that learn human preferences from data despite human irrationalities. This will be carried out in collaboration with Jessica Taylor, who will become a MIRI research fellow at the end of this summer.
  • $36,750 has been awarded to Ramana Kumar at Cambridge University to study self-reference in the HOL theorem prover. This will be done in collaboration with MIRI research fellow Benja Fallenstein.

The money comes from Elon Musk’s extraordinary donation of $10M to fund FLI’s first-of-its-kind grant competition for research aimed at keeping AI technologies beneficial as capabilities improve.

This funding, coming on the heels of the payments from our sale of the Singularity Summit (which recently concluded) and an extremely generous surprise donation from Jed McCaleb at the end of 2013, means we can continue to ramp up our research efforts. That doesn’t mean our job is done, of course. In January, shortly after the FLI conference, we came to the conclusion that the funding situation for our field was set to improve, and decided to start gearing up for growth. That prediction has turned out to be correct, which puts us in an excellent position.

We’re now, indeed, set to grow—the only question is, “How quickly?” Which brings me to announcement number two.

2. Our summer fundraiser is starting in mid-July, and we’re going to try something new.

Every summer for the past few years, MIRI has run a matching fundraiser, where we get some of our biggest donors to pledge their donations conditional upon your support. Conventional wisdom states that matching fundraisers make it easier to raise funds, and MIRI has had a lot of success with them in the past. They seem to be an excellent way to get donors excited, and the deadline helps create a sense of urgency.

However, a few different people, including the folks over at GiveWell and effective altruism writer Ben Kuhn, have voiced skepticism about the effectiveness of matching fundraisers. Most of our large donors are happy to donate regardless of whether we raise matching funds, and matching fundraisers tend to put the focus on interactions between small and large donors, rather than on the exciting projects that we could be running with sufficient funding.

Our experience with our donors has been that they are exceptionally thoughtful, and that they have themselves thought about how (and how quickly) they want MIRI to grow. So for this fundraiser, we’d like to give you more resources for making an informed decision about where to send your money, including a clearer picture of how different levels of funding would affect our operations.

Details are forthcoming mid-July, along with a whole lot more information about what we’ve been up to and what we have planned.

As always, thanks for everything: it’s exciting to receive one of the very first grants in this burgeoning field, and we haven’t forgotten that it’s only thanks to your support that the field has made it this far in the first place.

Wanted: Office Manager (aka Force Multiplier)


We’re looking for a full-time office manager to support our growing team. It’s a big job that requires organization, initiative, technical chops, and superlative communication skills. You’ll develop, improve, and manage the processes and systems that make us a super-effective organization. You’ll obsess over our processes (faster! easier!) and our systems (simplify! simplify!). Essentially, it’s your job to ensure that everyone at MIRI, including you, is able to focus on their work and Get Sh*t Done.

That’s a super-brief intro to what you’ll be working on. But first, you need to know if you’ll even like working here.


New report: “The Asilomar Conference: A Case Study in Risk Mitigation”


Today we release a new report by Katja Grace, “The Asilomar Conference: A Case Study in Risk Mitigation” (PDF, 67pp).

The 1975 Asilomar Conference on Recombinant DNA is sometimes cited as an example of successful action by scientists who preemptively identified an emerging technology’s potential dangers and intervened to mitigate the risk. We conducted this investigation to check whether that basic story is true, and what lessons those events might carry for AI and other unprecedented technological risks.

To prepare this report, Grace consulted several primary and secondary sources, and also conducted four interviews that are cited in the report; the interviews have been published separately.

The basic conclusions of this report, which have not been separately vetted, are:

  1. The specific dangers that motivated the Asilomar conference were relatively immediate, rather than long-term. These dangers turned out to be effectively nonexistent. Experts disagree as to whether scientists should have known better with the information they had at the time.
  2. The conference appears to have caused improvements in general lab safety practices.
  3. The conference plausibly averted regulation and helped scientists to be on better terms with the public. Whether these effects are positive for society depends on (e.g.) whether it is better for this category of scientific activities to go unregulated, a question not addressed by this report.

June 2015 Newsletter




Dear friends of MIRI,

As we announced on May 6th, I've decided to take a research position at GiveWell. With unanimous support from the Board, MIRI research fellow Nate Soares will be taking my place as Executive Director starting June 1st. Nate has introduced himself here.

I’m proud of what the MIRI team has accomplished during my tenure as Executive Director, and I'm excited to watch Nate take MIRI to the next level. My enthusiasm for MIRI’s work remains as strong as ever, and I look forward to supporting MIRI going forward, both financially and as a close advisor. (See here for further details on my transition to GiveWell.)

Thank you all for your support!

– Luke Muehlhauser

Research updates

News updates

Other updates

  • Nick Bostrom's TED talk on machine superintelligence.
  • Effective Altruism Global is this August, in the San Francisco Bay Area (USA), Oxford (UK), and Melbourne (Australia). Keynote speaker is Elon Musk. Apply by June 10th!

Introductions



Hello, I’m Nate Soares, and I’m pleased to be taking the reins at MIRI on Monday morning.

For those who don’t know me, I’ve been a research fellow at MIRI for a little over a year now. I attended my first MIRI workshop in December of 2013 while I was still working at Google, and was offered a job soon after. Over the last year, I wrote a dozen papers, half as primary author. Six of those papers were written for the MIRI technical agenda, which we compiled in preparation for the Puerto Rico conference put on by FLI in January 2015. Our technical agenda is cited extensively in the research priorities document referenced by the open letter that came out of that conference. In addition to the Puerto Rico conference, I attended five other conferences over the course of the year, and gave a talk at three of them. I also put together the MIRI research guide (a resource for students interested in getting involved with AI alignment research), and of course I spent a fair bit of time doing the actual research at workshops, at researcher retreats, and on my own. It’s been a jam-packed year, and it’s been loads of fun.

I’ve always had a natural inclination towards leadership: in the past, I’ve led a F.I.R.S.T. Robotics team, managed two volunteer theaters, served as president of an Entrepreneur’s Club, and co-founded a startup or two. However, this is the first time I’ve taken a professional leadership role, and I’m grateful that I’ll be able to call upon the experience and expertise of the board, of our advisors, and of outgoing executive director Luke Muehlhauser.

MIRI has improved greatly under Luke’s guidance these last few years, and I’m honored to have the opportunity to continue that trend. I’ve spent a lot of time in conversation with Luke over the past few weeks, and he’ll remain a close advisor going forward. He and the management team have spent the last year or so really tightening up the day-to-day operations at MIRI, and I’m excited about all the opportunities we have open to us now.

The last year has been pretty incredible. Discussion of long-term AI risks and benefits has finally hit the mainstream, thanks to the success of Bostrom’s Superintelligence and FLI’s Puerto Rico conference, and due in no small part to years of movement-building and effort made possible by MIRI’s supporters. Over the last year, I’ve forged close connections with our friends at the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk, as well as with a number of industry teams and academic groups who are focused on long-term AI research. I’m looking forward to our continued participation in the global conversation about the future of AI. These are exciting times in our field, and MIRI is well-poised to grow and expand. Indeed, one of my top priorities as executive director is to grow the research team.

That project is already well under way. I’m pleased to announce that Jessica Taylor has accepted a full-time position as a MIRI researcher starting in August 2015. We are also hosting a series of summer workshops focused on various technical AI alignment problems, the second of which is just now concluding. Additionally, we are working with the Center for Applied Rationality to put on a summer fellows program designed for people interested in gaining the skills needed for research in the field of AI alignment.

I want to take a moment to extend my heartfelt thanks to all those supporters of MIRI who have brought us to where we are today: We have a slew of opportunities before us, and it’s all thanks to your effort and support these past years. MIRI couldn’t have made it as far as it has without you. Exciting times are ahead, and your continued support will allow us to grow quickly and pursue all the opportunities that the last year opened up.

Finally, in case you want to get to know me a little better, I’ll be answering questions on the Effective Altruism Forum at 3PM Pacific time on Thursday, June 11th.

Onwards,

Nate

Two papers accepted to AGI-15


MIRI has two papers forthcoming in the conference proceedings of AGI-15. The first paper, previously released as a MIRI technical report, is “Reflective variants of Solomonoff induction and AIXI,” by Benja Fallenstein, Nate Soares, and Jessica Taylor.

The second paper, “Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings,” by Nate Soares and Benja Fallenstein, is a compressed version of some material from an earlier technical report. This new paper’s abstract is:

This paper motivates the study of counterpossibles (logically impossible counterfactuals) as necessary for developing a decision theory suitable for generally intelligent agents embedded within their environments. We discuss two attempts to formalize a decision theory using counterpossibles, one based on graphical models and another based on proof search.

Fallenstein will be attending AGI-15.