A new MIRI FAQ, and other announcements

News

MIRI is at Effective Altruism Global! A number of the talks can be watched online at the EA Global Livestream.

We have a new MIRI Frequently Asked Questions page, which we’ll be expanding as we continue getting new questions over the next four weeks. Questions covered so far include “Why is safety important for smarter-than-human AI?” and “Do researchers think AI is imminent?”

We’ve also been updating other pages on our website. About MIRI now functions as a short introduction to our mission, and Get Involved has a new consolidated application form for people who want to collaborate with us on our research program.

Finally, an announcement: just two weeks into our six-week fundraiser, we have hit our first major fundraising target! We extend our thanks to the donors who got us here so quickly. Thanks to you, we now have the funds to expand our core research team to 6–8 people for the coming year.

New donations we receive at https://intelligence.org/donate will now go toward our second target: “Accelerated Growth.” If we hit this second target ($500k total), we will be able to expand to a ten-person core team and take on a number of important new projects. For more details on our plans if we hit our first two fundraiser targets, see Growing MIRI.

MIRI’s Approach

Analysis

MIRI’s mission is “to ensure that the creation of smarter-than-human artificial intelligence has a positive impact.” How can we ensure any such thing? It’s a daunting task, especially given that we don’t have any smarter-than-human machines to work with at the moment. In the previous post I discussed four background claims that motivate our mission; in this post I will describe our approach to addressing the challenge.

This challenge is sizeable, and we can only tackle a portion of the problem. For this reason, we specialize. Our two biggest specializing assumptions are as follows:

We focus on scenarios where smarter-than-human machine intelligence is first created in de novo software systems (as opposed to, say, brain emulations).

This is in part because it seems difficult to get all the way to brain emulation before someone reverse-engineers the algorithms used by the brain and uses them in a software system, and in part because we expect that any highly reliable AI system will need to have at least some components built from the ground up for safety and transparency. Nevertheless, it is quite plausible that early superintelligent systems will not be human-designed software, and I strongly endorse research programs that focus on reducing risks along the other pathways.

We specialize almost entirely in technical research.

We select our researchers for their proficiency in mathematics and computer science, rather than forecasting expertise or political acumen. I stress that this is only one part of the puzzle: figuring out how to build the right system is useless if the right system does not in fact get built, and ensuring AI has a positive impact is not simply a technical problem. It is also a global coordination problem, in the face of short-term incentives to cut corners. Addressing these non-technical challenges is an important task that we do not focus on.

In short, MIRI does technical research to ensure that de novo AI software systems will have a positive impact. We do not further discriminate between different types of AI software systems, nor do we make strong claims about exactly how quickly we expect AI systems to attain superintelligence. Rather, our current approach is to select open problems using the following question:

What would we still be unable to solve, even if the challenge were far simpler?

For example, we might study AI alignment problems that we could not solve even if we had lots of computing power and very simple goals.

We then filter on problems that are (1) tractable, in the sense that we can do productive mathematical research on them today; (2) uncrowded, in the sense that the problems are not likely to be addressed during normal capabilities research; and (3) critical, in the sense that they could not be safely delegated to a machine unless we had first solved them ourselves. (Since the goal is to design intelligent machines, there are many technical problems that we can expect to eventually delegate to those machines. But it is difficult to trust an unreliable reasoner with the task of designing reliable reasoning!)
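
To make the selection procedure concrete, here is a toy sketch in Python. It is purely illustrative rather than MIRI's actual process: the problem names and the yes/no judgments below are hypothetical, and the real filtering is a matter of research judgment rather than boolean flags.

```python
# Toy illustration only (not MIRI's actual process): the three filters
# described above, applied to a hypothetical list of candidate problems.
# The problem names and the True/False judgments are made up.

candidates = [
    {"name": "logical uncertainty",     "tractable": True,  "uncrowded": True,  "critical": True},
    {"name": "faster game-tree search", "tractable": True,  "uncrowded": False, "critical": False},
    {"name": "a problem nobody can formalize yet", "tractable": False, "uncrowded": True, "critical": True},
]

def passes_filters(problem):
    """Keep a problem only if it is tractable, uncrowded, and critical."""
    return problem["tractable"] and problem["uncrowded"] and problem["critical"]

selected = [p["name"] for p in candidates if passes_filters(p)]
print(selected)  # -> ['logical uncertainty']
```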

These three filters are usually uncontroversial. The controversial claim here is that the above question — “what would we be unable to solve, even if the challenge were simpler?” — is a generator of open technical problems for which solutions will help us design safer and more reliable AI software in the future, regardless of that software’s architecture. The rest of this post is dedicated to justifying this claim and describing the reasoning behind it.

Read more »

Four Background Claims

Analysis

MIRI’s mission is to ensure that the creation of smarter-than-human artificial intelligence has a positive impact. Why is this mission important, and why do we think that there’s work we can do today to help ensure any such thing?

In this post and my next one, I’ll try to answer those questions. This post will lay out what I see as the four most important premises underlying our mission. Related posts include Eliezer Yudkowsky’s “Five Theses” and Luke Muehlhauser’s “Why MIRI?”; this is my attempt to make explicit the claims that are in the background whenever I assert that our mission is of critical importance.

 

Claim #1: Humans have a very general ability to solve problems and achieve goals across diverse domains.

We call this ability “intelligence,” or “general intelligence.” This isn’t a formal definition — if we knew exactly what general intelligence was, we’d be better able to program it into a computer — but we do think that there’s a real phenomenon of general intelligence that we cannot yet replicate in code.

Alternative view: There is no such thing as general intelligence. Instead, humans have a collection of disparate special-purpose modules. Computers will keep getting better at narrowly defined tasks such as chess or driving, but at no point will they acquire “generality” and become significantly more useful, because there is no generality to acquire. (Robin Hanson has argued for versions of this position.)

Short response: I find the “disparate modules” hypothesis implausible in light of how readily humans can gain mastery in domains that are utterly foreign to our ancestors. That’s not to say that general intelligence is some irreducible occult property; it presumably comprises a number of different cognitive faculties and the interactions between them. The whole, however, has the effect of making humans much more cognitively versatile and adaptable than (say) chimpanzees.

Why this claim matters: Humans have achieved a dominant position over other species not by being stronger or more agile, but by being more intelligent. If some key part of this general intelligence was able to evolve in the few million years since our common ancestor with chimpanzees lived, this suggests there may exist a relatively short list of key insights that would allow human engineers to build powerful generally intelligent AI systems.

Further reading: Salamon et al., “How Intelligible is Intelligence?”
 
Read more »

Why Now Matters

MIRI Strategy

I’m often asked whether donations now are more important than donations later. Allow me to deliver an emphatic yes: I currently expect that donations to MIRI today are worth much more than donations to MIRI in five years. As things stand, I would very likely take $10M today over $20M in five years.

That’s a bold statement, and there are a few different reasons for this. First and foremost, there is a decent chance that some very big funders will start entering the AI alignment field over the course of the next five years. It looks like the NSF may start to fund AI safety research, and Stuart Russell has already received some money from DARPA to work on value alignment. It’s quite possible that in a few years’ time significant public funding will be flowing into this field.

(It’s also quite possible that it won’t, or that the funding will go to all the wrong places, as was the case with funding for nanotechnology. But if I had to bet, I would bet that it’s going to be much easier to find funding for AI alignment research in five years’ time.)

In other words, the funding bottleneck is loosening — but it isn’t loose yet.

We don’t presently have the funding to grow as fast as we could over the coming months, or to run all the important research programs we have planned. At our current funding level, the research team can grow at a steady pace — but we could get much more done over the course of the next few years if we had the money to grow as fast as is healthy.

Which brings me to the second reason funding now is probably much more important than funding later: growth now is much more valuable than growth later.

There’s an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build beneficial intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community’s response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field’s future direction.

People at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are less well-understood.

It’s likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.

The alignment research program within AI is just now getting started in earnest, and it may even be funding-saturated in a few years’ time. But it’s nowhere near funding-saturated today, and waiting five or ten years to begin seriously ramping up our growth would likely give us far fewer opportunities to shape the methodology and research agenda within this new AI paradigm. The projects MIRI takes on today can make a big difference years down the line, and supporting us today will drastically affect how much we can do quickly. Now matters.

 

Targets 1 and 2: Growing MIRI

MIRI Strategy

Momentum is picking up in the domain of AI safety engineering. MIRI needs to grow fast if it’s going to remain at the forefront of this new paradigm in AI research. To that end, we’re kicking off our 2015 Summer Fundraiser!

Rather than naming a single funding target, we’ve decided to lay out the activities we could pursue at different funding levels and let you, our donors, decide how quickly we can grow. In this post, I’ll describe what happens if we hit our first two fundraising targets: $250,000 (“continued growth”) and $500,000 (“accelerated growth”).

Read more »

MIRI’s 2015 Summer Fundraiser!

MIRI Strategy, News

This last year has been pretty astounding. Since its release twelve months ago, Nick Bostrom’s book Superintelligence has raised awareness about the challenge that MIRI exists to address: long-term risks posed by smarter-than-human artificially intelligent systems. Academic and industry leaders echoed these concerns in an open letter advocating “research aimed at ensuring that increasingly capable AI systems are robust and beneficial.” To jump-start this new safety-focused paradigm in AI, the Future of Life Institute has begun distributing $10M as grants to dozens of research groups, Bostrom and MIRI among them.

MIRI comes to this budding conversation with a host of relevant open problems already in hand. Indeed, a significant portion of the research priorities document accompanying the open letter is drawn from our work on this topic. Having already investigated these issues at some length, MIRI is well-positioned to shape this field as it enters a new phase in its development.

This is a big opportunity. MIRI is already growing and scaling its research activities, but how quickly we scale in the coming months and years depends on how much funding we receive. For that reason, MIRI is starting a six-week fundraiser aimed at increasing our rate of growth.

And here it is!

[Fundraiser progress bar]

 

Rather than running a matching fundraiser with a single fixed donation target, we’ll be letting you help choose MIRI’s course, based on the details of our funding situation and how we would make use of marginal dollars. In particular, we’ll be blogging over the coming weeks about how our plans would scale up at different funding levels:


Target 1 — $250k: Continued growth. At this level, we would have enough funds to maintain a twelve-month runway while continuing all current operations, including running workshops, writing papers, and attending conferences. We would also be able to expand the research team by one to three additional researchers, on top of our three current researchers and the two new researchers who are starting this summer. This would ensure that we have the funding to hire the most promising researchers who come out of the MIRI Summer Fellows Program and our summer workshop series.


Target 2 — $500k: Accelerated growth. At this funding level, we could grow our team more aggressively, while maintaining a twelve-month runway. We would have the funds to expand the research team to about ten core researchers, while also taking on a number of exciting side-projects, such as hiring one or two type theorists. Recruiting specialists in type theory, a field at the intersection of computer science and mathematics, would enable us to develop tools and code that we think are important for studying verification and reflection in artificial reasoners.


Target 3 — $1.5M: Taking MIRI to the next level. At this funding level, we would start reaching beyond the small but dedicated community of mathematicians and computer scientists who are already interested in MIRI’s work. We’d hire a research steward to spend significant time recruiting top mathematicians from around the world, we’d make our job offerings more competitive, and we’d focus on hiring highly qualified specialists in relevant areas of mathematics. This would allow us to grow the research team as fast as is sustainable, while maintaining a twelve-month runway.


Target 4 — $3M: Bolstering our fundamentals. At this level of funding, we’d start shoring up our basic operations. We’d invest resources in experimenting to figure out how to build the most effective research team we can. We’d branch out into additional high-value projects outside the scope of our core research program, such as hosting specialized conferences and retreats, upgrading our equipment and online resources, and running programming tournaments to spread interest in certain open problems. At this level of funding we’d also start extending our runway and prepare for sustained aggressive growth over the coming years.


Target 5 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would start forking the research team into multiple groups attacking the AI alignment problem from very different angles. Our current technical agenda is not the only way to approach the challenges that lie ahead — indeed, there are a number of research teams that we would be thrilled to start up inside MIRI given the opportunity.


We also have plans that extend beyond the $6M level: for more information, shoot me an email at contact@intelligence.org. I also invite you to email me with general questions or to set up a time to chat.

If you intend to make use of corporate matching (check here to see whether your employer will match your donation), email malo@intelligence.org and we’ll include the matching contributions in the fundraiser total.

Some of these targets are quite ambitious, and I’m excited to see what happens when we lay out the available possibilities and let our donors collectively decide how quickly we develop as an organization.

We’ll be using this fundraiser as an opportunity to explain our research and our plans for the future. If you have any questions about what MIRI does and why, email them to rob@intelligence.org. Answers will be posted to this blog every Monday and Friday.

Below is a list of explanatory posts written for this fundraiser, which we’ll be updating regularly:


July 1 — Grants and Fundraisers. Why we’ve decided to experiment with a multi-target fundraiser.
July 16 — An Astounding Year. Recent successes for MIRI, and for the larger field of AI safety.
July 18 — Targets 1 and 2: Growing MIRI. MIRI’s plans if we hit the $250k or $500k funding target.
July 20 — Why Now Matters. Two reasons to give now, rather than wait to give later.
July 24 — Four Background Claims. Basic assumptions behind MIRI’s focus on smarter-than-human AI.
July 27 — MIRI’s Approach. How we identify technical problems to work on.
July 31 — MIRI FAQ. Summarizing common sources of misunderstanding.
August 3 — When AI Accelerates AI. Some reasons to get started on safety work early.
August 7 — Target 3: Taking It To The Next Level. Our plans if we hit the $1.5M funding target.
August 10 — Assessing Our Past And Potential Impact. Why expect MIRI in particular to make a difference?
August 14 — What Sets MIRI Apart? Distinguishing MIRI from groups in academia and industry.
August 18 — Powerful Planners, Not Sentient Software. Why advanced AI isn’t “evil robots.”
August 28 — AI and Effective Altruism. On MIRI’s role in the EA community.


Our hope is that these new resources will help you, our donors, make more informed decisions during our fundraiser, and also that our fundraiser will serve as an opportunity for people to learn a lot more about our activities and strategic outlook.

As scientists, engineers, and policymakers begin to take notice of the AI alignment problem, MIRI is in a unique position to direct this energy and attention in a useful direction. Donating today will help us rise to this challenge and secure a place at the forefront of this critical field.

An Astounding Year

News

It’s safe to say that this past year exceeded a lot of people’s expectations.

Twelve months ago, Nick Bostrom’s Superintelligence had just been published. Long-term questions about smarter-than-human AI systems were simply not a part of mainstream discussions about the social impact of AI, and fewer than five people were working on the AI alignment challenge full-time.

Twelve months later, we live in a world where Elon Musk, Bill Gates, and Sam Altman readily cite Superintelligence as a guide to the questions we should be asking about AI’s future as a field. For Gates, the researchers who aren’t concerned about advanced AI systems are the ones who now need to explain their views:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

As far as I can tell, the turning point occurred in January 2015, when Max Tegmark and the newly-formed Future of Life Institute organized a “Future of AI” conference in San Juan, Puerto Rico to bring together top AI academics, top research groups from industry, and representatives of the organizations studying long-term AI risk.

The atmosphere at the Puerto Rico conference was electric. I stepped off the plane expecting to field objections to the notion that superintelligent machines pose a serious risk. Instead, I was met with a rapidly-formed consensus that many challenges lie ahead, and a shared desire to work together to develop a response.

 

Attendees of the Puerto Rico conference included, among others, Stuart Russell (co-author of the leading textbook in AI), Thomas Dietterich (President of AAAI), Francesca Rossi (President of IJCAI), Bart Selman, Tom Mitchell, Murray Shanahan, Vernor Vinge, Elon Musk, and representatives from Google DeepMind, Vicarious, FHI, CSER, and MIRI.

This consensus resulted in a widely endorsed open letter, and an accompanying research priorities document that cites MIRI’s past work extensively. Impressed by the speed with which AI researchers were pivoting toward investigating the alignment problem, Elon Musk donated $10M to a grants program aimed at jump-starting this new paradigm in AI research.

Since then, the pace has been picking up. Nick Bostrom received $1.5M of the Elon Musk donation to start a new Strategic Research Center for Artificial Intelligence, which will focus on the geopolitical challenges posed by powerful AI. MIRI has received $300,000 in FLI grants directly to continue its technical and strategic research programs, and participated in a few other collaborative grants. The Cambridge Centre for the Study of Existential Risk has received a number of large grants that have allowed it to begin hiring. Stuart Russell and I recently visited Washington, D.C. to participate in a panel at a leading public policy think tank. We are currently in talks with the NSF about possibilities for extending their funding program to cover some of the concerns raised by the open letter.

The field of AI, too, is taking notice. AAAI, the leading scientific society in AI, hosted its first workshop on safety and ethics (I gave a presentation there), and two of the field’s major conferences — IJCAI and NIPS — will, for the first time, have sessions or workshops dedicated to the discussion of AI safety research.

Years down the line, I expect that some will look back on the Puerto Rico conference as the birthplace of the field of AI alignment. From the outside, 2015 will likely look like the year that AI researchers started seriously considering the massive hurdles that stand between us and the benefits that artificially intelligent systems could bring.

Our long-time backers, however, have seen the work that went into making these last few months possible. It’s thanks to your longstanding support that existential risk mitigation efforts have reached this tipping point. A sizable amount of our current momentum can plausibly be traced back, by one path or another, to exchanges at early summits or on blogs, and to a number of early research and outreach efforts. Thank you for beginning a conversation about these issues long before they began to filter into the mainstream, and thank you for helping us get to where we are now.

Progress at MIRI

Meanwhile at MIRI, the year has been a busy one.

In the wake of the Puerto Rico conference, we’ve been building relationships and continuing our conversations with many different industry groups, including DeepMind, Vicarious, and the newly formed Good AI team. We’ve been thrilled to engage more with the academic community, via a number of collaborative papers that are in the works, two collaborative grants through the FLI grant program, and conversations with various academics about the content of our research program. During the last few weeks, Stuart Russell and Bart Selman have both come on as official MIRI research advisors.

We’ve also been hard at work on the research side. In March, we hired Patrick LaVictoire as a research fellow. We’ve attended a number of conferences, including AAAI’s safety and ethics workshop. We had a great time co-organizing a productive decision theory conference at Cambridge University, where I had the pleasure of introducing our unique take on decision theory (inspired by our need for runnable programs) to a number of academic decision theorists whom I both respect and admire — and I’m happy to say that our ideas were very well received.

We’ve produced a number of new resources and results in recent months, including:

  • a series of overview papers describing our technical agenda, written in preparation for the Puerto Rico conference;
  • a number of tools that are useful for studying many of these open problems, available in our GitHub repository;
  • a theory of reflective oracle machines (in collaboration with Paul Christiano at U.C. Berkeley), which are a promising step towards both better models of logical uncertainty and better models of agents that reason about other agents that are at least as powerful as they are (see the sketch after this list); and
  • a technique for implementing reflection in the HOL theorem-prover (in collaboration with Ramana Kumar at Cambridge University): code here.
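
For readers who want a slightly more formal picture of the reflective oracle result mentioned above, here is a rough sketch of the defining property. This is a paraphrase rather than the paper’s exact statement, so treat the details as an approximation:

```latex
% Rough sketch (a paraphrase, not the paper's exact formalism):
% an oracle O is reflective if, for every probabilistic oracle machine M
% and every rational p in [0, 1],
\[
  \Pr\bigl[M^{O}() = 1\bigr] > p \;\Rightarrow\; O(M, p) = 1,
  \qquad
  \Pr\bigl[M^{O}() = 1\bigr] < p \;\Rightarrow\; O(M, p) = 0,
\]
% with O unconstrained (it may randomize) when the probability is exactly p.
% The nontrivial claim is that such an O exists even though M may itself
% query O about other machines that query O, which is what makes it a
% model of agents reasoning about agents at least as powerful as themselves.
```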

We have also launched the Intelligent Agent Foundations Forum to provide a location for publishing and discussing partial results with the broader community working on these problems.

That’s not all, though. After the Puerto Rico conference, we anticipated the momentum that it would create, and we started gearing up for growth. We set up a series of six summer workshops to introduce interested researchers to open problems in AI alignment, and we worked with the Center for Applied Rationality to create a MIRI summer fellows program aimed at helping computer scientists and mathematicians effectively contribute to AI alignment research. We’re now one week into the summer fellows program, and we’ve run four of our six summer workshops.

Our goal with these projects is to loosen our talent bottleneck and find more people who can do MIRI-style AI alignment research, and that has been paying off. Two new researchers have already signed on to start at MIRI in the late summer, and it is likely that we will get a few new hires out of the summer fellows program and the summer workshops as well.

Next steps

We now find ourselves in a wonderful position. The projects listed above have been a lot for a small research team of three, and there’s much more that we hope to take on as we grow the research team further. Where many other groups are just starting to think about how to approach the challenges of AI alignment, MIRI already has a host of ideas that we’re ready to execute on, as soon as we get the personpower and the funding.

The question now is: how quickly can we grow? We already have the funding to sign on an additional researcher (or possibly two) while retaining a twelve-month runway, and it looks like we could grow much faster than that given sufficient support.

Tomorrow, we’re officially kicking off our summer fundraiser (though you’re welcome to give now at our Donation page). Upcoming posts will describe in more detail what we could do with more funding, but for now I wanted to make it clear why we’re so excited about the state of AI alignment research, and why we think this is a critical moment in the history of the field of AI.

Here’s hoping our next year is half as exciting as this last year was! Thank you again — and stay tuned for our announcement tomorrow.

July 2015 Newsletter

Newsletters

Hello, all! I’m Rob Bensinger, MIRI’s Outreach Coordinator. I’ll be keeping you updated on MIRI’s activities and on relevant news items. If you have feedback or questions, you can get in touch with me by email.

Research updates

 
General updates

  • Our team is growing! If you are interested in joining us, click through to see our Office Manager job posting.
  • This was Nate Soares’ first month as our Executive Director. We have big plans in store for the next two months, which Nate has begun laying out here: fundraising thoughts.
  • MIRI has been awarded a three-year, $250,000 grant from the Future of Life Institute to make headway on our research agenda. This will fund three workshops and several researcher-years of work on a number of open technical problems. MIRI has also been awarded a $49,310 FLI grant to fund strategy research at AI Impacts.
  • Owain Evans of the Future of Humanity Institute, in collaboration with new MIRI hire Jessica Taylor, has been awarded a $227,212 FLI grant to develop algorithms that learn human preferences from behavioral data in the presence of irrational and otherwise suboptimal behavior.
  • Cambridge computational logician Ramana Kumar and MIRI research fellow Benja Fallenstein have been awarded a $36,750 FLI grant to study self-referential reasoning in HOL (higher-order logic) proof assistants.
  • Stuart Russell, co-author of the standard textbook on artificial intelligence, has become a MIRI research advisor.
  • Nate and Stuart Russell participated in a panel discussion about AI risk at the Information Technology and Innovation Foundation, one of the world's leading public policy think tanks: video.
  • On the Effective Altruism Forum, Nate answered a large number of questions about MIRI's strategy and priorities. Excerpts here.

 
News and links