MIRI’s Strategy for 2013

This post is not a detailed strategic plan. For now, I just want to provide an update on what MIRI is doing in 2013 and why.

Our mission remains the same. The creation of smarter-than-human intelligence will likely be the most significant event in human history, and MIRI exists to help ensure that this event has a positive impact.

Still, much has changed in the past year:

  • The short-term goals in our August 2011 strategic plan were largely accomplished.
  • We changed our name from “The Singularity Institute” to “The Machine Intelligence Research Institute” (MIRI).
  • We were once doing three things — research, rationality training, and the Singularity Summit. Now we’re doing one thing: research. Rationality training was spun out to a separate organization, CFAR, and the Summit was acquired by Singularity University. We still co-produce the Singularity Summit with Singularity University, but this requires limited effort on our part.
  • After dozens of hours of strategic planning in January–March 2013, and with input from 20+ external advisors, we’ve decided to (1) put less effort into public outreach and (2) shift our research priorities toward Friendly AI math research.

It’s this last pair of changes I’d like to explain in more detail below.

Less effort into public outreach

In the past, public outreach has been a major focus of MIRI’s efforts, in particular through the annual Singularity Summit. These efforts brought our mission to thousands of people and grew our networks substantially. But in 2013 we’ve decided to invest much less effort into public outreach, for two reasons:

  • It’s not clear that additional public outreach has high marginal value.
  • FHI at Oxford University has recently increased its public outreach efforts on the topic of human-friendly AI, and CSER at Cambridge University is beginning to do the same. Their outreach efforts benefit from elite university prestige that MIRI cannot match. (See, for example, the November 2012 media coverage of CSER.)

Three kinds of research

Historically, MIRI has produced three kinds of research.

Expository research. Some of our work consolidates and clarifies the strategic research previously available only in conversation with experts (e.g. at MIRI or FHI) or in written but disorganized form (e.g. in mailing list archives). Our expository publications make it easier for researchers around the world to understand the current state of knowledge and build on it, but the task of organizing and clearly explaining previous work often requires significant research effort in itself. Examples of this sort of work, from MIRI and from others, include Chalmers (2010), Muehlhauser & Helm (2013), Muehlhauser & Salamon (2013), Yampolskiy & Fox (2012), and (much of) Nick Bostrom’s forthcoming scholarly monograph on machine superintelligence.

Strategic research. Probabilistically nudging the future away from bad outcomes and toward good outcomes is a tricky business. Prediction is hard, and the causal structure of the world is complex. Nevertheless we agree with Oxford’s Future of Humanity Institute that careful research today can improve our chances of navigating the future successfully. (See Bostrom’s “Technological Revolutions: Ethics and Policy in the Dark.”) Therefore, some of MIRI’s research (often in collaboration with FHI) has focused on improving our understanding of how technologies will evolve and which interventions available today are most promising: Yudkowsky (2013); Shulman & Bostrom (2012); Armstrong & Sotala (2012); Kaas et al. (2010); Shulman (2010); Rayhawk et al. (2009).

Friendly AI research. One promising approach to mitigating AI risks (and all other catastrophic risks) is to build a stably self-improving AI with humane values — a “Friendly AI” or “FAI.”

There are two types of open problems in FAI theory: “philosophy problems” and “math problems.” FAI philosophy problems, such as the problem of extrapolating human values, are still so confusing that we don’t yet know how to state them crisply. In contrast, FAI math problems can already be stated crisply, e.g. the Löbian obstacle to self-modifying systems. Our hope is that, in time, all FAI philosophy problems will be clarified into math problems, as has happened to many philosophical questions before them: see Kolmogorov (1965) on complexity and simplicity, Solomonoff (1964a, 1964b) on induction, von Neumann & Morgenstern (1947) on rationality, Shannon (1948) on information, and Tennenholtz’s development of “program equilibrium” from Hofstadter’s “superrationality” (for an overview, see Wooldridge 2012).
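
To give the flavor of the Löbian obstacle (this is only a compressed sketch; Eliezer’s forthcoming writeup gives the full treatment): Löb’s theorem says that, for any sentence φ of a sufficiently strong theory T,

    if T ⊢ □φ → φ, then T ⊢ φ

where □ψ abbreviates “ψ is provable in T.” Now suppose a self-modifying agent reasons in T and wants to trust a successor that also reasons in T, i.e., it wants T ⊢ □φ → φ for arbitrary φ (“whatever my successor proves is true”). By Löb’s theorem, T would then prove every sentence φ outright, including φ = ⊥, making T inconsistent. So a consistent agent cannot fully trust a successor running its own proof system, and the naive approach to verified self-modification fails.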

Some examples of FAI philosophy research are Muehlhauser & Williamson (2012); Yudkowsky (2010); Yudkowsky (2004). Some examples of FAI math research are Christiano et al. (2013); LaVictoire et al. (2013); Dewey (2011, 2012); de Blanc (2011).

A shift to FAI math research

MIRI’s expository research has been highly useful — not because many people encounter our work in academic journals or books, but because thousands of people read these succinct explanations of our research mission upon encountering our website, and because we send these papers to dozens of personal contacts each month after they express an interest in our work.

But it’s not clear that additional expository work, of the kind we can easily purchase, is of high value after (1) the expository work MIRI and others have done so far, (2) Sotala & Yampolskiy’s forthcoming survey article on proposals for handling AI risk, and (3) Bostrom’s forthcoming book on machine superintelligence. Thus, we decided not to invest much in expository research in 2013.

What about strategic research? We believe additional strategic research has high value if it is of high quality. One of our staff researchers (Carl Shulman) will spend nearly all of 2013 on strategic research, and another staff researcher (Eliezer Yudkowsky) spent most of January–March on strategic research (specifically, intelligence explosion microeconomics). We are also pleased to see FHI’s strategic research on the subject, for example in Nick Bostrom’s forthcoming book.

Still, strategic research will consume a minority of our research budget in 2013 because:

  • Valuable strategic research on AI risk reduction is difficult to purchase. Very few people have the degree of domain knowledge and analytic ability to contribute. Moreover, it’s difficult for others to “catch up,” because most of the analysis that has been done hasn’t been written up clearly. (Bostrom’s book should help with that, though.)
  • MIRI has a comparative advantage in Friendly AI research. MIRI’s Eliezer Yudkowsky has done more than anyone to develop the technical side of Friendly AI theory, and MIRI now acts as a hub for Friendly AI research, for example by hosting workshops focused on FAI math research.
  • Math research can get academic “traction” more easily than strategic research can. Strategic research on AI risk often fails to get much academic traction because it tends to be interdisciplinary, necessarily speculative, and dependent on several assumptions that other researchers may not share (e.g. causal functionalism, AI timelines agnosticism, value fragility, or the orthogonality thesis). In contrast, any mathematician with the right background knowledge can grok a crisply stated math problem very quickly, and they may be interested in working on it whether or not they share MIRI’s view of its social importance. Within hours of our posting a draft of a recent math result to our blog, Fields Medalist Timothy Gowers had seen the draft and commented on it, along with several other professional mathematicians.

Finally: why did we choose to prioritize FAI math research over FAI philosophy research for 2013? Our reasons are similar to our reasons for focusing on FAI math research over strategic research: (1) valuable FAI philosophy research is difficult to purchase, and (2) math research can get academic traction more easily than philosophy research can.

Math research activities in 2013

Which specific actions will we take in 2013 to produce FAI math research?

  • We will host several math research workshops. The first MIRI research workshop (4 participants), in November 2012, was surprisingly productive. It led to a new probabilistic logic that serves as a “loophole” in Tarski’s undefinability theorem (1936), and also to an early-form probabilistic set theory; a rough statement of the logic result appears after this list. Additional workshops this year will probably have 4–8 participants, will cost only ~$5,000 each, and will probably produce further FAI math research progress while also allowing MIRI to test many hypotheses about how to efficiently produce such progress.
  • Eliezer will describe several open math problems in Friendly AI theory. Eliezer is currently drafting an explanation of the Löbian obstacle to self-modifying systems, and may write explanations of some other problems as well, so that mathematicians can see which open math problems in FAI theory are available to work on.
  • We will host several visiting fellows. A visiting fellowship at MIRI is often the best way to get “up to speed” on MIRI’s mathematical research agenda, and mathematically inclined researchers are encouraged to apply.
  • We may hire new mathematical researchers, but we might not. We are somewhat funding-limited when it comes to hiring new researchers. More to the point, we think Lean Nonprofit principles are important. That is, we think it’s important to rapidly and cheaply test hypotheses about how to produce FAI math research efficiently, and running small research workshops with a variety of structures and a variety of researchers is better for that than hiring is. We are more likely to hire new researchers after we have more evidence about how best to efficiently produce FAI math research.
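
To give the flavor of the probabilistic logic result mentioned above (an informal paraphrase; see Christiano et al. (2013) for the precise statement): Tarski’s theorem rules out a truth predicate satisfying True(⌜φ⌝) ↔ φ for every sentence φ, but it does not rule out a probabilistic analogue. Roughly, one can construct a coherent probability assignment P over the sentences of a language containing a symbol for P itself, satisfying the reflection schema

    a < P(φ) < b  ⟹  P(⌜a < P(⌜φ⌝) < b⌝) = 1,   for all sentences φ and all rationals a < b.

Such a system can know its own probabilities to within any desired precision, and the liar-style diagonal sentence no longer yields a contradiction: it merely forces a fixed point at an interval boundary, which the open intervals in the schema deliberately avoid.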

How you can help

If you know any smart, productive mathematicians with the right kind of background to contribute to our work, please encourage them to contact us (malo@intelligence.org) about our research workshops, visiting fellowships, and research positions.

You can also support us financially or as a volunteer.