2013 in Review: Strategic and Expository Research

MIRI Strategy

This is the 3rd part of my personal and qualitative self-review of MIRI in 2013, in which I begin to review MIRI’s 2013 research activities. By “research activities” I mean to include outreach efforts primarily aimed at researchers, and also three types of research performed by MIRI: strategic research, expository research, and Friendly AI research.

I’ll review MIRI’s strategic1 and expository research in this post; my review of MIRI’s 2013 Friendly AI research will appear in a future post. For the rest of this post, I usually won’t try to distinguish which writings are “expository” vs. “strategic” research, since most of them are partly both.

Strategic and expository research in 2013

  1. In 2013, our public-facing strategic and expository research consisted of 4 papers published directly by MIRI, 4 journal-targeted papers, 4 chapters in a peer-reviewed book, 9 in-depth analysis blog posts, 14 short analysis blog posts, and 16 interviews with domain experts.
  2. I think these efforts largely accomplished the goals at which they were aimed, but in 2013 we learned a great deal about how to accomplish those goals more efficiently in the future. In particular…
  3. Expert interviews seem to be the most efficient way to accomplish some of those goals.
  4. Rather than conducting large strategic research projects ourselves, we should focus on writing up what is already known (“expository research”) and on describing open research questions so that others can examine them.

What we did in 2013 and why

Below I list the writings that constitute MIRI’s public-facing2 strategic and expository research in 2013.

MIRI staff members have varying opinions about the value and purpose of strategic and expository research. Speaking for myself, I supported or conducted the above research activities in order to:9

  1. Test our assumptions and try to understand the views of people who (might) disagree with us. Examples: “How effectively can we plan for future decades?”, “How well will policy-makers handle AGI?”, and the Greg Morrisett interview.
  2. Learn new things that can inform strategic action concerning existential risk and Friendly AI. Examples: “Algorithmic Progress in Six Domains,” the Hadi Esmaeilzadeh interview, and the Josef Urban interview.
  3. Make it easier for other researchers to contribute, by performing small bits of initial work on questions of strategic significance, or by explaining how an open question in superintelligence strategy could be studied in more depth. Examples: “Intelligence Explosion Microeconomics,” “Algorithmic Progress in Six Domains,” and “How effectively can we plan for future decades?”
  4. Build relationships with researchers who might one day contribute to strategic, expository, or Friendly AI research. Examples: many of the interviews.
  5. Explain small “pieces of the puzzle” that contribute to MIRI-typical views about existential risk and Friendly AI. Examples: “When Will AI Be Created?”, “Mathematical Proofs Improve But Don’t Guarantee…,” and the Nick Beckstead interview.

How well did these efforts achieve their goals?

We have not yet implemented quantitative methods for measuring how well our strategic and expository research efforts are meeting the goals at which they are aimed.10 For now, I can only share my subjective, qualitative impressions, based on my own reasoning and on a few conversations with people who follow our research closely, after showing them a near-complete draft of the previous section.

Re: goal (1). It’s difficult to locate cheap, strong tests of our assumptions. So, the research we conducted toward this goal in 2013 either weakly confirmed some of our assumptions (e.g. see the Greg Morrisett interview11) or could make only small steps toward providing good tests of our assumptions (e.g. see “How effectively can we plan for future decades?” and “How well will policy-makers handle AGI?”).

Re: goal (2). Similarly, it’s difficult to locate inexpensive evidence that robustly pins down the value of an important strategic variable (e.g. AI timelines, AI takeoff speed, or the strength of the “convergent instrumental values” attractor in mind design space). Hence, research aimed at learning new things typically only provides small updates (for us, anyway), e.g. about the prospects for Moore’s law (the Hadi Esmaeilzadeh interview) and about the current state of automated mathematical reasoning (the Josef Urban interview).

My own reaction to the difficulty of obtaining additional high-likelihood-ratio evidence about long-term AI futures goes something like this:

Well, the good news is that humanity seems to have seized most of the low-hanging fruit about future machine superintelligence, which wasn’t the case 15 years ago. The bad news is that the low-hanging fruit alone doesn’t make it clear how we go about winning. But since the stakes are really high, we just have to accept that long-term forecasting is hard, and then try harder. We need to get more researchers involved so more research can be produced, and we must be prepared to accept that it might take 10 PhD theses’ worth of work before we get a 2:1 Bayesian update about a strategically relevant variable. Also, it’s probably good to “marinate” one’s brain in relevant fields even if one isn’t sure which specific updates one will be able to make as a result, because absorbing facts from those fields will likely improve one’s general intuitions about them and about adjacent fields.12
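
To make the “2:1 Bayesian update” concrete, here is a minimal worked example in odds form; the 1:3 prior odds below are my own illustrative numbers, not drawn from any MIRI analysis. Suppose H is some strategic claim and E is a new piece of evidence that favors H with likelihood ratio 2:1:

\[
\frac{P(H \mid E)}{P(\neg H \mid E)}
= \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
= \frac{1}{3} \cdot \frac{2}{1}
= \frac{2}{3}
\]

That is, even evidence favoring H by a factor of two only moves its probability from 0.25 to 0.40 under this prior, which is part of why accumulating strategically decisive evidence can require so much research effort.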

Re: goal (3). I don’t have a good sense of how useful MIRI’s 2013 strategic and expository research has been for other researchers, but such effects typically require several years to materialize.13 I’m optimistic about this work enabling further research by others simply because that’s how things typically work in other fields of research, and I don’t see much reason to think that superintelligence strategy will be any different.

Re: goal (4). Yes, many of the interviews built new relationships with helpful domain experts.

Re: goal (5). Again, I don’t have good measures of the effects here, but I do receive frequent comments from community members that “such-and-such post was really clarifying.” Some of the analyses are also linked regularly by other groups. For example, both GiveWell and 80,000 Hours have linked to our model combination post when explaining their own research strategies.


Looking ahead to 2014

As discussed above and in my operations review, we still need to find better ways to measure the impact of our research. A plausible first attempt at measurement would be to survey a subset of the people we hope to influence in various ways, and ask how our research has affected them.

Even before we can learn from improved impact measurement, however, I think I can say a few things about what I’ve learned about doing strategic and expository research, and what we plan to do differently in 2014.

First, interviews with domain experts are a highly efficient way to achieve some of the goals I have for expository and strategic research. Each interview required only a few hours of staff time, whereas a typical “short” analysis post cost between 5 and 25 person-hours, and a typical “in-depth” analysis post cost between 10 and 60 person-hours.

In 2013 we published 16 domain expert interviews between July 1st and December 30th, an average of roughly 2.7 interviews per month. In 2014 I intend to publish 4 or more interviews per month on average.

Second, expository research tends to be more valuable per unit effort than new strategic research. MIRI (in conjunction with our collaborators at FHI) has an uncommonly large backlog of strategic research that has been “completed” but not explained clearly anywhere. Obviously, it takes less effort to explain already-completed strategic research than it takes to conduct original strategic research and then also explain it.

Third, we can prioritize expository (and sometimes strategic) research projects by dialoguing with intelligent critics who are representative of populations we want to influence (e.g. AI researchers, mega-philanthropists) and then preparing the writings most relevant to their concerns. We can then dialogue with them again after they’ve read the new exposition, to see whether their original objection remains (and if so, why), and if not, which other objections remain. This can in turn inform our prioritization of future writings, and also potentially reveal flaws in our models.

Fourth, graduate students want to know which research projects they could do that would help clarify superintelligence strategy. Unfortunately, experienced professors are not yet knocking down our door to ask us which papers they could research and write to clarify superintelligence strategy, but many graduate students are. Also, I’ve had a few conversations with graduate student advisors who said they have to put lots of time into helping their students find good projects, and that it would be helpful if somebody else prepared research project proposals suitable for their students and their department.

Furthermore, there is some historical precedent for this strategy working, even within the young, narrow domain of superintelligence strategy. The clearest example is that of Nick Beckstead, who wrote a useful philosophy dissertation on the importance of shaping the far future, in part due to conversations with FHI. João Lourenço is currently writing a philosophy dissertation about the prospects for moral enhancement, in part due to conversations with FHI and MIRI. Jeremy Miller is in the early planning stages of a thesis project about universal measures of intelligence, in part due to conversations with MIRI. I think there are other examples, but I haven’t been able to confirm them yet.

So, in 2014 we plan to publish short descriptions of research projects which could inform superintelligence strategy. This will be much easier to do once Nick Bostrom’s Superintelligence book is published, so we’ll probably wait until that happens this summer.

Fifth, Nick Bostrom’s forthcoming scholarly monograph on machine superintelligence provides a unique opportunity to engage more researchers in superintelligence strategy. As such, some of our “outreach to potential strategic researchers” work in 2014 will consist in helping to promote Bostrom’s book. We also plan to release a reading guide for the book, to increase the frequency with which people finish, and benefit from, the book.


  1. Note that what I call “MIRI’s strategic research” or “superintelligence strategy research” is a superintelligence-focused subset of what GiveWell would call “strategic cause selection research” and what CEA would call “cause prioritization research.” 
  2. As usual, we also did significant strategic research in 2013 that is not public-facing (at least not yet), for example 100+ hours of feedback on various drafts of Nick Bostrom’s forthcoming book Superintelligence: Paths, Dangers, Strategies, 15+ hours of feedback on early drafts of Robin Hanson’s forthcoming book about whole brain emulation, and much work on forthcoming MIRI publications. 
  3. Yudkowsky labeled this as “open problem in Friendly AI #1”, but I categorize it as strategic research rather than Friendly AI research. 
  4. At the time of publication, Joshua Fox was a MIRI research associate. 
  5. “Why We Need Friendly AI” was published in an early 2014 issue of the journal Think, but it was released online in 2013. 
  6. The “Friendly Artificial Intelligence” chapter is merely an abridged version of Yudkowsky’s earlier “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” 
  7. These chapters were written during 2011 and 2012, but not published in the book until 2013. 
  8. There were also two very short interviews with Eliezer Yudkowsky: “Yudkowsky on Logical Uncertainty” and “Yudkowsky on ‘What can we do now?’” 
  9. I have an additional goal for some of our outreach and research activities, which is to address difficult problems in epistemology, because they are more relevant to MIRI’s research than to (e.g.) business or the practice of “normal science” (in the Kuhnian sense). “Pascal’s Muggle” is one example. Also, some of our expository and strategic research doubles as general outreach, e.g. the popular interview with Scott Aaronson. 
  10. Well, we can share some basic web traffic data. According to Google Analytics, the pages (of 2013’s strategic or expository research) with the most “unique pageviews” since they were created are: “When will AI be created?” (~15.5k), the Scott Aaronson interview (~13.5k), the Hadi Esmaeilzadeh interview (~13.5k), “The Robots, AI, and Unemployment Anti-FAQ” (~12k), “What is intelligence?” (~5k), “Pascal’s Muggle” (~5k), “A brief history of ethically concerned scientists” (~4.5k), “Intelligence explosion microeconomics” (~3.5k), and “From philosophy to math to engineering” (~3.5k). Naturally, this list is biased in favor of articles published earlier. Also, Google Analytics doesn’t track PDF downloads, so we don’t have numbers for those. 
  11. E.g. see his statements “Yes, I completely agree with [the ‘Mathematical Proofs Improve…‘ post]” and “I think re-architecting and re-coding things will almost always lead to a win in terms of security, when compared to bolt-on approaches.” 
  12. This last bit is part of my motivation for listening to so many nonfiction audiobooks since September 2013. 
  13. “Intelligence Explosion Microeconomics” enabled “Algorithmic Progress in Six Domains,” but it was still the case that MIRI had to commission the latter.