2013 in Review: Strategic and Expository Research
This is the 3rd part of my personal and qualitative self-review of MIRI in 2013, in which I begin to review MIRI’s 2013 research activities. By “research activities” I mean to include outreach efforts primarily aimed at researchers, and also three types of research performed by MIRI:
- Expository research aims to consolidate and clarify already-completed strategic research or Friendly AI research that hasn’t yet been explained with sufficient clarity or succinctness, e.g. “Intelligence Explosion: Evidence and Import” and “Robust Cooperation: A Case Study in Friendly AI Research.” (I consider this a form of “research” because it often requires significant research work to explain ideas clearly, cite relevant sources, etc.)
- Strategic research aims to clarify how the future is likely to unfold, and what we can do now to nudge the future toward good outcomes. It involves more novel thought and modeling than expository research, though the distinction is fuzzy. See e.g. “Intelligence Explosion Microeconomics” and “How We’re Predicting AI — or Failing to.”1
- Friendly AI research aims to solve the technical sub-problems that seem most relevant to the challenge of designing a stably self-improving artificial intelligence with humane values. This often involves sharpening philosophical problems into math problems, and then developing the math problems into engineering problems. See e.g. “Tiling Agents for Self-Modifying AI” and “Robust Cooperation in the Prisoner’s Dilemma.”
I’ll review MIRI’s strategic and expository research in this post; my review of MIRI’s 2013 Friendly AI research will appear in a future post. For the rest of this post, I usually won’t try to distinguish which writings are “expository” vs. “strategic” research, since most of them are partly of both kinds.
- Note that what I call “MIRI’s strategic research” or “superintelligence strategy research” is a superintelligence-focused subset of what GiveWell would call “strategic cause selection research” and what CEA would call “cause prioritization research.” ↩