Reducing Long-Term Catastrophic Risks from Artificial Intelligence



In 1965, the eminent statistician I. J. Good proposed that artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements: AIs would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind.[3] Good called this process an “intelligence explosion,” while later authors have used the terms “technological singularity” or simply “the Singularity.”[10][21]

The Machine Intelligence Research Institute aims to reduce the risk of a catastrophe, should such an event eventually occur. Our activities include research, education, and conferences. In this document, we provide a whirlwind introduction to the case for taking AI risks seriously, and suggest some strategies to reduce those risks.

 

What We’re (Not) About

The Machine Intelligence Research Institute is interested in the advent of smart, cross-domain, human-equivalent or smarter, self-improving Artificial Intelligence. We do not forecast any particular time when such AI will be developed. We are interested in analyzing points of leverage for increasing the probability that the advent of AI turns out positive. We do not see ourselves as having the job of foretelling whether it will go well or poorly – if the outcome were predetermined, there would be no point in trying to intervene. We suspect that AI is primarily a software problem which will require new insight, not a hardware problem which will fall to Moore’s Law. We are interested in rational analyses which try to support each point of claimed detail, as opposed to storytelling in which many interesting details are invented but not separately supported.

 

Indifference, Not Malice

Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.[13][14] Superintelligent AIs with real-world traction, such as access to pervasive data networks and autonomous robotics, could radically alter their environment, e.g., by harnessing all available solar, chemical, and nuclear energy. If such AIs found uses for free energy that better furthered their goals than supporting human life, human survival would become unlikely.

Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal.[1][13] For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.[1] Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals.

 

An Intelligence Explosion May Be Sudden

The pace of an intelligence explosion depends on two conflicting pressures: each improvement in AI technology increases the ability of AIs to research more improvements, while the depletion of low-hanging fruit makes each subsequent improvement more difficult. The rate of improvement is hard to estimate, but several factors suggest it would be high. The predominant view in the AI field is that the bottleneck for powerful AI is software rather than hardware, and continued rapid hardware progress is expected in coming decades.[4] If and when the software is developed, there may thus be a glut of hardware available to run many copies of AIs, and to run them at high speeds, amplifying the effects of AI improvements.[8] Since we have little reason to expect that human minds are ideally optimized for intelligence, as opposed to merely being the first intelligences sophisticated enough to produce technological civilization, there is likely to be further low-hanging fruit to pluck (after all, the first AI with humanlike research abilities will itself have been created by a comparatively slow and small human research community). Given strong enough feedback, or sufficiently abundant hardware, the first AI with humanlike AI research abilities might reach superintelligence rapidly; in particular, more rapidly than researchers and policy-makers can develop adequate safety measures.
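
To make the interaction of these two pressures concrete, here is a deliberately crude toy model, not a forecast: capability feeds back into the research that produces more capability, while an exponent below one stands in for the depletion of low-hanging fruit. Every parameter value and name below is an arbitrary illustrative assumption of ours, not drawn from any published model.

    # Toy model of the two opposing pressures in an intelligence explosion.
    # Capability feeds back into further research; the `returns` exponent
    # models diminishing (<1) or accelerating (>1) returns on improvement.
    # All numbers are arbitrary illustrative assumptions, not estimates.

    def simulate(feedback=1.0, returns=1.0, hardware=1.0, steps=30):
        """Iterate c <- c + hardware * feedback * c**returns and return the trajectory."""
        c = 1.0  # capability in arbitrary units (1.0 ~ the first humanlike AI researcher)
        trajectory = [c]
        for _ in range(steps):
            c = c + hardware * feedback * c ** returns
            trajectory.append(c)
        return trajectory

    # Compare how strongly the outcome depends on the balance of the two pressures.
    for label, r in [("low-hanging fruit depleted (returns=0.5)", 0.5),
                     ("steady feedback (returns=1.0)", 1.0),
                     ("accelerating returns (returns=1.1)", 1.1)]:
        print(f"{label}: capability after 30 steps = {simulate(returns=r)[-1]:.3g}")

Even in this toy setting, small changes in the assumed returns to improvement move the thirty-step outcome by many orders of magnitude, which is one way to see why the speed of an intelligence explosion is hard to bound in advance.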

 

Is Concern Premature?

The absence of a clear picture of how to build AI means that we cannot assign high confidence to the development of AI in the next several decades. It also makes it difficult to rule out unforeseen advances. Past underestimates of the AI challenge (perhaps most infamously, those made for the 1956 Dartmouth Conference)[12] do not guarantee that AI will never succeed, and we need to take into account both repeated discoveries that the problem is more difficult than expected, and incremental progress in the field. Advances in AI and machine learning algorithms,[17] increasing R&D expenditures by the technology industry, hardware advances that make computation-hungry algorithms feasible,[4] enormous datasets,[5] and insights from neuroscience give advantages that past researchers lacked. Given the size of the stakes and the uncertainty about AI timelines, it seems best to allow for the possibility of medium-term AI development in our safety strategies.

 

Friendly AI

Concern about the risks of future AI technology has led some commentators, such as Sun co-founder Bill Joy, to suggest the global regulation and restriction of such technologies.[9] However, appropriately designed AI could offer similarly enormous benefits. More specifically, human ingenuity is currently a bottleneck in making progress on many key challenges affecting our collective welfare: eradicating diseases, averting long-term nuclear risks, and living richer, more meaningful lives. Safe AI could help enormously in meeting each of these challenges. Further, the prospect of those benefits along with the competitive advantages from AI would make a restrictive global treaty very difficult to enforce.

Our primary approach to reducing AI risks has thus been to promote the development of AI with benevolent motivations that are reliably stable under self-improvement, what we call “Friendly AI.”[22]

To very quickly summarize some of the key ideas in Friendly AI:

  1. We can’t make guarantees about the final outcome of an agent’s interaction with the environment, but we may be able to make guarantees about what the agent is trying to do, given its knowledge — we can’t determine that Deep Blue will win against Kasparov just by inspecting Deep Blue, but an inspection might reveal that Deep Blue searches the game tree for winning positions rather than losing ones.
  2. Since code executes on the almost perfectly deterministic environment of a computer chip, we may be able to make very strong guarantees about an agent’s motivations (including how that agent rewrites itself), even though we can’t logically prove the outcomes of environmental strategies. This is important, because if the agent fails on an environmental strategy, it can update its model of the world and try again; but during self-modification, the AI may need to implement a million code changes, one after the other, without any of them being catastrophic.
  3. If Gandhi doesn’t want to kill people, and someone offers Gandhi a pill that will alter his brain to make him want to kill people, and Gandhi knows this is what the pill does, then Gandhi will very likely refuse to take the pill. Most utility functions should be trivially stable under reflection — provided that the AI can correctly project the result of its own self-modifications. Thus, the problem of Friendly AI is not creating an extra conscience module that constrains the AI despite its preferences, but reaching into the enormous design space of possible minds and selecting an AI that prefers to be Friendly. (A toy sketch of this reflective-stability argument appears after this list.)
  4. Human terminal values are extremely complicated, although this complexity is not introspectively visible at a glance, for much the same reason that major progress in computer vision was once thought to be a summer’s work. Since we have no introspective access to the details of human values, the solution to this problem probably involves designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.
  5. The explicit moral values of human civilization have changed over time; we regard this change as progress, and we extrapolate that such progress may continue. An AI programmed with the explicit values of 1800 might now be fighting to reestablish slavery. Static moral values are clearly undesirable, but most random changes to values will be even less desirable — every improvement is a change, but not every change is an improvement. Possible bootstrapping algorithms include “do what we would have told you to do if we knew everything you knew,” “do what we would have told you to do if we thought as fast as you did and could consider many more possible lines of moral argument,” and “do what we would tell you to do if we had your ability to reflect on and modify ourselves.” In moral philosophy, this notion of moral progress is known as reflective equilibrium.[15]
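
To make point 3 concrete, here is a minimal sketch, with invented utility functions and a deliberately crude outcome model, of an agent that evaluates a proposed self-modification using its current preferences. Nothing here resembles a real AI design; it only illustrates why a goal-directed agent tends to reject rewrites of its own goals.

    # Sketch of the "Gandhi and the pill" argument: an agent that scores
    # candidate self-modifications with its *current* utility function will
    # decline a modification that replaces that utility function, because the
    # successor's predicted behavior ranks poorly under the goals it holds now.
    # The utility functions and outcome model are invented for illustration.

    def current_utility(outcome):
        """The agent's present goals: it values lives saved."""
        return outcome["lives_saved"]

    def pill_utility(outcome):
        """The goal system the 'pill' would install: it values something else entirely."""
        return outcome["victims"]

    def predicted_outcome(goal_fn):
        """Crude world model: what a successor agent guided by goal_fn would bring about."""
        if goal_fn is pill_utility:
            return {"lives_saved": 0, "victims": 100}
        return {"lives_saved": 100, "victims": 0}

    def accept_modification(current_fn, proposed_fn):
        """Adopt a rewrite only if the *current* goals prefer the successor's behavior."""
        value_if_unchanged = current_fn(predicted_outcome(current_fn))
        value_if_modified = current_fn(predicted_outcome(proposed_fn))  # judged by current goals
        return value_if_modified > value_if_unchanged

    print("Take the pill?", accept_modification(current_utility, pill_utility))  # -> False

On this picture, Friendliness correctly specified at the start is something an AI would tend to preserve through its own self-modifications, rather than a constraint imposed on it from outside.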

 

Seeding Research Programs

As we get closer to advanced AI, it will be easier to learn how to reduce risks effectively. The interventions to focus on today are those whose benefits will compound over time: lines of research that can guide other choices or that entail much incremental work. Some possibilities include:

Friendly AI: Theoretical computer scientists can investigate AI architectures that self-modify while retaining stable goals. Theoretical toy systems exist now: Gödel machines make provably optimal self-improvements given certain assumptions.[19] Decision theories are being proposed that aim to be stable under self-modification.[2] These models can be extended incrementally into less idealized contexts.

Stable brain emulations: One conjectured route to safe AI starts with human brain emulation. Neuroscientists can investigate the possibility of emulating the brains of individual humans with known motivations, while evolutionary theorists can investigate methods to prevent dangerous evolutionary dynamics and social scientists can investigate social or legal frameworks to channel the impact of emulations in positive directions.[18]

Models of AI risks: Researchers can build models of AI risks and of AI growth trajectories, using tools from game theory, evolutionary analysis, computer security, or economics.[1][6][8][14][22] If such analysis is done rigorously it can help to channel the efforts of scientists, graduate students, and funding agencies to the areas with the greatest potential benefits.

Institutional improvements: Major technological risks are ultimately navigated by society as a whole: success requires that society understand and respond to scientific evidence. Knowledge of the biases that distort human thinking about catastrophic risks,[23] improved methods for probabilistic forecasting[16] and risk analysis,[11] and methods for identifying and aggregating expert opinions[7] can all improve our collective odds. So can methods for international cooperation around AI development, and for avoiding an “AI arms race” that might be won by the competitor most willing to trade off safety measures for speed.[20]
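
As one small illustration of the “aggregating expert opinions” item, the snippet below uses hypothetical numbers and generic textbook pooling methods (not the specific methods of the cited work) to compare two standard ways of combining expert probability estimates for a risk question.

    # Two common ways to combine expert probability estimates: a simple average
    # (linear opinion pool) and a geometric mean of odds (logarithmic opinion
    # pool). The expert estimates below are made-up placeholders.
    import math

    def linear_pool(probs):
        """Arithmetic mean of the experts' probabilities."""
        return sum(probs) / len(probs)

    def log_odds_pool(probs):
        """Average the experts' log-odds, then convert back to a probability."""
        mean_log_odds = sum(math.log(p / (1.0 - p)) for p in probs) / len(probs)
        return 1.0 / (1.0 + math.exp(-mean_log_odds))

    estimates = [0.01, 0.05, 0.20]  # hypothetical expert estimates for some risk
    print(f"linear pool:   {linear_pool(estimates):.3f}")   # ~0.087
    print(f"log-odds pool: {log_odds_pool(estimates):.3f}")  # ~0.049

The two pools can disagree noticeably on the same inputs, which is part of why the choice of aggregation method is itself a research question.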

 

Our Aims

We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling.

We have already laid some groundwork: (a) seed research on catastrophic AI risks and AI safety technologies; (b) human capital; and (c) programs that engage outside research talent, including our annual Singularity Summits and our Visiting Fellows program.

Going forward, we plan to continue our recent growth by scaling up our Visiting Fellows program, extending the Singularity Summits and similar academic networking, and writing further papers to seed the above research programs, in-house or with the best outside talent we can find. We welcome potential co-authors, Visiting Fellows, and other collaborators, as well as any suggestions or cost-benefit analyses on how to reduce catastrophic AI risk.

 

The Upside and Downside of Artificial Intelligence

Human intelligence is the most powerful known biological technology; its impact on the planet has been discontinuous with that of past organisms. But our place in history probably rests not on our being the smartest intelligences that could exist, but rather on our being the first intelligences that did exist. We are probably to intelligence what the first replicator was to biology. The first single-stranded RNA capable of copying itself was nowhere near being an ultra-sophisticated replicator — but it still holds an important place in history, because it came first.

The future of intelligence is — hopefully — very much greater than its past. The origin and shape of human intelligence may end up playing a critical role in the origin and shape of future civilizations on a scale far larger than one planet. The origin and shape of the first self-improving Artificial Intelligences humanity builds may have a similarly strong impact, for similar reasons. It is the values of future intelligences that will shape future civilization; what stands to be won or lost, then, is the value of future civilization itself.

 

Recommended Reading

This has been a very quick introduction. For more information, please contact anna@intelligence.org, or see:

  • For a general overview of AI catastrophic risks: Yudkowsky, Eliezer (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Bostrom, Nick and Cirkovic, Milan M. (eds.), Global Catastrophic Risks, pp. 308-345 (Oxford: Oxford University Press).
  • For discussion of self-modifying systems’ tendency to approximate optimizers and fully exploit scarce resources: Omohundro, Stephen M. (2008). “The Basic AI Drives.” In Pei Wang et al. (eds.), Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pp. 483-492 (Amsterdam: IOS Press).
  • For discussion of evolutionary pressures toward software minds aimed solely at reproduction: Bostrom, Nick. “The Future of Human Evolution” (2004). In Tandy, Charles (ed.), Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, pp. 339-371 (Palo Alto, CA: Ria University Press).
  • For tools for doing cost-benefit analysis on human extinction risks, and a discussion of gaps in the current literature: Matheny, Jason G., “Reducing the Risk of Human Extinction”, Risk Analysis, Volume 27 Issue 5, pp. 1335-1344, 2007.
  • For an overview of potential causes of human extinction, including AI: Bostrom, Nick. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards” (2002). Journal of Evolution and Technology, Vol. 9.
  • For an overview of the ethical problems and implications involved in creating a superintelligent AI: Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence”, Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17.

 

References

  1. Bostrom, Nick, “The Future of Human Evolution”, Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, pp. 339-371, 2004, Ria University Press.
  2. Drescher, Gary, Good and Real: Demystifying Paradoxes from Physics to Ethics, p. 188, The MIT Press, 2006.
  3. Good, I. J., “Speculations Concerning the First Ultraintelligent Machine”, Franz L. Alt and Morris Rubinoff, ed., Advances in Computers (Academic Press) 6: 31-88, 1965.
  4. International Technology Roadmap for Semiconductors, “International Technology Roadmap for Semiconductors, 2007 Edition” 2007. Web. 07 Jan. 2010.
  5. Halevy, Alon, Peter Norvig, and Fernando Pereira, “The Unreasonable Effectiveness of Data”, IEEE Intelligent Systems, March/April 2009, pp. 8-12.
  6. Hall, J. Storrs, Beyond AI: creating the conscience of the machine. Amherst, N.Y: Prometheus, 2007. Print.
  7. Hanson, Robin, “Idea Futures” George Mason University, 12 June 1996. Web. 08 Jan. 2010.
  8. Hanson, Robin, “Economic Growth Given Machine Intelligence” George Mason University, 1998. Web. 7 Jan. 2010.
  9. Joy, Bill, “Why the Future Doesn’t Need Us”, Wired Magazine, 2000.
  10. Kurzweil, Ray, The Singularity is Near: When Humans Transcend Biology. Viking Penguin, 2005.
  11. Matheny, Jason G., “Reducing the Risk of Human Extinction”, Risk Analysis, Volume 27 Issue 5, pp. 1335-1344, 2007.
  12. McCarthy, John, Marvin Minsky, Nathan Rochester, and Claude Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” Formal Reasoning Group Stanford University, 31 Aug. 1955. Web. 07 Jan. 2010.
  13. Omohundro, Stephen M., “The Basic AI Drives” Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
  14. Omohundro, Stephen M., “The Nature of Self-Improving Artificial Intelligence”, Self-Aware Systems, 21 Jan. 2008. Web. 07 Jan. 2010.
  15. Rawls, John, A Theory of Justice. New York: Belknap, 2005.
  16. Rayhawk, Steve, Anna Salamon, Tom McCabe, Michael Anissimov, and Rolf Nelson, “Changing the frame of AI futurism: From story-telling to heavy-tailed, high-dimensional probability distributions” Proceedings of the European Conference on Computing and Philosophy. Universitat Autonoma de Barcelona, Barcelona, Spain. 4 July 2009.
  17. Russell, Stuart J. & Norvig, Peter, Artificial Intelligence: A Modern Approach, 2nd ed., Pearson Education, 2003.
  18. Sandberg, Anders & Bostrom, Nick, “Whole Brain Emulation: A Roadmap”, Technical Report #2008-3, Future of Humanity Institute, Oxford University, 2008.
  19. Schmidhuber, Juergen, “Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements”, Adaptive Agents and Multi-Agent Systems II, LNCS 3394, pp. 1-23, Springer, 2005.
  20. Shulman, Carl M., “Arms Control and Intelligence Explosions” Proceedings of the European Conference on Computing and Philosophy. Universitat Autonoma de Barcelona, Barcelona, Spain. 4 July 2009.
  21. Vinge, Vernor, “The Coming Technological Singularity”, Whole Earth Review, New Whole Earth LLC, March 1993.
  22. Yudkowsky, Eliezer, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”, Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic, 2008, pp. 308-345.
  23. Yudkowsky, Eliezer, “Cognitive Biases Affecting Judgement of Existential Risk”, Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic, 2008, pp. 91-119.