I wrote a short profile of MIRI for a forthcoming book on effective altruism. It leaves out many important details, but it covers the key points pretty succinctly:
The Machine Intelligence Research Institute (MIRI) was founded in 2000 on the premise that creating smarter-than-human artificial intelligence with a positive impact — “Friendly AI” — might be a particularly efficient way to do as much good as possible.
First, because future people vastly outnumber presently existing people, we think that “From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” (See Nick Beckstead’s On the Overwhelming Importance of Shaping the Far Future.)
Second, as an empirical matter, we think that smarter-than-human AI is humanity’s most significant point of leverage on that “general trajectory along which our descendants develop.” If we handle advanced AI wisely, it could produce tremendous goods which endure for billions of years. If we handle advanced AI poorly, it could render humanity extinct. No other future development has more upside or downside. (See Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies.)
Third, we think that Friendly AI research is tractable, urgent, and uncrowded.
Tractable: Our staff researchers and visiting workshop participants tackle open problems in Friendly AI theory, such as: How can we get an AI to preserve its original goals even as it learns new things and modifies its own code? How do we load desirable goals into a self-modifying AI? How do we ensure that advanced AIs will cooperate with each other and with modified versions of themselves? This work is currently at a theoretical stage, but we are making clear conceptual progress, and growing a new community of researchers devoted to solving these problems.
Urgent: Surveys of AI scientists, as well as our own estimates, suggest that smarter-than-human AI will likely be invented in the second half of the 21st century, if not sooner. Unfortunately, mathematical challenges such as those we need to solve to build Friendly AI often require several decades of research to overcome, with each new result building on the advances that came before. Moreover, because the invention of smarter-than-human AI is so difficult to predict, it may arrive with surprising swiftness, leaving us with little time to prepare.
Uncrowded: Very few researchers, perhaps fewer than five worldwide, are explicitly devoted to full-time Friendly AI research.
The overwhelming power of machine superintelligence will reshape our world, dominating other causal factors. Our intended altruistic effects on the vast majority of beings who will ever live must therefore largely reach them via the technical design of the first self-improving smarter-than-human AIs. Many ongoing efforts — on behalf of better altruism, better reasoning, better global coordination, etc. — will play a role in this story, but we think it is crucial to also directly address the core challenge: the design of stably self-improving AIs with desirable goals. Failing to solve that problem would render humanity's other efforts moot.
If our mission appeals to you, you can either fund our research or get involved in other ways.