The AI industry is racing toward a precipice.
The default consequence of the creation of artificial superintelligence (ASI) is human extinction.
Our survival depends on delaying the creation of ASI, beginning as soon as we can and continuing for as long as necessary.
For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence. Our technical and philosophical work helped found the field of AI alignment, and our researchers originated many of the theories and concepts central to today’s discussions of AI.
Our view
Recent rapid advances at the frontier of artificial intelligence have dramatically shortened estimates of when superintelligence will arrive. Technical progress on safety, alignment, and control has failed to keep up. Humanity does not understand the internal workings of present systems well enough to fully control them or robustly steer their behavior, let alone the far more powerful and complex systems expected in the coming years.
If ASI is developed and deployed any time soon, by any nation or group, via anything remotely resembling current methods, the most likely outcome is human extinction.
This is a bold claim, and we do not make it lightly. Many of the world’s experts (including some within the organizations at the cutting edge) estimate the risk of disaster to be greater than 50%. Yet progress toward ASI continues at a breakneck pace. The industry is racing forward; those actually building the systems of tomorrow cannot be relied upon to stop in time.
Our survival depends on some form of globally coordinated and collectively enforced moratorium on the development of ASI, beginning as soon as we can and lasting for as long as necessary.