Recent and Forthcoming Papers
Eliezer Yudkowsky (2017)
When should you think that you may be able to do something unusually well? When you’re trying to outperform in a given area, it’s important that you have a sober understanding of your relative competencies. The story only ends there, however, if you’re fortunate enough to live in an adequate civilization.
Eliezer Yudkowsky’s Inadequate Equilibria is a sharp and lively guidebook for anyone questioning when and how they can know better, and do better, than the status quo. Freely mixing debates on the foundations of rational decision-making with tips for everyday life, Yudkowsky explores the central question of when we can (and can’t) expect to spot systemic inefficiencies, and exploit them.
Eliezer Yudkowsky (2015)
What does it actually mean to be rational? Not Hollywood-style “rational,” where one rejects all human feeling to embrace Cold Hard Logic — real rationality, of the sort studied by psychologists, social scientists, and mathematicians. The kind of rationality where you make good decisions, even when it’s hard; where you reason well, even in the face of massive uncertainty.
In Rationality: From AI to Zombies, Eliezer Yudkowsky explains the findings of cognitive science, and the ideas of naturalistic philosophy, which help to motivate why MIRI’s research program exists.
Stuart Armstrong (2014)
What happens when machines become smarter than humans? Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.
Luke Muehlhauser (2013)
Sometime this century, machines will surpass human levels of intelligence and ability. This event—the “intelligence explosion”—will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.
Luminaries from Alan Turing and I. J. Good to Bill Joy and Stephen Hawking have warned us about this. Why do we think Hawking and company are right, and what can we do about it?
Facing the Intelligence Explosion is Muehlhauser’s attempt to answer these questions.
Robin Hanson and Eliezer Yudkowsky (2013)
In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.
The original debate took place in a long series of blog posts, which are collected here. This book also includes a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject, a summary of the debate written by Kaj Sotala, and a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.