Recent and Forthcoming Papers
E Yudkowsky (2017)
When should you think that you may be able to do something unusually well? When you’re trying to outperform in a given area, it’s important that you have a sober understanding of your relative competencies. The story only ends there, however, if you’re fortunate enough to live in an adequate civilization.
Eliezer Yudkowsky’s Inadequate Equilibria is a sharp and lively guidebook for anyone questioning when and how they can know better, and do better, than the status quo. Freely mixing debates on the foundations of rational decision-making with tips for everyday life, Yudkowsky explores the central question of when we can (and can’t) expect to spot systemic inefficiencies, and exploit them.
E Yudkowsky (2015)
When human brains try to do things, they can run into some very strange problems. Self-deception, confirmation bias, magical thinking—it sometimes seems our ingenuity is boundless when it comes to shooting ourselves in the foot.
Map and Territory and the rest of the Rationality: From AI to Zombies series ask what a “martial art” of rationality would look like. In this series, Eliezer Yudkowsky explains the findings of cognitive science, and the ideas of naturalistic philosophy, that help provide a useful background for understanding MIRI’s research and for approaching ambitious problems in general.
S Armstrong (2014)
What happens when machines become smarter than humans? Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.
L Muehlhauser (2013)
Sometime this century, machines may surpass human levels of intelligence and ability. This event—the “intelligence explosion”—would be among the most important events in our history, and navigating it wisely would be among the most important things we could ever do.
Luminaries from Alan Turing and I. J. Good to Bill Joy and Stephen Hawking have warned us about this. Why do we think Hawking and company are right, and what can we do about it?
Facing the Intelligence Explosion is Muehlhauser’s attempt to answer these questions.
R Hanson and E Yudkowsky (2013)
In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.
The original debate took place in a long series of blog posts, which are collected here. This book also includes a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject, a summary of the debate written by Kaj Sotala, and a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.