MIRI Updates

Advise MIRI with Your Domain-Specific Expertise

MIRI currently has a few dozen volunteer advisors on a wide range of subjects, but we need more! If you’d like to help MIRI pursue its mission more efficiently, please sign up to be a MIRI advisor. If you sign...

Five theses, two lemmas, and a couple of strategic implications

MIRI’s primary concern about self-improving AI isn’t so much that it might be created by ‘bad’ rather than ‘good’ actors in the global sphere; rather, most of our concern is with remedying the situation in which no one knows...

AGI Impact Experts and Friendly AI Experts

MIRI’s mission is “to ensure that the creation of smarter-than-human intelligence has a positive impact.” A central strategy for achieving this mission is to find and train what one might call “AGI impact experts” and “Friendly AI experts.” AGI impact...

“Intelligence Explosion Microeconomics” Released

MIRI’s new, 93-page technical report by Eliezer Yudkowsky, “Intelligence Explosion Microeconomics,” has now been released. The report explains one of the open problems of our research program. Here’s the abstract: I. J. Good’s thesis of the ‘intelligence explosion’ is that...

“Singularity Hypotheses” Published

Singularity Hypotheses: A Scientific and Philosophical Assessment has now been published by Springer, in hardcover and ebook forms. The book contains 20 chapters about the prospect of machine superintelligence, including 4 chapters by MIRI researchers and research associates. “Intelligence Explosion:...

Altair’s Timeless Decision Theory Paper Published

During his time as a research fellow for MIRI, Alex Altair wrote a paper on Timeless Decision Theory (TDT) that has now been published: “A Comparison of Decision Algorithms on Newcomblike Problems.” Altair’s paper is both more succinct and also...
