MIRI Information

In 2015, we put together a series of blog posts about our past accomplishments, our plans for the future, and our general research methodology. We followed up in 2016 with an overview post, “Why AI Safety?”. This page summarizes the main topics we covered, with links to the fuller discussions:


1. Why AI Safety?

An introduction to AI safety research, highlighting the distinguishing features of our research methodology (in contrast to the methodologies of other research groups) and our goals for the wider field.


2. An Astounding Year

AI risk is becoming a major topic of discussion. Following a January conference that brought academic and industry AI leaders together with MIRI and related organizations, we’ve seen the first long-term AI safety grants competition and the first safety and ethics sessions at AAAI, IJCAI, and NIPS. Meanwhile, MIRI has published its technical agenda and run a series of workshops, allowing us to identify several promising potential hires and an array of open problems for them to work on. This puts us in an excellent position to expand.


3. Four Background Claims

Four core claims underlie MIRI’s concerns about long-term AI risk: that humans are generally intelligent (but present-day AI systems are not), that AI systems could become much more intelligent than humans, that smarter-than-human AI systems would likely have a dominant advantage over humans, and that such systems would not be beneficial by default.


4. MIRI’s Approach

MIRI specializes in technical problems related to making smarter-than-human de novo AI software systems robust and beneficial. Since these problems are large and complex, we try to break them into parts and ask: “What parts of the problem would we still be unable to solve even if the challenge were far simpler?” In the past, building mathematical models in simplified settings has been useful for laying theoretical foundations under research programs decades before practical algorithms were available.


5. When AI Accelerates AI

The previous two posts argued that AI safety is critically important, and that we are likely to be able to make early technical progress. This post lays out a few additional reasons to get started early: plausible scenarios in which rates of AI progress accelerate or are otherwise unpredictable.


6. Assessing Our Past and Potential Impact

How can someone who lacks the technical knowledge to directly assess our research agenda and other recent publications evaluate MIRI’s ability to make important technical progress and grow the field?


7. What Sets MIRI Apart?

What can MIRI do that other teams can’t? Why support us when there are so many larger industry groups doing AI research? In short, because we’re doing foundational AI alignment research that nobody else is doing yet.


8. AI and Effective Altruism

MIRI focuses on long-run AI risk because we consider catastrophic risk scenarios probable, and because we believe the most important and tractable lines of research are currently neglected. This basic perspective has close parallels to ideas promoted by the burgeoning effective altruism community.


Other Resources