In 2015, we put together a series of blog posts about our past accomplishments, our plans for the future, and our general research methodology. We also followed up with an overview post in 2016 on “Why AI Safety?”. This page summarizes the main topics we wrote about, with links to fuller discussions:
An introduction to AI safety research, highlighting the distinguishing features of our research methodology (in contrast to the methodologies of other research groups) and our goals for the wider field.
AI risk is becoming a major topic of discussion. After a January conference bringing academic and industry AI leaders together with MIRI and related organizations, we’ve seen the first long-term AI safety grants competition, and the first safety and ethics sessions at AAAI, IJCAI, and NIPS. Meanwhile, MIRI has published its technical agenda and run a series of workshops, allowing us to identify several promising potential hires and an array of open problems for them to work on. This puts us in an excellent position to expand.
Four core claims underlie MIRI’s concerns about long-term AI risk: that humans have general intelligence (while present-day AI systems do not), that AI systems could become much more intelligent than humans, that smarter-than-human AI systems would likely gain a dominant advantage over humans, and that such systems would not be beneficial by default.
MIRI specializes in technical problems related to making smarter-than-human de novo AI software systems robust and beneficial. Since these problems are large and complex, we attempt to break them into parts and ask “What parts of the problem would we still be unable to solve even if the challenge were far simpler?” In the past, building mathematical models in simplified settings has been useful for laying theoretical foundations under research programs decades before practical algorithms were available.
The previous two posts argued that AI safety is critically important, and that we are likely to be able to make early technical progress. This post lays out a few additional reasons to get started early: plausible scenarios in which rates of AI progress accelerate or are otherwise unpredictable.
How can someone who lacks the technical background to directly assess our research agenda and other recent publications evaluate MIRI’s ability to make important technical progress and to grow the field?
What can MIRI do that other teams can’t? Why support us when there are so many larger industry groups doing AI research? In short, because we’re doing foundational AI alignment research that nobody else is doing yet.
MIRI focuses on long-run AI risk because we consider catastrophic risk scenarios probable, and because we believe the most important and tractable lines of research are currently neglected. This basic perspective has close parallels to ideas promoted by the burgeoning effective altruism community.
- About MIRI
- Frequently Asked Questions
- Using Machine Learning to Address AI Risk
- Our Agent Foundations Technical Agenda and A Guide to Our Research Program