If AI progress continues uninterrupted, AI systems will eventually surpass humans in intelligence. Most leading AI researchers expect this to happen sometime this century. MIRI’s goal is to get a head start on the technical obstacles to making smarter-than-human AI safe and robust.
We also have an Alignment Research Field Guide for people considering entering the field; John Wentworth’s guide for independent researchers is another good starting point.
MIRI is seeking researchers to do innovative theoretical work in CS, logic, and mathematics, coming up with promising new ideas for how to align advanced AI systems. If you have a strong technical background, consider filling out our general-purpose application form or participating in the AI alignment discussion on the AI Alignment Forum and LessWrong.
Due to the COVID-19 pandemic and our shifts in research priorities and strategy, as of February 2021 we’re doing less hiring than in recent years. However, we’re still continuing to make new hires (at a slower pace), and we continue to be eager to talk to new applicants.
The quickest ways to get involved are:
MIRIx Events: From casual dinner conversations about MIRI papers to multi-day research workshops, MIRIx events are independently-organized MIRI-funded discussion groups. Anyone can become a MIRIx event organizer, and in some cases we’ll be able to help answer questions, lead discussion, or give recommendations regarding MIRIx events.
Forum: The online AI Alignment Research Forum hosts active discussions and unpolished, preliminary results related to long-term AI alignment research. New visitors can start discussions by submitting links to relevant external content.
Workshops: We host occasional multi-day math research workshops in the US and Europe. Past workshops have resulted in papers about probabilistic logic, program equilibrium, and obstacles to consistent self-reflection. We’ll notify accepted workshop attendees when relevant workshops are scheduled to take place, which can be many months down the line.
If you are interested in any of the above programs or opportunities, please apply below so we can chat with you about your specific interests and availability.

Apply to participate in MIRI’s research
AI Risk for Computer Scientists workshops: We’re running a series of four-day workshops to provide a gathering place for discussion and to introduce thinking tools for computer scientists interested in reasoning about AI risk. The workshop material mixes human rationality content with a variety of topics related to AI risk, including forecasting, different views on where the key technical problems lie, and various potential research paths.