Get Involved With Our Research
Get started with introductions to MIRI’s research agenda, read about career opportunities, or collaborate with MIRI researchers.
Stay Up to Date
Join our newsletter subscribers.
Subscribe to our RSS feed.
Follow us on social networks.
Follow @MIRIBerkeley
If AI progress continues uninterrupted, AI systems will eventually surpass humans in intelligence. Most leading AI researchers expect this to happen sometime this century. MIRI’s goal is to get a head start on the technical obstacles to making smarter-than-human AI safe and robust.
For an in-depth introduction to some of the problems we are working on, see our “Agent Foundations” research agenda and “Embedded Agency.”
We also have an Alignment Research Field Guide for people considering getting started in the field; John Wentworth’s guide for independent researchers is another good reference for newcomers.
MIRI is seeking researchers to do innovative theoretical work in CS, logic, and mathematics, developing promising new ideas for how to align advanced AI systems. If you have a strong technical background, consider filling out our general-purpose application form or participating in the AI alignment discussion on the AI Alignment Forum and LessWrong.
Due to the COVID-19 pandemic and shifts in our research priorities and strategy, as of February 2021 we’re doing less hiring than in recent years. However, we’re still making new hires at a slower pace, and we remain eager to talk to new applicants.
Participate in MIRI’s Research
MIRIx Events
Research Forum
Research Workshops
AI Risk for Computer Scientists Workshops

If you are interested in decision theory, reasoning under uncertainty, or other areas discussed in our research guide, you’re welcome to get in touch with us through our application form.
The quickest ways to get involved are:
MIRIx Events: From casual dinner conversations about MIRI papers to multi-day research workshops, MIRIx events are independently organized, MIRI-funded discussion groups. Anyone can become a MIRIx event organizer, and in some cases we can help by answering questions, leading discussion, or making recommendations for your event.
Forum: The online AI Alignment Forum hosts active discussions and unpolished, preliminary results related to long-term AI alignment research. If you’re a new visitor, you can start discussions by submitting links to relevant content you’ve written elsewhere.
Workshops: We host occasional multi-day math research workshops in the US and Europe. Past workshops have resulted in papers about probabilistic logic, program equilibrium, and obstacles to consistent self-reflection. We’ll notify accepted workshop attendees when relevant workshops are scheduled to take place, which can be many months down the line.
If you are interested in any of the above programs or opportunities, please apply below so we can chat with you about your specific interests and availability.
Apply to participate in MIRI’s research
AI Risk for Computer Scientists Workshops: We’re running a series of four-day workshops to provide a gathering place for discussion and to introduce some thinking tools for computer scientists interested in reasoning about AI risk. The workshop material is a mixture of human rationality content and a variety of topics related to AI risk, including forecasting, different ideas about where the technical problems lie, and various potential research paths.
Find out more about AI Risk for Computer Scientists workshops