If AI progress continues uninterrupted, AI systems will eventually surpass humans in intelligence. Most leading AI researchers expect this to happen sometime this century. MIRI’s goal is to get a head start on the technical obstacles to making smarter-than-human AI safe and robust.
We’re hiring! MIRI is looking for software engineers to directly contribute to our work on the AI alignment problem. We’re seeking exceptional programmers of all backgrounds who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work.
Engineering roles at MIRI involve working closely with our researchers to: create and run novel coding experiments and projects; build development infrastructure; and rapidly prototype, implement, and test AI alignment ideas.
We’re also seeking researchers to do innovative theoretical work in CS, logic, and mathematics, coming up with promising new ideas for how to align advanced AI systems. If you have a track record of high-quality technical research, consider filling out our general-purpose application form or submitting links to the AI Alignment Forum.
The quickest ways to get involved are:
MIRIx Events: From casual dinner conversations about MIRI papers to multi-day research workshops, MIRIx events are independently organized, MIRI-funded discussion groups. Anyone can become a MIRIx event organizer, and in some cases we'll be able to help answer questions, lead discussions, or give recommendations regarding MIRIx events.
Forum: The online Agent Foundations Research Forum hosts active discussions and unpolished, preliminary results related to long-term AI alignment research. If you're a new visitor, you can start discussions by submitting links to external work you've produced.
Workshops: We host occasional multi-day math research workshops in the US and Europe. Past workshops have resulted in papers about probabilistic logic, program equilibrium, and obstacles to consistent self-reflection. We’ll notify accepted workshop attendees when relevant workshops are scheduled to take place, which can be many months down the line.
If you are interested in any of the above programs or opportunities, please apply below so we can chat with you about your specific interests and availability.

Apply to participate in MIRI's research
AI Risk for Computer Scientists Workshops: We're running a series of four-day workshops to provide a gathering place for discussion and to introduce some thinking tools for computer scientists interested in reasoning about AI risk. The material at these workshops is a mixture of human rationality content and a variety of topics related to AI risk, including forecasting, differing views on where the technical problems lie, and various potential research paths.