If AI progress continues uninterrupted, AI systems will eventually surpass humans in intelligence. Most leading AI researchers expect this to happen sometime this century. MIRI’s goal is to get a head start on the technical obstacles to making smarter-than-human AI safe and robust.
We also have an Alignment Research Field Guide for people considering entering the field. John Wentworth’s guide for independent researchers is another good starting reference.
MIRI is seeking researchers to do innovative theoretical work in computer science, logic, and mathematics, developing promising new approaches to aligning advanced AI systems. If you have a strong technical background, consider joining the AI alignment discussion on the AI Alignment Forum and LessWrong.
We are also looking to expand our Communications and Operations teams.
For a complete list of all current open positions, visit the Careers page.