If AI progress continues uninterrupted, AI systems will eventually surpass humans in intelligence. Most leading AI researchers expect this to happen sometime this century. MIRI’s goal is to get a head start on the technical obstacles to making smarter-than-human AI safe and robust.
We also have an Alignment Research Field Guide for people considering entering the field. John Wentworth’s guide for independent researchers is another good starting reference.
MIRI is seeking researchers to do innovative theoretical work in CS, logic, and mathematics, coming up with promising new ideas for how to align advanced AI systems. If you have a strong technical background, consider filling out our general-purpose application form or participating in the AI alignment discussion on the AI Alignment Forum and LessWrong.
Due to shifts in our research priorities and strategy, as of February 2021 we are doing less hiring than in recent years. However, we are still making new hires, at a slower pace.