MIRI Updates
October 2018 Newsletter
The AI Alignment Forum has left beta! Dovetailing with the launch, MIRI researchers Scott Garrabrant and Abram Demski will be releasing a new sequence introducing our research over the coming week, beginning here: Embedded Agents. (Shorter illustrated version here.) Other...
Announcing the new AI Alignment Forum
This is a guest post by Oliver Habryka, lead developer for LessWrong. Our gratitude to the LessWrong team for the hard work they’ve put into developing this resource, and our congratulations on today’s launch! I am happy to announce that...
Embedded Agents
Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know. (This is part...
The Rocket Alignment Problem
The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (Somewhere in a not-very-near neighboring world, where science took a very different course…) ALFONSO: Hello, Beth. I’ve noticed a lot of...
September 2018 Newsletter
Summer MIRI Updates: Buck Shlegeris and Ben Weinstein-Raun have joined the MIRI team! Additionally, we ran a successful internship program over the summer, and we’re co-running a new engineer-oriented workshop series with CFAR. On the fundraising side, we received a...
Summer MIRI Updates
In our last major updates—our 2017 strategic update and fundraiser posts—we said that our current focus is on technical research and executing our biggest-ever hiring push. Our supporters responded with an incredible show of support at the end of the...