MIRI Updates

Announcing the new AI Alignment Forum

This is a guest post by Oliver Habryka, lead developer for LessWrong. Our gratitude to the LessWrong team for the hard work they’ve put into developing this resource, and our congratulations on today’s launch! I am happy to announce that...

Embedded Agents

Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know. (This is part...

The Rocket Alignment Problem

The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start.   (Somewhere in a not-very-near neighboring world, where science took a very different course…)   ALFONSO: Hello, Beth. I’ve noticed a lot of...

September 2018 Newsletter

Summer MIRI Updates: Buck Shlegeris and Ben Weinstein-Raun have joined the MIRI team! Additionally, we ran a successful internship program over the summer, and we’re co-running a new engineer-oriented workshop series with CFAR. On the fundraising side, we received a...

Summer MIRI Updates

In our last major updates—our 2017 strategic update and fundraiser posts—we said that our current focus is on technical research and executing our biggest-ever hiring push. Our supporters responded with an incredible show of support at the end of the...

August 2018 Newsletter

Updates New posts to the new AI Alignment Forum: Buridan’s Ass in Coordination Games; Probability is Real, and Value is Complex; Safely and Usefully Spectating on AIs Optimizing Over Toy Worlds MIRI Research Associate Vanessa Kosoy wins a $7500 AI...
