MIRI Updates

In our last strategy update (August 2016), Nate wrote that MIRI’s priorities were to make progress on our agent foundations agenda and begin work on our new “Alignment for Advanced Machine Learning Systems” agenda, to collaborate and communicate with other...

The Machine Intelligence Research Institute is looking for highly capable software engineers to directly support our AI alignment research efforts, with a focus on projects related to machine learning. We’re seeking engineers with strong programming skills who are passionate about...

I recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators’ goals. The talk was inspired by “AI Alignment: Why It’s Hard, and Where to Start,” and serves as an introduction to the...

Nate Soares’ recent decision theory paper with Ben Levinstein, “Cheating Death in Damascus,” prompted some valuable questions and comments from an acquaintance (anonymized here). I’ve put together edited excerpts from the commenter’s email below, with Nate’s responses. The discussion concerns...

Our newest publication, “Cheating Death in Damascus,” makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning. In other news, our research team is expanding! Sam Eisenstat and Marcello Herreshoff, both previously...

MIRI’s research team is growing! I’m happy to announce that we’ve hired two new research fellows to contribute to our work on AI alignment: Sam Eisenstat and Marcello Herreshoff, both from Google. Sam Eisenstat studied pure mathematics at the...
