MIRI Updates

June 2017 Newsletter

Research updates. A new AI Impacts paper: “When Will AI Exceed Human Performance?” News coverage at Digital Trends and MIT Technology Review. New at IAFF: Cooperative Oracles; Jessica Taylor on the AAMLS Agenda; An Approach to Logically Updateless Decisions. Our...

May 2017 Newsletter

Research updates. New at IAFF: The Ubiquitous Converse Lawvere Problem; Two Major Obstacles for Logical Inductor Decision Theory; Generalizing Foundations of Decision Theory II. New at AI Impacts: Guide to Pages on AI Timeline Predictions. “Decisions Are For Making Bad...

2017 Updates and Strategy

In our last strategy update (August 2016), Nate wrote that MIRI’s priorities were to make progress on our agent foundations agenda and begin work on our new “Alignment for Advanced Machine Learning Systems” agenda, to collaborate and communicate with other...

Software Engineer Internship / Staff Openings

The Machine Intelligence Research Institute is looking for highly capable software engineers to directly support our AI alignment research efforts, with a focus on projects related to machine learning. We’re seeking engineers with strong programming skills who are passionate about...

Ensuring smarter-than-human intelligence has a positive outcome

I recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators’ goals. The talk was inspired by “AI Alignment: Why It’s Hard, and Where to Start,” and serves as an introduction to the...

Decisions are for making bad outcomes inconsistent

Nate Soares’ recent decision theory paper with Ben Levinstein, “Cheating Death in Damascus,” prompted some valuable questions and comments from an acquaintance (anonymized here). I’ve put together edited excerpts from the commenter’s email below, with Nate’s responses. The discussion concerns...
