April 2017 Newsletter

Filed under Newsletters.

Our newest publication, “Cheating Death in Damascus,” makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning. In other news, our research team is expanding! Sam Eisenstat and Marcello Herreshoff, both previously at Google, join MIRI this month. Research updates New at IAFF: “Formal Open Problem in Decision…

Two new researchers join MIRI

Filed under News.

MIRI’s research team is growing! I’m happy to announce that we’ve hired two new research fellows to contribute to our work on AI alignment: Sam Eisenstat and Marcello Herreshoff, both from Google. Sam Eisenstat studied pure mathematics at the University of Waterloo, where he carried out research in mathematical logic. His previous work was on…

New paper: “Cheating Death in Damascus”

Filed under Papers.

MIRI Executive Director Nate Soares and Rutgers/UIUC decision theorist Ben Levinstein have a new paper out introducing functional decision theory (FDT), MIRI’s proposal for a general-purpose decision theory. The paper, titled “Cheating Death in Damascus,” considers a wide range of decision problems. In every case, Soares and Levinstein show that FDT outperforms all earlier theories…

March 2017 Newsletter

Filed under Newsletters.

Research updates New at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners’ Dilemma; Generalizing Foundations of Decision Theory New at AI Impacts: Changes in Funding in the AI Safety Field; Funding of AI Research MIRI Research Fellow Andrew Critch has started a two-year stint at UC Berkeley’s Center for Human-Compatible…

February 2017 Newsletter

Filed under Newsletters.

Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research. Research updates A new paper: “Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making” New…

CHCAI/MIRI research internship in AI safety

Filed under News.

We’re looking for talented, driven, and ambitious technical researchers for a summer research internship with the Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute (MIRI). About the research: CHCAI is a research center based at UC Berkeley with PIs including Stuart Russell, Pieter Abbeel, and Anca Dragan. CHCAI describes its goal as…

New paper: “Toward negotiable reinforcement learning”

Filed under Papers.

MIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in “Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making.” Abstract: Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs…

January 2017 Newsletter

Filed under Newsletters.

Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an IEEE recommendations document with a section on artificial general intelligence that we helped draft. Research updates…