New paper: “Cheating Death in Damascus”

Filed under Papers.

MIRI Executive Director Nate Soares and Rutgers/UIUC decision theorist Ben Levinstein have a new paper out introducing functional decision theory (FDT), MIRI’s proposal for a general-purpose decision theory. The paper, titled “Cheating Death in Damascus,” considers a wide range of decision problems. In every case, Soares and Levinstein show that FDT outperforms the leading rival theories, causal and evidential decision theory… Read more »
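
As a hedged aside (an illustration of mine, not the paper’s own presentation), the flavor of the comparison shows up in a toy version of the title problem, Death in Damascus, in which Death perfectly predicts where the agent will be. With made-up stakes (dying costs 1,000,000 utility and the trip to Aleppo costs 1,000), and writing $a$ for the agent’s act and $p$ for Death’s prediction:

$$U(\text{stay}) = -1{,}000{,}000, \qquad U(\text{flee}) = -1{,}000{,}000 - 1{,}000, \qquad \text{given } p = a.$$

Because FDT treats $p$ and $a$ as outputs of the same underlying decision function, it compares these two quantities directly, stays, and saves the fare; CDT holds Death’s location fixed while varying the act, so it always prefers “the other city” and never settles on a stable answer.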

March 2017 Newsletter

Filed under Newsletters.

Research updates: New at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners’ Dilemma; and Generalizing Foundations of Decision Theory. New at AI Impacts: Changes in Funding in the AI Safety Field; and Funding of AI Research. MIRI Research Fellow Andrew Critch has started a two-year stint at UC Berkeley’s Center for Human-Compatible… Read more »

February 2017 Newsletter

Filed under Newsletters.

Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research. Research updates: A new paper, “Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making.” New… Read more »

CHCAI/MIRI research internship in AI safety

Filed under News.

We’re looking for talented, driven, and ambitious technical researchers for a summer research internship with the Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute (MIRI). About the research: CHCAI is a research center based at UC Berkeley with PIs including Stuart Russell, Pieter Abbeel, and Anca Dragan. CHCAI describes its goal as… Read more »

New paper: “Toward negotiable reinforcement learning”

Filed under Papers.

MIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in “Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making.” Abstract: Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs… Read more »
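
For a rough sense of the setup (a schematic of mine, with notation not taken from the paper): suppose a machine serves players 1 and 2, who hold different beliefs $P_1$ and $P_2$ about how observations $o_{1:t}$ will unfold and have utility functions $u_1$ and $u_2$. A Pareto-optimal policy can then be written as one maximizing a weighted sum of expected utilities, each taken under that player’s own beliefs:

$$\pi^{*} \in \arg\max_{\pi} \; w_1\, \mathbb{E}^{\pi}_{P_1}[u_1] + w_2\, \mathbb{E}^{\pi}_{P_2}[u_2].$$

A consequence the paper develops is that the effective weights then shift over time toward whichever player’s beliefs better predicted the actual observations, roughly $w_i^{t} \propto w_i^{0}\, P_i(o_{1:t})$.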

January 2017 Newsletter

Filed under Newsletters.

Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an IEEE recommendations document with a section on artificial general intelligence that we helped draft. Research updates… Read more »

New paper: “Optimal polynomial-time estimators”

Filed under Papers.

MIRI Research Associate Vadim Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract: The concept of an “approximation algorithm” is usually only applied to optimization problems, since in optimization problems the performance of the algorithm on any given input is a continuous parameter… Read more »
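
Very roughly (my gloss, suppressing the paper’s precise error regimes and parameters): on a decision or counting problem, an algorithm’s answer to a single input is simply right or wrong, so the usual approximation notion has no purchase. The framework instead measures performance in expectation over a family of input distributions $\{\mu_k\}$: a polynomial-time algorithm $A$ estimating a function $f$ is optimal if no polynomial-time rival $B$ achieves asymptotically lower mean squared error,

$$\mathbb{E}_{x \sim \mu_k}\!\left[(A(x) - f(x))^{2}\right] \;\le\; \mathbb{E}_{x \sim \mu_k}\!\left[(B(x) - f(x))^{2}\right] + \varepsilon(k),$$

where $\varepsilon(k)$ is an error term that becomes negligible as $k$ grows.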

December 2016 Newsletter

Filed under Newsletters.

We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up why he’s donating to MIRI this year. (Donation page.) Research updates: New at IAFF:… Read more »