June 2016 Newsletter

Filed under Newsletters.

Research updates: New paper: “Safely Interruptible Agents.” The paper will be presented at UAI-16 and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI’s press release. The paper has received (often hyperbolic) coverage from a number of press outlets, including Business… Read more »

New paper: “Safely interruptible agents”

Filed under Papers.

Google DeepMind Research Scientist Laurent Orseau and MIRI Research Associate Stuart Armstrong have written a new paper on error-tolerant agent designs, “Safely interruptible agents.” The paper is forthcoming at the 32nd Conference on Uncertainty in Artificial Intelligence. Abstract: Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally… Read more »
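As a rough intuition for what “safely interruptible” means in a reinforcement-learning setting, the sketch below shows an off-policy Q-learning agent whose actions are occasionally overridden by an external interruption. This is a toy illustration, not code from the paper: the corridor environment, the interruption rule, and every name in it (N_STATES, maybe_interrupt, interrupt_prob, and so on) are assumptions made here. The point it illustrates is that Q-learning’s update target takes a max over next actions rather than using the action the (possibly interrupted) policy actually takes next, so interruptions during learning need not bias the values the agent converges to.

```python
"""Toy illustration of interruptibility for an off-policy learner.

NOT code from Orseau & Armstrong's paper: the corridor environment, the
interruption rule, and every name here are assumptions made purely for
illustration.
"""
import random

random.seed(0)

N_STATES = 5          # corridor cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # step left / step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move along the corridor, reward 1 at the far end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def choose(state):
    """Epsilon-greedy choice from current Q estimates (ties broken at random)."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def maybe_interrupt(action, interrupt_prob=0.3):
    """External override standing in for an operator's 'stop' button:
    with some probability, force the agent to step left instead."""
    return -1 if random.random() < interrupt_prob else action

for episode in range(2000):
    state, done = 0, False
    while not done:
        taken = maybe_interrupt(choose(state))   # may differ from the intended action
        nxt, reward, done = step(state, taken)
        # Off-policy update: the target uses max over next actions, not the
        # (possibly interrupted) action actually taken next, so interruptions
        # do not push the learned values toward "avoid being interrupted".
        target = reward if done else reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, taken)] += ALPHA * (target - Q[(state, taken)])
        state = nxt

# Greedy policy over the non-terminal cells; it should still point right (+1).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

With these (arbitrary) settings, the greedy policy printed at the end should still point toward the goal in every non-terminal cell, despite the frequent forced-left interruptions during training. The paper itself gives the formal definitions and convergence results.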

May 2016 Newsletter

Filed under Newsletters.

Research updates: Two new papers split logical uncertainty into two distinct subproblems: “Uniform Coherence” and “Asymptotic Convergence in Online Learning with Unbounded Delays.” New at IAFF: An Approach to the Agent Simulates Predictor Problem; Games for Factoring Out Variables; Time Hierarchy Theorems for Distributional Estimation Problems. We will be presenting “The Value Learning Problem” at… Read more »

April 2016 Newsletter

Filed under Newsletters.

Research updates: A new paper: “Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents.” New at IAFF: What Does it Mean for Correct Operation to Rely on Transfer Learning?; Virtual Models of Virtual AIs in Virtual Worlds. General updates: We’re currently accepting applicants to two programs we’re running in June: our 2016 Summer Fellows… Read more »

New paper on bounded Löb and robust cooperation of bounded agents

Filed under Papers.

MIRI Research Fellow Andrew Critch has written a new paper on cooperation between software agents in the Prisoner’s Dilemma, available on arXiv: “Parametric bounded Löb’s theorem and robust cooperation of bounded agents.” The abstract reads: Löb’s theorem and Gödel’s theorem make predictions about the behavior of systems capable of self-reference with unbounded computational resources with… Read more »
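For context on the result being generalized (background stated here, not taken from the paper’s abstract): classical Löb’s theorem says that a theory T extending Peano Arithmetic, with provability predicate □_T, can only prove “provability of P implies P” when it can already prove P outright.

```latex
% Classical Löb's theorem (background; the paper proves a bounded, parametric analogue).
% For a theory T extending PA with provability predicate \Box_T:
\[
  T \vdash \bigl(\Box_T \ulcorner P \urcorner \rightarrow P\bigr)
  \quad\Longrightarrow\quad
  T \vdash P .
\]
% Equivalently, as an axiom schema of the provability logic GL:
\[
  \Box\bigl(\Box P \rightarrow P\bigr) \rightarrow \Box P .
\]
```

Roughly speaking, the paper’s bounded version replaces □ with a provability predicate restricted by proof length, which is what makes the result applicable to agents with limited computational resources; see the arXiv paper linked above for the precise statement.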

Announcing a new colloquium series and fellows program

Filed under News.

The Machine Intelligence Research Institute is accepting applicants to two summer programs: a three-week AI robustness and reliability colloquium series (co-run with the Oxford Future of Humanity Institute), and a two-week fellows program focused on helping new researchers contribute to MIRI’s technical agenda (co-run with the Center for Applied Rationality). The Colloquium Series on Robust… Read more »

Seeking Research Fellows in Type Theory and Machine Self-Reference

Filed under News.

The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive… Read more »
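The posting’s first step, implementing a strongly typed language within that very language, starts from the idea of representing a typed object language inside a host language. As a very loose illustration of that flavor only (this is not MIRI’s project code; the AST, typechecker, and all names below are invented for the example), here is a minimal sketch:

```python
"""Toy deep embedding of a tiny typed expression language in a host language.

A loose illustration of representing a typed object language inside another
language; not MIRI's project code, and far short of the self-referential
setting the posting describes.
"""
from dataclasses import dataclass
from typing import Union

# Object-language types
BOOL, INT = "Bool", "Int"

@dataclass
class Lit:
    value: Union[bool, int]

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class If:
    cond: "Expr"
    then: "Expr"
    other: "Expr"

Expr = Union[Lit, Add, If]

def typecheck(e: Expr) -> str:
    """Return the object-language type of e, or raise TypeError."""
    if isinstance(e, Lit):
        return BOOL if isinstance(e.value, bool) else INT
    if isinstance(e, Add):
        if typecheck(e.left) == INT and typecheck(e.right) == INT:
            return INT
        raise TypeError("Add expects Int operands")
    if isinstance(e, If):
        if typecheck(e.cond) != BOOL:
            raise TypeError("If condition must be Bool")
        t = typecheck(e.then)
        if t != typecheck(e.other):
            raise TypeError("If branches must have the same type")
        return t
    raise TypeError(f"unknown expression {e!r}")

def evaluate(e: Expr):
    """Evaluate a well-typed expression."""
    if isinstance(e, Lit):
        return e.value
    if isinstance(e, Add):
        return evaluate(e.left) + evaluate(e.right)
    if isinstance(e, If):
        return evaluate(e.then) if evaluate(e.cond) else evaluate(e.other)

prog = If(Lit(True), Add(Lit(1), Lit(2)), Lit(0))
print(typecheck(prog), evaluate(prog))   # Int 3
```

The project described in the posting concerns a language rich enough to represent, and prove theorems about, programs written in that same language; a toy embedding like this does not attempt that, but it shows the kind of object-language/host-language distinction involved.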

March 2016 Newsletter

Filed under Newsletters.

Research updates: A new paper: “Defining Human Values for Value Learners.” New at IAFF: Analysis of Algorithms and Partial Algorithms; Naturalistic Logical Updates; Notes from a Conversation on Act-Based and Goal-Directed Systems; Toy Model: Convergent Instrumental Goals. New at AI Impacts: Global Computing Capacity. A revised version of “The Value Learning Problem” (pdf) has been… Read more »