MIRI Updates

New paper: “Alignment for advanced machine learning systems”

MIRI’s research to date has focused on the problems that we laid out in our late 2014 research agenda, and in particular on formalizing optimal reasoning for bounded, reflective decision-theoretic agents embedded in their environment. Our research team has since...

Submission to the OSTP on AI outcomes

The White House Office of Science and Technology Policy recently put out a request for information on “(1) The legal and governance implications of AI; (2) the use of AI for public good; (3) the safety and control issues for...

July 2016 Newsletter

Research updates A new paper: “A Formal Solution to the Grain of Truth Problem.” The paper was presented at UAI-16, and describes the first general reduction of game-theoretic reasoning to expected utility maximization. Participants in MIRI’s recently-concluded Colloquium Series on...

New paper: “A formal solution to the grain of truth problem”

Future of Humanity Institute Research Fellow Jan Leike and MIRI Research Fellows Jessica Taylor and Benya Fallenstein have just presented new results at UAI 2016 that resolve a longstanding open problem in game theory: “A formal solution to the grain...

June 2016 Newsletter

Research updates New paper: “Safely Interruptible Agents.” The paper will be presented at UAI-16, and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI’s press release....

New paper: “Safely interruptible agents”

Google DeepMind Research Scientist Laurent Orseau and MIRI Research Associate Stuart Armstrong have written a new paper on error-tolerant agent designs, “Safely interruptible agents.” The paper is forthcoming at the 32nd Conference on Uncertainty in Artificial Intelligence. Abstract: Reinforcement learning...
