MIRI Updates
March 2017 Newsletter
Research updates. New at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners’ Dilemma; Generalizing Foundations of Decision Theory. New at AI Impacts: Changes in Funding in the AI Safety Field; Funding of AI Research. MIRI...
Using machine learning to address AI risk
At the EA Global 2016 conference, I gave a talk on “Using Machine Learning to Address AI Risk”: It is plausible that future artificial general intelligence systems will share many qualities with present-day machine learning systems. If so,...
February 2017 Newsletter
Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research. Research updates...
CHCAI/MIRI research internship in AI safety
We’re looking for talented, driven, and ambitious technical researchers for a summer research internship with the Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute (MIRI). About the research: CHCAI is a research center based at UC Berkeley...
New paper: “Toward negotiable reinforcement learning”
MIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in “Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making.” Abstract: Existing multi-objective reinforcement learning (MORL) algorithms do not account for...
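For readers unfamiliar with the multi-objective setting, here is a minimal sketch of Pareto-optimal decision-making via linear scalarization, the standard textbook approach that negotiable RL builds on. The payoff table, action names, and weight values are hypothetical illustrations, not taken from Critch’s paper.

```python
# Illustrative sketch (not the paper's algorithm): linear scalarization of
# two principals' rewards in a one-step decision problem. For strictly
# positive weights, any action maximizing the weighted sum of the two
# objectives is Pareto optimal.

# Hypothetical payoffs: rewards[action] = (reward to principal A, reward to principal B)
rewards = {
    "action_1": (3.0, 0.0),
    "action_2": (2.0, 2.0),
    "action_3": (0.0, 3.0),
}

def best_action(weight_a: float) -> str:
    """Return the action maximizing weight_a * r_A + (1 - weight_a) * r_B."""
    return max(
        rewards,
        key=lambda a: weight_a * rewards[a][0] + (1 - weight_a) * rewards[a][1],
    )

# Sweeping the weight traces out different Pareto-optimal choices; shifting
# the weight over time gestures at the "shifting priorities" in the title.
for w in (0.9, 0.5, 0.1):
    print(f"weight on A = {w}: choose {best_action(w)}")
```

The sketch shows only the static trade-off; the paper’s contribution concerns how such priorities should shift over a sequential interaction.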
Response to Cegłowski on superintelligence
Web developer Maciej Cegłowski recently gave a talk on AI safety (video, text) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical of the extreme-sounding claims, attitudes, and policies...