MIRI Updates
February 2017 Newsletter
Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research. Research updates...
CHCAI/MIRI research internship in AI safety
We’re looking for talented, driven, and ambitious technical researchers for a summer research internship with the Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute (MIRI). About the research: CHCAI is a research center based at UC Berkeley...
New paper: “Toward negotiable reinforcement learning”
MIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in “Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making.” Abstract: Existing multi-objective reinforcement learning (MORL) algorithms do not account for...
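As a rough illustration of the kind of mechanism the paper studies (a hedged sketch, not Critch's actual construction — the principals, update rule, and function names below are hypothetical), one way a Pareto-optimal policy can serve two principals with different beliefs is to keep a weight on each principal's utility function and shift those weights toward whichever principal's world-model better predicted the observations so far:

```python
import numpy as np

# Hypothetical sketch: two principals with different world-models share
# control of one agent. Each principal's weight is rescaled by how well
# that principal's model predicted the latest observation.

def update_weights(weights, likelihoods):
    """Multiply each principal's weight by the probability their
    world-model assigned to the new observation, then renormalize."""
    new = np.array(weights, dtype=float) * np.array(likelihoods, dtype=float)
    return new / new.sum()

def choose_action(actions, weights, expected_utils):
    """Pick the action maximizing the weighted sum of the principals'
    expected utilities (linear scalarization, one standard way to
    realize a point on the Pareto frontier)."""
    scores = {a: sum(w * expected_utils[i][a] for i, w in enumerate(weights))
              for a in actions}
    return max(scores, key=scores.get)

# Toy usage: principal 0 predicted the observation better, so its weight grows.
weights = update_weights([0.5, 0.5], likelihoods=[0.9, 0.3])
print(weights)  # -> [0.75, 0.25]

action = choose_action(
    actions=["a", "b"],
    weights=weights,
    expected_utils=[{"a": 1.0, "b": 0.0},   # principal 0's utilities
                    {"a": 0.2, "b": 0.8}],  # principal 1's utilities
)
print(action)  # -> "a"
```

The net effect is that control gradually shifts toward the principal whose beliefs prove more accurate, which appears to be the "shifting priorities" phenomenon the paper's subtitle refers to.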
Response to Cegłowski on superintelligence
Web developer Maciej Cegłowski recently gave a talk on AI safety (video, text) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical of the extreme-sounding claims, attitudes, and policies...
January 2017 Newsletter
Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an...
New paper: “Optimal polynomial-time estimators”
MIRI Research Associate Vanessa Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract: The concept of an “approximation algorithm” is usually only applied to optimization problems, since in...
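To gesture at the shape of the framework (a rough paraphrase in our own notation, not the paper's exact definitions), one natural way to state "optimal estimator" is in terms of average-case squared error: a polynomial-time algorithm P optimally estimates a function f relative to a family of input distributions {μ_k} if no other polynomial-time algorithm Q does asymptotically better, i.e.

```latex
\forall Q:\quad
\mathbb{E}_{x \sim \mu_k}\!\left[(P(x) - f(x))^2\right]
\;\le\;
\mathbb{E}_{x \sim \mu_k}\!\left[(Q(x) - f(x))^2\right] + \delta(k),
```

where δ(k) is negligible. This extends the usual "approximation algorithm" idea beyond optimization problems to general estimation problems, with error measured on average over a distribution rather than in the worst case.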