MIRI Updates

August 2016 Newsletter

Research updates: A new paper, “Alignment for Advanced Machine Learning Systems.” Half of our research team will be focusing on this research agenda going forward, while the other half continues to focus on the agent foundations agenda. New at AI...

2016 summer program recap

As previously announced, we recently ran a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Oxford Future of Humanity Institute. The colloquium was aimed at bringing together safety-conscious AI scientists from academia...

2015 in review

As Luke had done in years past (see 2013 in review and 2014 in review), I (Malo) wanted to take some time to review our activities from last year. In the coming weeks Nate will provide a big-picture strategy update....

New paper: “Alignment for advanced machine learning systems”

MIRI’s research to date has focused on the problems that we laid out in our late 2014 research agenda, and in particular on formalizing optimal reasoning for bounded, reflective decision-theoretic agents embedded in their environment. Our research team has since...

Submission to the OSTP on AI outcomes

The White House Office of Science and Technology Policy recently put out a request for information on “(1) The legal and governance implications of AI; (2) the use of AI for public good; (3) the safety and control issues for...

July 2016 Newsletter

Research updates: A new paper, “A Formal Solution to the Grain of Truth Problem.” The paper was presented at UAI-16, and describes the first general reduction of game-theoretic reasoning to expected utility maximization. Participants in MIRI’s recently-concluded Colloquium Series on...
