MIRI Updates
We’ve uploaded a third set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted with the Future of Humanity Institute. These talks were part of the week focused on preference specification in AI systems, including...
We’ve uploaded a second set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. These talks were part of the week focused on robustness and...
This post is a follow-up to Malo’s 2015 review, sketching out our new 2016-2017 plans. Briefly, our top priorities (in decreasing order of importance) are to (1) make technical progress on the research problems we’ve identified, (2) expand our team,...
Research updates: A new paper, “Alignment for Advanced Machine Learning Systems.” Half of our research team will be focusing on this research agenda going forward, while the other half continues to focus on the agent foundations agenda. New at AI...
As previously announced, we recently ran a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Oxford Future of Humanity Institute. The colloquium was aimed at bringing together safety-conscious AI scientists from academia...
As Luke had done in years past (see 2013 in review and 2014 in review), I (Malo) wanted to take some time to review our activities from last year. In the coming weeks Nate will provide a big-picture strategy update....