# June 2017 Newsletter

June 16, 2017 | Rob Bensinger | Newsletters

## Research updates

- A new AI Impacts paper: "When Will AI Exceed Human Performance?" News coverage at Digital Trends and MIT Technology Review.
- New at IAFF: Cooperative Oracles; Jessica Taylor on the AAMLS Agenda; An Approach to Logically Updateless Decisions.
- Our 2014 technical agenda, "Agent Foundations for Aligning Machine Intelligence with Human Interests," is now available as a book chapter in the anthology The Technological Singularity: Managing the Journey.

## General updates

- readthesequences.com: supporters have put together a web version of Eliezer Yudkowsky's Rationality: From AI to Zombies.
- The Oxford Prioritisation Project publishes a model of MIRI's work as an existential risk intervention.

## News and links

- From MIT Technology Review: "Why Google's CEO Is Excited About Automating Artificial Intelligence."
- A new alignment paper from researchers at Australian National University and DeepMind: "Reinforcement Learning with a Corrupted Reward Channel."
- New from OpenAI: Baselines, a tool for reproducing reinforcement learning algorithms.
- The Future of Humanity Institute and the Centre for the Future of Intelligence join the Partnership on AI alongside twenty other groups.
- New AI safety job postings include research roles at the Future of Humanity Institute and the Center for Human-Compatible AI, as well as a UCLA PULSE fellowship for studying AI's potential large-scale consequences and appropriate preparations and responses.