December 2015 Newsletter

December 3, 2015 | Rob Bensinger | Newsletters

Research updates

- New papers: "Formalizing Convergent Instrumental Goals" and "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization." Both papers have been accepted to the AAAI-16 workshop on AI, Ethics and Society.
- New at AI Impacts: Recently at AI Impacts
- New at IAFF: A First Look at the Hard Problem of Corrigibility; Superrationality in Arbitrary Games; A Limit-Computable, Self-Reflective Distribution; Reflective Oracles and Superrationality: Prisoner's Dilemma
- Scott Garrabrant joins MIRI's full-time research team this month.

General updates

- Our Winter Fundraiser is now live, and includes details on where we've been directing our research efforts in 2015, as well as our plans for 2016. The fundraiser will conclude on December 31.
- A 2014 collaboration between MIRI and the Oxford-based Future of Humanity Institute (FHI), "The Errors, Insights, and Lessons of Famous AI Predictions," is being republished next week in the anthology Risks of Artificial Intelligence. Also included will be Daniel Dewey's important strategic analysis "Long-Term Strategies for Ending Existential Risk from Fast Takeoff" and articles by MIRI Research Advisors Steve Omohundro and Roman Yampolskiy.
- We recently spent an enjoyable week in the UK comparing notes, sharing research, and trading ideas with FHI. During our visit, MIRI researcher Andrew Critch led a "Big-Picture Thinking" seminar on long-term AI safety (video).

News and links

- In collaboration with Oxford, UC Berkeley, and Imperial College London, Cambridge University is launching a new $15 million research center to study AI's long-term impact: the Leverhulme Centre for the Future of Intelligence.
- The Strategic Artificial Intelligence Research Centre, a new joint initiative between FHI and the Cambridge Centre for the Study of Existential Risk, is accepting applications for three research positions between now and January 6: research fellows in machine learning and the control problem, in policy work and emerging-technology governance, and in general AI strategy. FHI is additionally seeking a research fellow to study AI risk and ethics. (Full announcement.)
- FHI founder Nick Bostrom makes Foreign Policy's Top 100 Global Thinkers list.
- Bostrom (link), IJCAI President Francesca Rossi (link), and Vicarious co-founder Dileep George (link) weigh in on AI safety in a Washington Post series.
- Future of Life Institute co-founder Viktoriya Krakovna discusses risks from general AI without an intelligence explosion.