Our 2017 fundraiser was a huge success, with 341 donors contributing a total of $2.5 million!
Some of the largest donations came from Ethereum inventor Vitalik Buterin, bitcoin investors Christian Calderon and Marius van Voorden, poker players Dan Smith, Tom Crowley, and Martin Crowley (as part of a matching challenge), and the Berkeley Existential Risk Initiative. Thank you to everyone who contributed!
Research updates
- The winning entries in the first AI Alignment Prize include Scott Garrabrant’s Goodhart Taxonomy, along with recent IAFF posts: Vanessa Kosoy’s Why Delegative RL Doesn’t Work for Arbitrary Environments and More Precise Regret Bound for DRL, and Alex Mennen’s Being Legible to Other Agents by Committing to Using Weaker Reasoning Systems and Learning Goals of Simple Agents.
- New at AI Impacts: Human-Level Hardware Timeline; Effect of Marginal Hardware on Artificial General Intelligence
- We’re hiring for a new position at MIRI: ML Living Library, a specialist on the newest developments in machine learning.
General updates
- From Eliezer Yudkowsky: A Reply to Francois Chollet on Intelligence Explosion.
- Counterterrorism experts Richard Clarke and R. P. Eddy profile Yudkowsky in their new book Warnings: Finding Cassandras to Stop Catastrophes.
- There have been several recent blog posts recommending MIRI as a donation target: from Ben Hoskin, Zvi Mowshowitz, Putanumonit, and the Open Philanthropy Project’s Daniel Dewey and Nick Beckstead.
News and links
- A generalization of the AlphaGo Zero algorithm, AlphaZero, achieves rapid superhuman performance at chess and shogi.
- Also from Google DeepMind: “Specifying AI Safety Problems in Simple Environments.”
- Victoria Krakovna reports on NIPS 2017: “This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. […] There was a lot of great content on the long-term side, including several oral / spotlight presentations and the Aligned AI workshop.”
- 80,000 Hours interviews Phil Tetlock and investigates the most important talent gaps in the EA community.
- From Seth Baum: “A Survey of AGI Projects for Ethics, Risk, and Policy.” And from the Foresight Institute: “AGI: Timeframes & Policy.”
- The Future of Life Institute is collecting proposals for a second round of AI safety grants, due February 18.