“So far as I can presently estimate, now that we’ve had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.
“[…I]t’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. […] You can either act despite that, or not act. Not act until it’s too late to help much, in the best case; not act at all until after it’s essentially over, in the average case.”
Read more in a new blog post by Eliezer Yudkowsky: “There’s No Fire Alarm for Artificial General Intelligence.” (Discussion on LessWrong 2.0, Hacker News.)
Research updates
- New research write-ups and discussions: The Doomsday Argument in Anthropic Decision Theory; Smoking Lesion Steelman II
- New from AI Impacts: What Do ML Researchers Think You Are Wrong About?, When Do ML Researchers Think Specific Tasks Will Be Automated?
General updates
- “Is Tribalism a Natural Malfunction?”: Nautilus discusses MIRI’s work on decision theory, superrationality, and the prisoner’s dilemma.
- We helped run the 2017 AI Summer Fellows Program with the Center for Applied Rationality, and taught at the European Summer Program on Rationality.
- We’re very happy to announce that we’ve received a $100,000 grant from the Berkeley Existential Risk Initiative and Jaan Tallinn, as well as over $30,000 from Raising for Effective Giving and a pledge of $55,000 from PokerStars through REG. We’ll be providing more information on our funding situation in advance of our December fundraiser.
- LessWrong is currently hosting an open beta for a site redesign at lesserwrong.com; see Oliver Habryka’s strategy write-up.
News and links
- Hillary Clinton and Vladimir Putin voice worries about the impacts of AI technology.
- The Future of Life Institute discusses Dan Weld’s work on explainable AI.
- Researchers at OpenAI and Oxford release Learning with Opponent-Learning Awareness (LOLA), an RL algorithm in which each agent accounts for how its own policy choice shapes the other agents' learning updates, enabling cooperative behavior to emerge in some simple multi-agent settings.
- From Carrick Flynn of the Future of Humanity Institute: Personal Thoughts on Careers in AI Policy and Strategy.
- Goodhart’s Imperius: A discussion of Goodhart’s Law and human psychology.
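The opponent-aware update behind LOLA (linked above) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the paper works with iterated games and policy-gradient estimators, while the sketch below uses exact expected payoffs in a one-shot prisoner's dilemma and finite differences. The payoff values and the learning-rate parameter `eta` are assumptions chosen for illustration.

```python
import numpy as np

# Classic one-shot prisoner's dilemma payoffs (T > R > P > S), chosen for
# illustration: R = mutual cooperation, S = cooperate vs. defect,
# T = defect vs. cooperate, P = mutual defection.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def V1(theta1, theta2):
    """Expected payoff to player 1; theta_i parameterizes P(cooperate)."""
    p1, p2 = sigmoid(theta1), sigmoid(theta2)
    return (p1 * p2 * R + p1 * (1 - p2) * S
            + (1 - p1) * p2 * T + (1 - p1) * (1 - p2) * P)

def V2(theta1, theta2):
    return V1(theta2, theta1)  # the game is symmetric

def grad(f, theta1, theta2, eps=1e-5):
    """Finite-difference gradient of f with respect to (theta1, theta2)."""
    d1 = (f(theta1 + eps, theta2) - f(theta1 - eps, theta2)) / (2 * eps)
    d2 = (f(theta1, theta2 + eps) - f(theta1, theta2 - eps)) / (2 * eps)
    return d1, d2

def lola_grad_player1(theta1, theta2, eta=1.0, eps=1e-4):
    """First-order opponent-aware gradient: player 1 differentiates through
    player 2's anticipated learning step theta2 + eta * dV2/dtheta2, giving
    dV1/dtheta1 + eta * (dV1/dtheta2) * d^2 V2 / (dtheta1 dtheta2)."""
    dV1_d1, dV1_d2 = grad(V1, theta1, theta2)
    # Cross second derivative of the opponent's value, by finite differences.
    d2V2_cross = (grad(V2, theta1, theta2 + eps)[0]
                  - grad(V2, theta1, theta2 - eps)[0]) / (2 * eps)
    return dV1_d1 + eta * dV1_d2 * d2V2_cross

naive = grad(V1, 0.0, 0.0)[0]
print("naive gradient:", naive)
print("LOLA-style gradient:", lola_grad_player1(0.0, 0.0))
```

In the one-shot game both gradients still point toward defection (it is a dominant strategy); the difference is the extra correction term, which accounts for how one's own parameters move the opponent's next update. The cooperative tit-for-tat behavior reported in the paper arises in the iterated version of the game.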