Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research.
Research updates
General updates
- We attended the Future of Life Institute’s Beneficial AI conference at Asilomar. See Scott Alexander’s recap. MIRI executive director Nate Soares took part in a panel discussion on technical safety with representatives from DeepMind, OpenAI, and academia (video); the panel also featured a back-and-forth with Yann LeCun, the head of Facebook’s AI research group (at 22:00).
- MIRI staff and a number of top AI researchers are signatories on FLI’s new Asilomar AI Principles, which include cautions regarding arms races, value misalignment, recursive self-improvement, and superintelligent AI.
- The Center for Applied Rationality recounts MIRI researcher origin stories and other cases where its workshops have been a big assist to our work, alongside examples of CFAR’s impact on other groups.
- The Open Philanthropy Project has awarded a $32,000 grant to AI Impacts.
- Andrew Critch spoke at Princeton’s ENVISION conference (video).
- Matthew Graves has joined MIRI as a staff writer. See his first piece for our blog, a reply to “Superintelligence: The Idea That Eats Smart People.”
- The audio version of Rationality: From AI to Zombies is temporarily unavailable due to the shutdown of Castify. However, fans are already putting together a new free recording of the full collection.
News and links
- An Asilomar panel on superintelligence (video) gathers Elon Musk (OpenAI), Demis Hassabis (DeepMind), Ray Kurzweil (Google), Stuart Russell and Bart Selman (CHCAI), Nick Bostrom (FHI), Jaan Tallinn (CSER), Sam Harris, and David Chalmers.
- Also from Asilomar: Russell on corrigibility (video), Bostrom on openness in AI (video), and LeCun on the path to general AI (video).
- From MIT Technology Review’s “AI Software Learns to Make AI Software”:
  “Companies must currently pay a premium for machine-learning experts, who are in short supply. Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed ‘automated machine learning’ as one of the most promising research avenues his team was exploring.”
- AlphaGo quietly defeats the world’s top Go professionals in a crushing 60-win streak. AI also bests the top human players in no-limit poker.
- More signs that artificial general intelligence is becoming a trendier goal in the field: FAIR proposes an AGI progress metric.
- Representatives from Apple and OpenAI join the Partnership on AI, and MIT and Harvard announce a new Ethics and Governance of AI Fund.
- The World Economic Forum’s 2017 Global Risks Report includes a discussion of AI safety: “given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent – or even morally obligatory – to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.”
- On the other hand, the JASON advisory group reports to the US Department of Defense that “the claimed ‘existential threats’ posed by AI seem at best uninformed,” adding, “In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.”
- Data scientist Sarah Constantin argues that ML algorithms are exhibiting linear or sublinear performance returns to linear improvements in processing power, and that deep learning represents a break from trend in image and speech recognition, but not in strategy games or language processing.
- New safety papers discuss human-in-the-loop reinforcement learning and ontology identification, and Jacob Steinhardt writes on latent variables and counterfactual reasoning in AI alignment.