When AI Accelerates AI
Last week, Nate Soares outlined his case for prioritizing long-term AI safety work:
1. Humans have a fairly general ability to make scientific and technological progress. The evolved cognitive faculties that make us good at organic chemistry overlap heavily with those that make us good at economics, which in turn overlap heavily with those that make us good at software engineering, and so on.
2. AI systems will eventually strongly outperform humans in the relevant science/technology skills. To the extent these faculties are also directly or indirectly useful for social reasoning, long-term planning, introspection, etc., sufficiently powerful and general scientific reasoners should be able to strongly outperform humans in arbitrary cognitive tasks.
3. AI systems that are much better than humans at science, technology, and related cognitive abilities would have much more power and influence than humans. If such systems are created, their decisions and goals will have a decisive impact on the future.
4. By default, smarter-than-human AI technology will be harmful rather than beneficial. In particular, it will be harmful if we work exclusively on improving the scientific capabilities of AI agents and neglect technical work focused specifically on safety requirements.
To which I would add:
- Intelligent, autonomous, and adaptive systems are already challenging to verify and validate; smarter-than-human scientific reasoners would present us with extreme versions of the same challenges.
- Smarter-than-human systems would also introduce qualitatively new risks that can’t be readily understood in terms of our models of human agents or narrowly intelligent programs.
None of this, however, tells us when smarter-than-human AI will be developed. Soares has argued that we are likely to be able to make early progress on AI safety questions; but the earlier we start, the greater the risk that we misdirect our efforts. Why not wait until human-equivalent decision-making machines are closer at hand before focusing on safety research?
One reason to start early is that the costs of starting too late are much higher than the costs of starting too early. Early work can also help attract more researchers to the area and give us better models of the alternative approaches. Here, however, I want to focus on a different reason to start early: the concern that a number of factors may accelerate the development of smarter-than-human AI.
AI speedup thesis. AI systems that can match humans in scientific and technological ability will probably be the cause and/or effect of a period of unusually rapid improvement in AI capabilities.
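The intuition behind the thesis can be made concrete with a toy model. The sketch below is purely illustrative and is not drawn from Soares's argument: the `simulate` function and every constant in it are arbitrary assumptions chosen for demonstration, not estimates. The model assumes that progress depends on total research effort, and that AI systems contribute effort in proportion to their own capability, so each gain in capability speeds up the next one.

```python
# Toy feedback-loop model of the speedup thesis. All constants are
# arbitrary assumptions for illustration; only the shape of the
# dynamic (progress feeding back into the rate of progress) matters.

def simulate(years: int = 60, steps_per_year: int = 100) -> None:
    capability = 0.05    # AI capability; 1.0 = human level (assumed scale)
    human_effort = 1.0   # fixed human research input (assumption)
    ai_weight = 1.0      # effort contributed per unit of AI capability (assumption)
    rate = 0.04          # progress per unit of effort per year (assumption)
    dt = 1.0 / steps_per_year

    for year in range(1, years + 1):
        for _ in range(steps_per_year):
            # Total research effort: humans plus AI systems doing AI research.
            effort = human_effort + ai_weight * capability
            capability += rate * effort * dt  # Euler step of dC/dt = rate * effort
        if year % 10 == 0:
            speed = rate * (human_effort + ai_weight * capability)
            print(f"year {year:2d}: capability = {capability:5.2f}, "
                  f"progress rate = {speed:.3f}/yr")

simulate()
```

Under these assumed constants, the simulated system crosses human level around year 16, and by year 60 the annual rate of progress is roughly ten times its starting value: once AI capability itself becomes a research input, roughly human-level capability arrives in the middle of an accelerating curve, not at its end. The specific numbers mean nothing; the feedback structure is the point.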