New Paper: “The errors, insights, and lessons of famous AI predictions”


During his time as a MIRI researcher, Kaj Sotala contributed to a paper now published in the Journal of Experimental & Theoretical Artificial Intelligence: “The errors, insights and lessons of famous AI predictions – and what they mean for the future.”


Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus’s criticism of AI, Searle’s Chinese room paper, Kurzweil’s predictions in The Age of Spiritual Machines, and Omohundro’s ‘AI drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.

  • James Babcock

    Overall, this is a good guide to the considerations involved in evaluating predictions. However, I feel that the section on the Dartmouth Conference, and the general perception of the Dartmouth Conference both here and in other sources I’ve looked at, is somewhat uncharitable. The general thesis is that the participants were wildly optimistic, but looking at the main primary source, their position looks more reasonable. This is mainly because there is a large gap between “a significant advance can be made in one or more of these problems” and “these problems can be solved fully”. Significant advances have in fact been made in all of the headline topics; but we’re tempted to judge them against goalposts that make sense now, instead of goalposts that made sense then.

    One other point – less a criticism and more a direction for further research – concerns section 5.2, which notes a “success–excitement–difficulties–stalling” cycle. I suspect that in many fields, the actual cycle is “success–excitement–difficulties–stalling–standardization–rebirth”. If AI exhibited such a pattern, then the resulting timeline predictions would be very different.

  • Victoria Krakovna

    The link to the paper on the All Publications page should be the one that doesn’t involve a paywall.

  • Paul Naish

    The paper should be free to access very shortly; apologies for the delay!