MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?
Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?
To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.pdf) so far.
We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.
The most significant results from this project so far are:
- Jonah’s initial impressions about The Limits to Growth (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and that its authors, at the time of writing, lacked credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.
- Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.
- In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.
- Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leó Szilárd’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts aimed at winning the war decades later, and several cases of “ethically concerned scientists.”
- Upon initial investigation, two historical cases seemed like they might shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.
- We listed many other historical cases that may be worth investigating.
The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s The Signal and the Noise, available here.
Further details are given below. For sources and more, please see our full email exchange (.pdf).
The Limits to Growth
In his initial look at The Limits to Growth (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that Limits to Growth predicted a sort of doomsday scenario – à la Ehrlich’s The Population Bomb (1968) – that had failed to occur. In particular, it appeared that Limits to Growth had failed to appreciate Julian Simon’s point that other resources would substitute for depleted resources.
Upon reading the book, Jonah found that:
- The book avoids strong, unconditional claims. Its core claim is that if exponential growth of resource usage continues, then there will likely be a societal collapse by 2100. (A toy illustration of these exponential dynamics follows this list.)
- The book carefully qualifies its claims and meets high epistemic standards. Jonah wrote: “The book doesn’t look naive even in retrospect, which is impressive given that it was written 40 years ago.”
- The authors discuss substitutability at length in chapter 4.
- The book discusses mitigation at a theoretical level, but doesn’t give explicit policy recommendations, perhaps because the issues involved were too complex.
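To see why the book treats exponential growth as the decisive variable, here is a toy calculation of our own (not one taken from the book). If consumption starts at rate $c_0$ and grows at a constant annual rate $g$, then cumulative consumption after $T$ years is $\int_0^T c_0 e^{gt}\,dt = \frac{c_0}{g}\left(e^{gT} - 1\right)$, so a fixed stock $R$ is exhausted when

$$T = \frac{1}{g} \ln\!\left(1 + \frac{gR}{c_0}\right).$$

For example, a resource with 200 years of reserves at current consumption ($R/c_0 = 200$) is exhausted in about 48 years if consumption grows at 5% per year. This is the general dynamic behind the book’s conditional claim: under exponential growth, apparently ample reserves become near-term constraints.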
Svante Arrhenius
Svante Arrhenius’ equation for how the Earth’s temperature varies as a function of the concentration of carbon dioxide was derived more than a century ago, and it is essentially the same equation used today (a modern rendering is given below). But while Arrhenius’ climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.
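In modern notation, the relationship Arrhenius derived is usually written as a logarithmic law; the symbols below are our rendering, not his original notation:

$$\Delta T = S \cdot \frac{\ln(C/C_0)}{\ln 2},$$

where $C_0$ is a reference concentration of atmospheric CO2, $C$ is the new concentration, and $S$ is the warming per doubling of CO2 (the climate sensitivity). Arrhenius estimated $S$ at roughly 5–6 °C; modern estimates center closer to 3 °C, but the logarithmic form itself remains in use.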
Arrhenius’ predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.
Norbert Wiener
As Jonah explains, Norbert Wiener (1894-1964) “believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression.” Nearly 50 years after his death, this doesn’t seem to have happened much, though it may eventually happen.
Jonah’s impression is that Wiener held strong views on the subject, seems not to have updated much in response to incoming evidence, and seems to have relied too heavily on what Berlin (1953) and Tetlock (2005) described as “hedgehog” thinking: “the fox knows many things, but the hedgehog knows one big thing.”
Some historical cases that seem unlikely to shed light on our questions
Rasmussen (1975) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn’t very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain specific, and because the report makes a large number of small predictions rather than a few salient predictions.
In 1936, Leó Szilárd assigned his chain-reaction patent to the British Admiralty so that it would be kept secret from the Nazis. However, Jonah concluded:
I think that this isn’t a good example of a nontrivial future prediction. The destructive potential seems pretty obvious – anything that produces a huge amount of concentrated energy can be used in a destructive way. As for the Nazis, Szilard was himself Jewish and fled from the Nazis, and it seems pretty obvious that one wouldn’t want a dangerous regime to acquire knowledge that has destructive potential. It would be more impressive if the early developers of quantum mechanics had kept their research secret on account of dimly being aware of the possibility of destructive potential, or if Szilard had filed his patent secretly in a hypothetical world in which the Nazi regime was years away.
Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was “too difficult to tie these efforts to war outcomes.”
Jonah also investigated Kaj Sotala’s A brief history of ethically concerned scientists. Most of the historical cases cited there didn’t seem relevant to this project. Many cases involved “scientists concealing their discoveries out of concern that they would be used for military purposes,” but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see Kelly 2011). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.
Some historical cases that might shed light on our questions with much additional research
Jonah performed an initial investigation of the impacts of China’s one-child policy, and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy’s impacts.
Jonah also investigated a case involving the Ford Foundation. In a conversation with GiveWell, Lant Pritchett said:
[One] example of transformative philanthropy is related to India’s recovery from its economic crisis of 1991. Other countries had previously had similar crises and failed to implement good policies that would have allowed them to recover from their crises. By way of contrast, India implemented good policies and recovered in a short time frame. Most of the key actors who ensured that India implemented the policies that it did were influenced by a think tank established by the Ford Foundation ten years before the crisis. The think tank exposed Indians to relevant ideas from the developed world about liberalization. The difference between (a) India’s upward economic trajectory and (b) what its upward economic trajectory would have been if it had been unsuccessful in recovering from the 1991 crisis is in the trillions of dollars. As such, the Ford Foundation’s investment in the think tank had a huge impact. For the ten years preceding the crisis, it looked like the think tank was having no impact, but it turned out to have a huge impact.
Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.
Other historical cases that might be worth investigating
Historical cases we identified but did not yet investigate include:
- Eric Drexler’s early predictions about the feasibility and likely effects of nanotechnology
- The Asilomar conference on recombinant DNA
- Efforts to detect asteroids before they threaten Earth
- The Green Revolution
- The modern history of cryptography
- Early efforts to mitigate global warming
- Possible deliberate long-term efforts to produce scientific breakthroughs (the transistor? the human genome?)
- Rachel Carson’s Silent Spring (1962)
- Paul Ehrlich’s The Population Bomb (1968)
- The Worldwatch Institute’s State of the World reports (since 1984)
- The WCED’s Our Common Future (1987)