MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?
Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?
To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.pdf) so far.
We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.
The most significant results from this project so far are:
- Jonah’s initial impressions of The Limits to Growth (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and that its authors (at the time of writing) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.
- Svante Arrhenius (1859–1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would in fact have been negative.
- In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that, even at the time, better epistemic practices would have yielded substantially better predictions.
- Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War-era planning efforts aimed at winning the conflict decades later, and several cases of “ethically concerned scientists.”
- Upon initial investigation, two historical cases seemed like they might shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.
- We listed many other historical cases that may be worth investigating.
The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s The Signal and the Noise, available here.
Further details are given below. For sources and more, please see our full email exchange (.pdf).