The [latest IPCC] report says, “If you put into place all these technologies and international agreements, we could still stop warming at [just] 2 degrees.” My own assessment is that the kinds of actions you’d need to do that are so heroic that we’re not going to see them on this planet.
—David Victor, professor of international relations at UCSD
A while back I attended a meeting of “movers and shakers” from science, technology, finance, and politics. We were discussing our favorite Big Ideas for improving the world. One person’s Big Idea was to copy best practices between nations. For example, when it’s shown that nations can dramatically improve organ donation rates by using opt-out rather than opt-in programs, other countries should just copy that solution.
Everyone thought this was a boring suggestion because it was obviously a good idea; there was no debate to be had. Of course, they all agreed it was also impossible and could never be established as standard practice. So we moved on to another Big Idea that was more tractable.
Later, at a meeting with a similar group of people, I told some economists that their recommendations on a certain issue were “straightforward Econ 101,” and I didn’t have any objections to share. Instead, I asked, “But how can we get policymakers to implement Econ 101 solutions?” The economists laughed and said, “Well, yeah, we have no idea. We probably can’t.”
How do I put this? This is not a civilization that should be playing with self-improving AGIs.
The backhoe is a powerful, labor-saving invention, but I wouldn’t put a two-year-old in the driver’s seat. That’s roughly how I feel about letting 21st-century humans wield something as powerful as self-improving AGI. I wish we had more time to grow up first. I think the kinds of actions we’d need to handle self-improving AGI successfully “are so heroic that we’re not going to see them on this planet,” at least not anytime soon.
But I suspect we won’t all resist the temptation to build AGI for long, and most top AI scientists seem to agree. (See the AI timeline predictions for the TOP100 poll in Müller & Bostrom (2014). The authors asked a sample of the top-cited living AI scientists: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for [an AGI] to exist?” The median reply for each confidence level was 2024, 2050, and 2070, respectively.)