Update: we have finished the matching challenge! Thanks everyone! The original post is below.
Thanks to the generosity of Peter Thiel, every donation made to MIRI between now and January 10th will be matched dollar-for-dollar, up to a total of $100,000!
Now is your chance to double your impact while helping us raise up to $200,000 (with matching) to fund our research program.
Corporate matching and monthly giving pledges will count towards the total! Check here to see whether your employer will match your donation. Please email firstname.lastname@example.org if you intend to make use of corporate matching, or if you’d like to pledge 6 months of monthly donations, so that we can properly account for your contributions. If making use of corporate matching, make sure to donate before the end of the year so that you don’t leave your employer’s matching funds on the table!
If you’re unfamiliar with our mission, see: Why MIRI?
Other projects are being surveyed for likely cost and impact. See also our mid-2014 strategic plan.
We appreciate your support for our work! Donate now, and seize a better-than-usual opportunity to move our work forward.
If you have questions about donating, please contact me (Luke Muehlhauser) at email@example.com.
A recent Edge.org conversation — “The Myth of AI” — is framed in part as a discussion of points raised in Bostrom’s Superintelligence, and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have been heavily informed by Superintelligence.
Unfortunately, some of the participants fall prey to common misconceptions about the standard case for AI as an existential risk, perhaps because they haven’t yet had time to read Superintelligence.
Of course, some of the participants may be responding to arguments they’ve heard from others, even if they’re not part of the arguments typically made by FHI and MIRI. Still, for simplicity I’ll reply from the perspective of the typical arguments made by FHI and MIRI.
1. We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.
Lee Smolin writes:
I am puzzled by the arguments put forward by those who say we should worry about a coming AI singularity, because all they seem to offer is a prediction based on Moore’s law.
That’s not the argument made by FHI, MIRI, or Superintelligence.
Some IT hardware and software domains have shown exponential progress, and some have not. Likewise, some AI subdomains have shown rapid progress of late, and some have not. And unlike computer chess, most AI subdomains don’t lend themselves to easy measures of progress, so for most AI subdomains we don’t even have meaningful subdomain-wide performance data through which one might draw an exponential curve (or some other curve).
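To make the curve-fitting point concrete, here’s a minimal sketch of the kind of analysis one would need subdomain-wide performance data to even attempt. The benchmark numbers below are made up purely for illustration; no real AI subdomain dataset is assumed:

```python
import numpy as np

years = np.array([2004, 2006, 2008, 2010, 2012, 2014])
scores = np.array([12.0, 15.0, 21.0, 26.0, 38.0, 55.0])  # hypothetical benchmark scores

# Exponential growth looks linear in log space: fit log(score) ~ a*year + b.
a, b = np.polyfit(years, np.log(scores), 1)
print(f"Implied doubling time: {np.log(2) / a:.1f} years")
```

Without a meaningful performance metric to put on the y-axis, there is nothing to fit a curve to, exponential or otherwise.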
In September, MIRI hosted Nick Bostrom at UC Berkeley to discuss his new book Superintelligence. A video and transcript of that talk are now available from C-SPAN’s Book TV, which also offers a DVD of the event.
Update: Nick Bostrom has also made his slides for the talk available.
Nate Soares has written “A Guide to MIRI’s Research,” which outlines the main thrusts of MIRI’s current research agenda and provides recommendations for which textbooks and papers to study so as to understand what’s happening at the cutting edge.
This guide replaces Louie Helm’s earlier “Recommended Courses for MIRI Math Researchers,” and will be updated regularly as new lines of research open up, and as new papers and reports are released. It is not a replacement for our upcoming technical report on MIRI’s current research agenda and its supporting papers, which are still in progress. (“Corrigibility” is the first supporting paper we’ve released for that project.)
It’s a good piece. Go read it and then come back here so I can make a few clarifications.
1. Smarter-than-human AI probably isn’t coming “soon.”
“Computers will soon become more intelligent than us,” the story begins, but few experts I know think this is likely.
A recent survey asked the world’s top-cited living AI scientists by what year they’d assign a 10% / 50% / 90% chance of human-level AI (aka AGI), assuming scientific progress isn’t massively disrupted. The median reply for a 10% chance of AGI was 2024, for a 50% chance of AGI it was 2050, and for a 90% chance of AGI it was 2070. So while AI scientists think it’s possible we might get AGI soon, they largely expect AGI to be an issue for the second half of this century.
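For a rough sense of what those three numbers imply together, here is an illustrative sketch that linearly interpolates between the reported 10%/50%/90% medians. This crude interpolation is my own, not a model from the survey itself:

```python
import numpy as np

years = [2024, 2050, 2070]   # median year for each cumulative probability level
probs = [0.10, 0.50, 0.90]   # P(AGI by that year), per the survey medians above

def p_agi_by(year):
    """Crudely interpolated cumulative probability of AGI by `year`."""
    return float(np.interp(year, years, probs))

print(round(p_agi_by(2040), 2))  # ~0.35 under this interpolation
```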
Moreover, many of those who specialize in thinking about AGI safety actually think AGI is further away than the top-cited AI scientists do. For example, relative to the surveyed AI scientists, Nick Bostrom and I both think more probability should be placed on later years. We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an extremely difficult challenge — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.
The greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI. On the present margin, we should have much less top-flight cognitive talent going into climate change mitigation, and much more going into AGI safety research.
Today we release a new technical report from MIRI research associate Tsvi Benson-Tilsen: “UDT with known search order.” Abstract:
We consider logical agents in a predictable universe running a variant of updateless decision theory. We give an algorithm to predict the behavior of such agents in the special case where the order in which they search for proofs is simple, and where they know this order. As a corollary, “playing chicken with the universe” by diagonalizing against potential spurious proofs is the only way to guarantee optimal behavior for this class of simple agents.
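For readers unfamiliar with the “chicken rule,” here is a toy sketch of the idea in the abstract. The proof searcher is replaced by an ordered list of “proven” facts, and all names are illustrative stand-ins, not the report’s actual formalism:

```python
# Toy sketch of the chicken rule: the agent steps through proofs in a
# fixed, known order; if it ever finds a proof that it will take some
# action, it deliberately takes a *different* action, so any such proof
# would be spurious. The list-of-facts proof searcher is an assumption.

ACTIONS = ["left", "right"]

def decide(proofs):
    """`proofs` is an ordered list of facts found in search order.
    Each fact is either ("acts", action) or ("utility", action, value)."""
    utilities = {}
    for fact in proofs:
        if fact[0] == "acts":
            # Chicken step: diagonalize against a proof of our own action.
            _, predicted = fact
            return next(a for a in ACTIONS if a != predicted)
        if fact[0] == "utility":
            _, action, value = fact
            utilities.setdefault(action, value)
    # No self-prediction found: take the best proven action.
    return max(utilities, key=utilities.get) if utilities else ACTIONS[0]

# If the searcher first "proves" the agent goes left, the agent goes right:
print(decide([("acts", "left"), ("utility", "left", 10)]))          # -> right
print(decide([("utility", "left", 10), ("utility", "right", 3)]))   # -> left
```

The point of the diagonalization step is that any proof of the agent’s own action is guaranteed to be falsified by the agent’s behavior, so a sound proof system can never produce one, blocking the spurious-proof failure mode.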