Update: we have finished the matching challenge! Thanks everyone! The original post is below. Thanks to the generosity of Peter Thiel, every donation made to MIRI between now and January 10th will be matched dollar-for-dollar, up to a total of $100,000! We have reached our matching total of $100,000!… Read more »
Three misconceptions in Edge.org’s conversation on “The Myth of AI”
A recent Edge.org conversation — “The Myth of AI” — is framed in part as a discussion of points raised in Bostrom’s Superintelligence, and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have been heavily informed by Superintelligence. Unfortunately, some of the participants fall prey to common misconceptions about the standard case for AI as an existential risk, and… Read more »
Video of Bostrom’s talk on Superintelligence at UC Berkeley
In September, MIRI hosted Nick Bostrom at UC Berkeley to discuss his new book Superintelligence. A video and transcript of the talk are now available from C-SPAN’s Book TV, which also offers a DVD of the event.
A new guide to MIRI’s research
Nate Soares has written “A Guide to MIRI’s Research,” which outlines the main thrusts of MIRI’s current research agenda and provides recommendations for which textbooks and papers to study so as to understand what’s happening at the cutting edge. This guide replaces Louie Helm’s earlier “Recommended Courses for MIRI Math Researchers,” and will be updated regularly as… Read more »
The Financial Times story on MIRI
Richard Waters wrote a story on MIRI and others for the Financial Times, which also put Nick Bostrom’s Superintelligence at the top of its summer science reading list. It’s a good piece. Go read it and then come back here so I can make a few clarifications. 1. Smarter-than-human AI probably isn’t coming “soon.” “Computers will soon become more intelligent…
New report: “UDT with known search order”
Today we release a new technical report from MIRI research associate Tsvi Benson-Tilsen: “UDT with known search order.” Abstract: We consider logical agents in a predictable universe running a variant of updateless decision theory. We give an algorithm to predict the behavior of such agents in the special case where the order in which they… Read more »
Singularity2014.com appears to be a fake
Earlier today I was alerted to the existence of Singularity2014.com (archived screenshot). MIRI has nothing to do with that website and we believe it is a fake. The website claims there is a “Singularity 2014” conference “in the Bay Area” on “November 9, 2014.” We believe that there is no such event. No venue is… Read more »
New paper: “Corrigibility”
Today we release a paper describing a new problem area in Friendly AI research that we call corrigibility. The report (PDF) is co-authored by MIRI’s Friendly AI research team (Eliezer Yudkowsky, Benja Fallenstein, Nate Soares) along with Stuart Armstrong of the Future of Humanity Institute at Oxford University. The abstract reads: As artificially intelligent systems grow in intelligence… Read more »