3 misconceptions in Edge.org’s conversation on “The Myth of AI”

Filed under Analysis.

A recent Edge.org conversation — “The Myth of AI” — is framed in part as a discussion of points raised in Bostrom’s Superintelligence, and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have been heavily informed by Superintelligence. Unfortunately, some of the participants fall prey to common misconceptions about the standard case for AI as an existential risk, and… Read more »

A new guide to MIRI’s research

Filed under News.

Nate Soares has written “A Guide to MIRI’s Research,” which outlines the main thrusts of MIRI’s current research agenda and provides recommendations for which textbooks and papers to study so as to understand what’s happening at the cutting edge. This guide replaces Louie Helm’s earlier “Recommended Courses for MIRI Math Researchers,” and will be updated regularly as… Read more »

The Financial Times story on MIRI

Filed under Analysis.

Richard Waters wrote a story on MIRI and others for the Financial Times, which also put Nick Bostrom’s Superintelligence at the top of its summer science reading list. It’s a good piece. Go read it and then come back here so I can make a few clarifications. 1. Smarter-than-human AI probably isn’t coming “soon.” “Computers will soon become more intelligent… Read more »

New report: “UDT with known search order”

Filed under News.

Today we release a new technical report from MIRI research associate Tsvi Benson-Tilsen: “UDT with known search order.” Abstract: We consider logical agents in a predictable universe running a variant of updateless decision theory. We give an algorithm to predict the behavior of such agents in the special case where the order in which they… Read more »

Singularity2014.com appears to be a fake

Filed under News.

Earlier today I was alerted to the existence of Singularity2014.com (archived screenshot). MIRI has nothing to do with that website and we believe it is a fake. The website claims there is a “Singularity 2014” conference “in the Bay Area” on “November 9, 2014.” We believe that there is no such event. No venue is… Read more »

New report: “Corrigibility”

Filed under News.

Today we release a report describing a new problem area in Friendly AI research we call corrigibility. The report (PDF) is co-authored by MIRI’s Friendly AI research team (Eliezer Yudkowsky, Benja Fallenstein, Nate Soares) along with Stuart Armstrong from the Future of Humanity Institute at Oxford University. The abstract reads: As artificially intelligent systems grow in… Read more »

AGI outcomes and civilizational competence

Filed under Analysis.

The [latest IPCC] report says, “If you put into place all these technologies and international agreements, we could still stop warming at [just] 2 degrees.” My own assessment is that the kinds of actions you’d need to do that are so heroic that we’re not going to see them on this planet. —David Victor, professor… Read more »