The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!

News

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.

The debate is now available as an eBook in various popular formats (PDF, EPUB, and MOBI). It includes:

  • the original series of blog posts,
  • a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject,
  • a summary of the debate written by Kaj Sotala, and
  • a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.

Comments from the authors are included at the end of each chapter, along with a link to the original post.

Head over to intelligence.org/ai-foom-debate/ to download a free copy.

Stephen Hsu on Cognitive Genomics

Conversations

Stephen Hsu is Vice-President for Research and Graduate Studies and Professor of Theoretical Physics at Michigan State University. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon. He was also the founder of SafeWeb, an information security startup acquired by Symantec. Hsu is a scientific advisor to BGI and a member of its Cognitive Genomics Lab.

Luke Muehlhauser: I’d like to start by familiarizing our readers with some of the basic facts relevant to the genetic architecture of cognitive ability, which I’ve drawn from the first half of a presentation you gave in February 2013:

Read more »

MIRI’s November 2013 Workshop in Oxford

News


From November 23 to 29, MIRI will host another Workshop on Logic, Probability, and Reflection, for the first time in Oxford, UK.

Participants will investigate problems related to reflective agents, probabilistic logic, and priors over logical statements / the logical omniscience problem.

Participants confirmed so far include:

If you have a strong mathematics background and might like to attend this workshop, it’s not too late to apply! And even if this workshop doesn’t fit your schedule, please do apply, so that we can notify you of other workshops (long before they are announced publicly).

Transparency in Safety-Critical Systems

Analysis

In this post, I aim to summarize one common view on AI transparency and AI reliability. It’s difficult to identify the field’s “consensus” on these topics, so instead I will present one common view and use it to introduce a number of complications and open questions that (I think) warrant further investigation.

Here’s a short version of the common view I summarize below:

Black box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.

The value of transparency in system design

Nusser (2009) writes:

…in the field of safety-related applications it is essential to provide transparent solutions that can be validated by domain experts. “Black box” approaches, like artificial neural networks, are regarded with suspicion – even if they show a very high accuracy on the available data – because it is not feasible to prove that they will show a good performance on all possible input combinations.
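Nusser’s point about “all possible input combinations” is easy to make concrete. Here is a back-of-the-envelope sketch (the numbers are my own illustrative assumptions, not Nusser’s) of exhaustively black-box testing a classifier with just 50 binary input features:

```python
# Illustrative assumptions only: 50 binary input features, and an
# optimistic throughput of one million black-box test cases per second.
n_features = 50
combinations = 2 ** n_features        # ~1.13e15 distinct inputs
tests_per_second = 1_000_000

seconds = combinations / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations:.2e} input combinations")    # 1.13e+15
print(f"~{years:.0f} years to test exhaustively")  # ~36 years
```

Real systems have far more inputs, many of them continuous, so exhaustive black-box testing is not merely slow but impossible in principle; hence the demand for models that can be validated by inspection.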

Unfortunately, there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent:

Methods that are known to achieve a high predictive performance — e.g. support vector machines (SVMs) or artificial neural networks (ANNs) — are usually hard to interpret. On the other hand, methods that are known to be well-interpretable — for example (fuzzy) rule systems, decision trees, or linear models — are usually limited with respect to their predictive performance.1
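To make the trade-off concrete, here is a minimal sketch (my example, using scikit-learn; it does not appear in the original post) in which a shallow decision tree and a kernel SVM are fit to the same data. The tree can be dumped as an if/then rule set that a domain expert can audit; the SVM exposes only arrays of learned coefficients:

```python
# Minimal illustration of transparent vs. opaque models on the same data.
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Transparent: the fitted tree prints as human-readable if/then rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Opaque: the fitted SVM is a pile of kernel coefficients -- often more
# accurate on hard problems, but there is no rule set to validate.
svm = SVC(kernel="rbf").fit(X, y)
print(svm.dual_coef_.shape)
```

The tree trades away some predictive flexibility for that auditability, which is exactly the pattern described above.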

But for safety-critical systems — and especially for AGI — it is important to prioritize system reliability over capability. Again, here is Nusser (2009):

strict requirements [for system transparency] are necessary because a safety-related system is a system whose malfunction or failure can lead to serious consequences — for example environmental harm, loss or severe damage of equipment, harm or serious injury of people, or even death. Often, it is impossible to rectify a wrong decision within this domain.

Read more »


  1. Quote from Nusser (2009). Emphasis added. The original text contains many citations which have been removed in this post for readability. Also see Schultz & Cronin (2003), which makes this point by graphing four AI methods along two axes: robustness and transparency. Their graph is available here. In their terminology, a method is “robust” to the degree that it is flexible and useful on a wide variety of problems and data sets. On the graph, GA means “genetic algorithms,” NN means “neural networks,” PCA means “principal components analysis,” PLS means “partial least squares,” and MLR means “multiple linear regression.” In this sample of AI methods, the trend is clear: the most robust methods tend to be the least transparent. Schultz & Cronin graphed only a tiny sample of AI methods, but the trend holds more broadly. 

Holden Karnofsky on Transparent Research Analyses

Conversations

Holden Karnofsky is the co-founder of GiveWell, which finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. GiveWell tracked ~$9.6 million in donations made on the basis of its recommendations in 2012. It has historically sought proven, cost-effective, scalable giving opportunities, but its new initiative, GiveWell Labs, is more broadly researching the question of how to give as well as possible.

Luke Muehlhauser: GiveWell has gained respect for its high-quality analyses of some difficult-to-quantify phenomena: the impacts of particular philanthropic interventions. You’ve written about your methods for facing this challenge in several blog posts, for example (1) Futility of standardized metrics: an example, (2) In defense of the streetlight effect, (3) Why we can’t take expected value estimates literally, (4) What it takes to evaluate impact, (5) Some considerations against more investment in cost-effectiveness estimates, (6) Maximizing cost-effectiveness via critical inquiry, (7) Some history behind our shifting approach to research, (8) Our principles for assessing research, (9) Surveying the research on a topic, (10) How we evaluate a study, and (11) Passive vs. rational vs. quantified.

In my first question I’d like to ask about one particular thing you’ve done to address one particular problem with analyses of difficult-to-quantify phenomena. The problem I have in mind is that it’s often difficult for readers to know how much they should trust a given analysis of a difficult-to-quantify phenomenon. In mathematics research it’s often pretty straightforward for other mathematicians to tell what’s good and what’s not. But what about analyses that combine intuitions, expert opinion, multiple somewhat-conflicting scientific studies, general research in a variety of “soft” sciences, and so on? In such cases it can be difficult for readers to distinguish high-quality analyses from low-quality ones, and hard to tell whether an analysis is biased in particular ways.

Read more »

2013 Summer Matching Challenge Completed!

News

Thanks to the generosity of dozens of donors, on August 15th we successfully completed the largest fundraiser in MIRI’s history. All told, we raised $400,000, which will fund our research going forward.

This fundraiser came “right down to the wire.” At 8:45pm Pacific time, with only a few hours left before the deadline, we announced on our Facebook page that we had only $555 more to raise to meet our goal. At 8:53pm, Benjamin Hoffman donated exactly $555, finishing the drive.

Our deepest thanks to all our supporters!

Luke at Quixey on Tuesday (Aug. 20th)

News


This coming Tuesday, MIRI’s Executive Director Luke Muehlhauser will give a talk at Quixey titled Effective Altruism and the End of the World. If you’re in or near the South Bay, you should come! Snacks will be provided.

Time: Tuesday, August 20th. Doors open at 7:30pm. Talk starts at 8pm. Q&A starts at 8:30pm.

Place: Quixey Headquarters, 278 Castro St., Mountain View, CA. (Google Maps)

Entrance: You cannot enter Quixey from Castro St. Instead, please enter through the back door, from the parking lot at the corner of Dana & Bryant.

August Newsletter: New Research and Expert Interviews

Newsletters



Greetings from the Executive Director

Dear friends,

My personal thanks to everyone who has contributed to our ongoing fundraiser. We are 74% of the way to our goal!

I’ve been glad to hear from many of you that you’re thrilled with the progress we’ve made in the past two years — progress both as an organization and as a research institute. I’m thrilled, too! And to see a snapshot of where MIRI is headed, take a look at the participant lineup for our upcoming December workshop. Some top-notch folks there, including John Baez.

We’re also preparing for the anticipated media interest in James Barrat’s forthcoming book, Our Final Invention: Artificial Intelligence and the End of the Human Era. The book reads like a detective novel, and discusses our research extensively. Our Final Invention will be released on October 1st by a division of St. Martin’s Press, one of the largest publishers in the world.

If you’re happy with the direction we’re headed in, and you haven’t contributed to our fundraiser yet, please donate now to show your support. Even small donations can make a difference. This newsletter is ~9,860 subscribers strong, and ~200 of you have contributed during the current fundraiser. If just 21% of the other 9,660 subscribers give $25 as soon as they finish reading this sentence, then we’ll meet our goal with those funds alone!

Thank you,

Luke Muehlhauser

Executive Director

Read more »