Once again, a reporter thinks our positions are the opposite of what they are


Perhaps the most accurate media coverage the Machine Intelligence Research Institute (MIRI) has yet received was a piece by legendary science author Carl Zimmer in Playboy. To give you a sense of how inaccurate most of our media coverage is, here’s a (translated) quote from some coverage of MIRI in a Franco-German documentary:

In San Francisco however, a society of young voluntary scientists believes in the good in robots. How naive! Here at [MIRI]…

Such a quote is amusing because, of course, the Machine Intelligence Research Institute has been saying for a decade that AI will by default be harmful to humanity, that it is extremely difficult to design a “moral machine,” that neither we nor anyone else knows how to do it yet, and that dozens of approaches proposed by others are naive and insufficient for one reason or another.

Now, in a new piece for The Sunday Times, Bryan Appleyard writes:

Yudkowsky [from MIRI] seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.

“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.

Again: MIRI has been saying for a decade that it is extremely difficult to program a machine not to kill a baby. Indeed, our position is that directly programming moral norms won’t work because our values are more complex than we realize. The direct programming of moral norms is an approach that others have proposed, and one that we criticize. For example, here is a quote from the concise summary of our research program:

Since we have no introspective access to the details of human values, the solution to this problem probably involves designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.

But this doesn’t mean we can simply let some advanced machine learning algorithms observe human behavior to learn our moral values, because:

The explicit moral values of human civilization have changed over time, and we regard this change as progress, and extrapolate that progress may continue in the future. An AI programmed with the explicit values of 1800 might now be fighting to reestablish slavery… Possible bootstrapping algorithms include “do what we would have told you to do if we knew everything you knew,” “do what we would’ve told you to do if we thought as fast as you did and could consider many more possible lines of moral argument,” and “do what we would tell you to do if we had your ability to reflect on and modify ourselves.” In moral philosophy, this notion of moral progress is known as reflective equilibrium.

Moving on… Appleyard’s point that “Machines would be much more radically adjusted away from human social norms, however we programmed them” is another point MIRI has been making from the very beginning. See the warnings against anthropomorphism in “Creating Friendly AI” (2001) and “Artificial Intelligence as a Positive and Negative Factor in Global Risk” (written in 2005, published in 2008). AI mind designs will be far more “alien” to us than the minds of aliens appearing in movies.

Appleyard goes on to say:

[Compared to MIRI,] the Cambridge group has a much more sophisticated grasp of these issues. Price, in particular, is aware that machines will not be subject to social pressure to behave well.

“When you think of the forms intelligence might take,” Price says, “it seems reasonable to think we occupy some tiny corner of that space and there are many ways in which something might be intelligent in ways that are nothing like our minds at all.”

What Appleyard doesn’t seem to realize, here, is that Price is basically quoting a point long stressed by MIRI researcher Eliezer Yudkowsky — that there is a huge space of possible minds, and humans only occupy a tiny corner of that space. In fact, that point is associated with MIRI and Eliezer Yudkowsky more than with anyone else! Here’s another quote from the paper Yudkowsky wrote in 2005:

The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds in general. The entire map floats in a still vaster space, the space of optimization processes… It is this enormous space of possibilities which outlaws anthropomorphism as legitimate reasoning.

So once again, a reporter thinks MIRI’s positions are the opposite of what they are.

Beware what you read in the popular media!

Update: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.

  • Mark Eichenlaub (http://arcsecond.wordpress.com)

    So the questions are:

    1) Why do reporters consistently get the wrong idea? If one reporter gets it all backwards, shame on the reporter. If many of them in a row get it wrong, SI needs to figure out what’s wrong with their PR.

    2) How do we fix it?

  • John Maxwell IV

    Evidence that a name change is a good idea? Feels like it’d be hard to get the mission of “The Institute for Safe AI” so radically wrong…

  • Bryan Appleyard (http://www.bryanappleyard.com)

    Please be more accurate. I gave a correct account of a conversation I had with Yudkowsky. If he misreported the views of the SI, the mistake is his, not mine.

    • Glenn Thomas Davis

      Mr Appleyard, I read your article. In it you include one short quote from Mr Yudkowsky, and state your judgments about his intellectual naivete. I don’t doubt that Yudkowsky said the words selectively culled from your conversation with him, but the context you put them in leads the reader unfamiliar with Yudkowsky’s work to think his views are radically different from what they are, which is exactly the point of Luke Muehlhauser’s post. Is it possible you didn’t understand what Yudkowsky was trying to explain to you in the conversation you had with him?

  • Paul Crowley (http://www.ciphergoth.org/)

    Mentioned this story on Facebook; my friend Doug Clow commented: “The media does not hate you, nor does it love you, but you are made out of story that it can use for something else.”

  • Luke A Somers

    For the most part, reporters, especially whenever there isn’t a recording, are extremely unreliable. Teller spoke in timed segments for more than one reason. I myself have been quoted on several occasions and each time either my denotative and/or connotative meaning has been severely altered by cutting – and in some cases, just lying about what I said.

  • Michael Anissimov

    Is it our fault if this particular journalist is explicitly crafting an anti-tech narrative and we have a role to play in it? (The tagline for his book is “Do we love our machines so much that we risk becoming more like them? What will we lose if we do?”) We have been repeating many of the same points for a decade. Those who are interested in our positions can read what we have published and/or allow us to explain personally. If journalists decide to report on us in bad faith, there is little we can do. For those journalists who are truly interested in our positions, SI staff is ready and willing to cheerfully explain them. There are many great journalists out there. We love to talk with them!

  • James Barrat

    As a media person, I agree SI is open with access and ideas. But as long as other people are writing the widely published articles and books, they’ll be determining the narrative. I don’t have insight into SI’s PR operations. But I hope you’re seeding articles and press releases and cultivating long term relationships with journalists who’ll then write accurate accounts of your important work. From the outside, SI can seem a little solipsistic.

  • Pingback: Cambridge’s new existential risk project, and other killer robot news

  • jim

    It seems like the Singularity Institute spends half its time trying to defend AI against the Hollywood image of a machine on the verge of destroying humanity… Science and Hollywood are apples and oranges; you can’t even begin to lump the two together.