Christof Koch and Stuart Russell on machine superintelligence


Recently, Science Friday (hosted by Ira Flatow) featured an interview (page, mp3) with Christof Koch and Stuart Russell about machine superintelligence. Christof Koch is the Chief Scientific Officer of the Allen Institute for Brain Science; Stuart Russell is a computer science professor at UC Berkeley and co-author of the world’s most-used AI textbook.

I was glad to hear both distinguished guests take seriously the opportunities and risks of AGI. Those parts of the conversation are excerpted below:

Russell: Most [AI researchers] are like the Casters [from Transcendence]. They’re just trying to solve the problem. They want to make machines intelligent. They’re working on the next puzzle. There are [also] a number of people who think about what will happen when we succeed. I actually have a chapter in my book called, “What if we succeed?” What if we make machines that are much smarter than human beings? This is what happens after Will Caster is uploaded: he has massive computational resources to run his brain with, and he becomes much more intelligent than humans.

If that happens, then it’s very difficult to predict what such a system would do and how to control it. It’s a bit like the old stories of the genie in the lamp or the sorcerer’s apprentice. You get your three wishes, but in all those stories the three wishes backfire, and there’s always some loophole. The genie carries out what you ask to the utmost extent, and it’s never what you really wanted. The same is going to be true with machines. If we ask them, “Heal the sick, end human suffering,” perhaps the best way to end human suffering is to end human life altogether, because then there won’t be any more human suffering. That would be a pretty big loophole. You can imagine how hard it is to write tax laws so there are no loopholes. We haven’t succeeded after 300 years of trying.

It’s very difficult to say what we would want a superintelligent machine to do so that we can be absolutely sure that the outcome is what we really want, as opposed to what we say. That’s the issue. I think we, as a field, are changing, going through a process of realization that more intelligent is not necessarily better. We have to have intelligence that is also controlled and safe, just like the nuclear physicists: when they figured out the chain reaction, they suddenly realized, “Oh, if we make too much of a chain reaction, then we have a nuclear explosion.” So we need a controlled chain reaction, just like we need controlled artificial intelligence.

Flatow: Isn’t that true of all of science? The history of science, whether we’re talking about genetic engineering in its early days, wondering about what’s going to crawl out of the lab once we start playing around with the genome?

Koch: The difference now is that this [risk is an] existential threat to human society, just like we saw once the nuclear genie was out of the bottle: we still live with the possibility that, within 20 minutes, all the people we know could be obliterated under a nuclear mushroom cloud. Now we live with the possibility that over the next 50 or 100 years this invention we’re working on might be our final invention, as Stuart has emphasized, and that our future may be bleaker than we think.

[…]

Flatow: Do scientists have a responsibility to think now about the consequences of AI, and are researchers organizing, talking, meeting about these things?

Russell: Absolutely. Yes. There’s a responsibility, because if you’re working in a field whose success would probably be the biggest event in human history, and, as some people predict, the last event in human history, then you’d better take responsibility. People are having meetings. In fact, I just organized one on Tuesday in Paris, taking advantage of a major conference that was here. What I’m finding is that senior people in the field who have never publicly evinced any concern before are privately thinking that we do need to take this issue very seriously, and the sooner we take it seriously the better. I think the nuclear physicists wished they had taken it seriously much earlier than they did.

Flatow: Christof?

Koch: Yes, I fully agree with Stuart. I recently attended a meeting of physicists, and we had a long discussion and a poll about what the biggest existential threats are. To my surprise, after nuclear war, it was the grey goo scenario, or the AI-run-amok scenario. It wasn’t climate change or some of the other more conventional ones. There is a lot of concern among some experts about what will happen with runaway AI.

[…]

There is no law in the universe that says things are limited to human-level intelligence. There may well be entities that are going to be much smarter than us, and, as Stuart has said, we have no way to predict what their desires will be, what their motivations will be. And as for programming it in, we all know how buggy software is. Do we really want to rely on some standard programmers to program in something that will not wipe us out in one way or another?