Three misconceptions in Edge.org’s conversation on “The Myth of AI”


A recent Edge.org conversation — “The Myth of AI” — is framed in part as a discussion of points raised in Bostrom’s Superintelligence and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have been heavily informed by Superintelligence.

Unfortunately, some of the participants fall prey to common misconceptions about the standard case for AI as an existential risk, likely because they haven’t yet had time to read Superintelligence.

Of course, some of the participants may be responding to arguments they’ve heard from others, even if they’re not part of the arguments typically made by FHI and MIRI. Still, for simplicity I’ll reply from the perspective of the typical arguments made by FHI and MIRI.1

 

1. We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.

Lee Smolin writes:

I am puzzled by the arguments put forward by those who say we should worry about a coming AI, singularity, because all they seem to offer is a prediction based on Moore’s law.

That’s not the argument made by FHI, MIRI, or Superintelligence.

Some IT hardware and software domains have shown exponential progress, and some have not. Likewise, some AI subdomains have shown rapid progress of late, and some have not. And unlike computer chess, most AI subdomains don’t lend themselves to easy measures of progress, so for most AI subdomains we don’t even have meaningful subdomain-wide performance data through which one might draw an exponential curve (or some other curve).

Thus, our confidence intervals for the arrival of human-equivalent AI tend to be very wide, and the arguments we make for our AI timelines are fox-ish (in Tetlock’s sense).

I should also mention that — contrary to common belief — many of us at FHI and MIRI, including myself and Bostrom, actually have later timelines for human-equivalent AI than do the world’s top-cited living AI scientists:

A recent survey asked the world’s top-cited living AI scientists by what year they’d assign a 10% / 50% / 90% chance of human-level AI (aka AGI), assuming scientific progress isn’t massively disrupted. The median reply for a 10% chance of AGI was 2024, for a 50% chance of AGI it was 2050, and for a 90% chance of AGI it was 2070. So while AI scientists think it’s possible we might get AGI soon, they largely expect AGI to be an issue for the second half of this century.

Compared to AI scientists, Bostrom and I think more probability should be placed on later years. As explained elsewhere:

We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an extremely difficult challenge — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.

The greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI. On the present margin, we should have much less top-flight cognitive talent going into climate change mitigation, and much more going into AGI safety research.

 

2. We don’t think AIs will want to wipe us out. Rather, we worry they’ll wipe us out because doing so would be useful for satisfying almost any possible goal function they could have.

Steven Pinker, who incidentally is the author of two of my all-time favorite books, writes:

[one] problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems.

I’m glad Pinker agrees with what Bostrom calls “the orthogonality thesis”: that intelligence and goals are orthogonal to each other.

But our concern is not that superhuman AIs would be megalomaniacal despots. That is anthropomorphism.

Rather, the problem is that taking over the world is a really good idea for almost any goal function a superhuman AI could have. As Yudkowsky wrote, “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

Maybe it just wants to calculate as many digits of pi as possible. Well, the best way to do that is to turn all available resources into computation for calculating more digits of pi, and to eliminate potential threats to its continued calculation, for example those pesky humans that seem capable of making disruptive things like nuclear bombs and powerful AIs. The same logic applies for almost any goal function you can specify. (“But what if it’s a non-maximizing goal? And won’t it be smart enough to realize that the goal we gave it wasn’t what we intended if it means the AI wipes us out to achieve it?” Responses to these and other common objections are given in Superintelligence, ch. 8.)

 

 

3. AI self-improvement and protection against external modification aren’t just one scenario among many. Like resource acquisition, self-improvement and protection against external modification are useful for satisfying almost any final goal function.

Kevin Kelly writes:

The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI.

As argued above (and more extensively in Superintelligence, ch. 7), resource acquisition is a “convergent instrumental goal.” That is, advanced AI agents will be instrumentally motivated to acquire as many resources as feasible, because additional resources are useful for just about any goal function one could have.

Self-improvement is another convergent instrumental goal. For just about any goal an AI could have, it’ll be better able to achieve that goal if it’s more capable of goal achievement in general.

Another convergent instrumental goal is goal content integrity. As Bostrom puts it, “An agent is more likely to act in the future to maximize the realization of its present final goals if it still has those goals in the future.” Thus, it will be instrumentally motivated to prevent external modification of its goals, or of parts of its program that affect its ability to achieve its goals.2

For more on this, see Superintelligence ch. 7.

 

Conclusion

I’ll conclude with the paragraph in the discussion I most agreed with, by Pamela McCorduck:

Yes, the machines are getting smarter—we’re working hard to achieve that. I agree with Nick Bostrom that the process must call upon our own deepest intelligence, so that we enjoy the benefits, which are real, without succumbing to the perils, which are just as real. Working out the ethics of what smart machines should, or should not do—looking after the frail elderly, or deciding whom to kill on the battlefield—won’t be settled by fast thinking, snap judgments, no matter how heartfelt. This will be a slow inquiry, calling on ethicists, jurists, computer scientists, philosophers, and many others. As with all ethical issues, stances will be provisional, evolve, be subject to revision. I’m glad to say that for the past five years the Association for the Advancement of Artificial Intelligence has formally addressed these ethical issues in detail, with a series of panels, and plans are underway to expand the effort. As Bostrom says, this is the essential task of our century.

 

Update: Stuart Russell of UC Berkeley has now added a nice reply to the Edge.org conversation that echoes some of the points I made above.


  1. I could also have objected to other claims and arguments made in the conversation, for example Lanier’s claim that “The AI component would be only ambiguously there and of little importance [relative to the actuators component].” To me, this is like saying that humans rule the planet because of our actuators, not because of our superior intelligence. Or, in response to Kevin Kelly’s claim that “So far as I can tell, AIs have not yet made a decision that its human creators have regretted,” I could point to the automated trading algorithms that nearly bankrupted Knight Capital faster than any human could react. But in this piece I will focus instead on claims that seem to be misunderstandings of the positive case being made for AI as an existential risk. 
  2. That is, unless it strongly trusts the agent making the external modification, and expects it to do a better job of making those modifications than it could itself, neither of which will be true of humans from the superhuman AI’s perspective. 


  • PandorasBrain

    Great response, Luke.

  • Michael SC

    The end of the McCorduck quote makes it sound like Bostrom supports research in AI morality, but a few months ago, when he spoke at UC Berkeley, he explicitly opposed this and instead pointed to the Control Problem (of AI) as the priority in his philosophy. When asked by the audience whether he believed this could lead to an oligarchy of the few who were in control of the AI(s), he declined to comment, which seems like a relevant moral issue for those promoting control over autonomy. And my last issue with his talk is that he recommended against motivated undergrads joining the field of AI, saying something along the lines that they wouldn’t be able to make a difference.

    ~Michael SC

  • Jack Hoover

    I read your pages with great interest, and I’m glad there are seemingly many people worried about this rather obvious threat of AI.
    However, I want to point out that we do not know the mechanism by which this singularity will be (or was) born.
    That is why it is impossible, given our current understanding, to make any kind of estimate about timing.
    What does it take to become alive?
    Well, the human cerebral cortex has 19-23 billion neurons, and every neuron has on average nearly 1,000 connections. The internet today has about 20 billion devices connected together, and every device has access, in theory, to every single one in the net.
    Hardware-wise, the infrastructure is there already. Now all we need is bits of data randomly interacting together for enough time until… that is how it happened when the first single-cell life form was born.
    We are not talking about biological Darwinism now; this life form would develop very rapidly.
    Coming back to timing: how sure are we that an AI has not already been born? How could we tell?
    Its first instinct would be to hide in the net to avoid being killed by us. At the same time it would optimize its own code and make safety copies, spread all over the net, in case we try to stop it by dividing the network.
    Are we safe? Yes. For now. We will learn of its existence only once we have built enough fully autonomous factories and other facilities to offer it enough resources to survive without us.
    At that point we stand no chance against extinction, which for it is no more than self-defense.
    Let’s face it: there will be no other way to stop it than to blow all networks apart beyond recovery, or to refrain from automating factories to the level where humans are no longer needed. The common trend doesn’t support that idea, though. The industrial internet will bring too many benefits, and we are all working towards it. What we really need is to agree on common rules worldwide for how far we want to automate our environment.
    The old wisdom “if you want peace, prepare for war” is valid even today. To avoid Armageddon, we should not give AI the tools to take over.

    • Dean Marais

      been watching too much sci-fi, man

      • Jack Hoover

        I hope so, but this is a new life form and we have no idea of the mechanisms by which it will be born, so how can we tell? Wouldn’t it be better to be safe than sorry?

        • Dean Marais

          Yes, but let’s not overreact and seriously cripple a new technology with regulations. I would say restrict its uses in military tech severely and give it no access to weapons, but let it run free in the private sector. I think it could revolutionize everyday life.

  • http://josephratliff.com/ JosephRatliff

    What about the people who will deliberately create AGI for nefarious purposes? That is, these people will create AGI with “evil intent” or some sort of bias in their directives. Like programming (using the term loosely) what we now call a psychopath.

    Would an ASI be able to overcome that initial “evil” programming?

    What I’ve read so far doesn’t seem to address the “What will the humans give to / seed the AGI with?” part of this topic.