The Power of Intelligence


In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.

Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.

Then came the Day of the Squishy Things.

They had no armor. They had no claws. They had no venoms.

If you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.

In the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers – too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.

And as for the Squishy Things manipulating DNA – that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, technically it’s all one universe, technically the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.

Even if Squishy Things could someday evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, technically a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; no one could have that much sex.

Now explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.

I have observed that someone’s flinch-reaction to “intelligence” – the thought that crosses their mind in the first half-second after they hear the word “intelligence” – often determines their flinch-reaction to the Singularity. Often they look up the keyword “intelligence” and retrieve the concept book smarts – a mental image of the Grand Master chess player who can’t get a date, or a college professor who can’t survive outside academia.

“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first Homo sapiens had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists – for us, not mice or wasps – because we go on believing in it.

I keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in Rain Man; it is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that grey wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible – the power sometimes called creativity.

People – venture capitalists in particular – sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be commercialized. This is what we call a framing problem.

Or maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish all these things at once seems downright impossible – even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.

And so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity – well, it just sounds wrong, that’s all.

And well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even know what our real problems are.

But meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.

Well, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there – real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe – and it’s a tiny little bit harder to figure out how to build a generator.


  • James R Hughes, MD,PhD

    Very well written. :) The gray goo in our skull will be critical to the future of AI, and the dexterity of our frail shell will eventually be enhanced rather than replaced. There is an intelligence in the dexterity and adaptability of the human form that AI is still far from duplicating. We will evolve.

  • Roko

    “People – venture capitalists in particular – sometimes ask how, if the Singularity Institute successfully builds a true AI, the results will be commercialized.”

    I’m sitting here trying to imagine Eliezer et al at a meeting with some hard-nosed venture capitalists. It sounds like it could be hilarious, if only the fate of the world didn’t rest on it…


    Hard-nosed venture capitalist: So tell us, once you’ve got it working, how will you make people pay for it? Are you thinking of charging a monthly subscription for the AI, or are you going for a lump-sum payment?

    SIAI: (pause) Um, actually we anticipate that a scarcity-based economy will cease to exist soon after the Singularity. Money will no longer have value.

    Hard-nosed venture capitalist: Come again?

    SIAI: The AI will give people what they want because we’ve programmed it to be friendly. There will be no hunger, no scarcity of housing, clothes, gadgets, etc.

    Hard-nosed venture capitalist: That doesn’t sound so good… I don’t think you’ve thought your *business strategy* through … Maybe you should put some DRM in your friendliness software! I know, you could call it a pay-as-you-go-friendly AI; it’s only friendly to people if they keep paying the monthly subscription fees. I think you could market that really well! Or, better still, why not forget the friendliness software altogether and instead just program the AI to make you as much money as it can, by whatever means it chooses – now that’s thinking out of the box, eh!

    SIAI: (holding their heads in their hands weeping)

    • Eliezer Yudkowsky

      I bet you think you’re joking.

      • Nick Tarleton

        You mean someone’s actually suggested that?

        (holds head in hands and weeps)

      • Tom McCabe

        “I bet you think you’re joking.”

        Maybe you could explain to them that your goal isn’t to make lots of money but to advance human civilization, as if the Romans had started a large project to mechanize the production of clothing. It’s not like SIAI is going to show a profit in fiscal terms.

    • Sam

      Umm…scarcity is an inherent property of the universe. As long as there are sentient beings there will be transactions; as long as there are transactions there will be money. It doesn’t matter how much *any* technology improves the world–no technology is going to be omnipotent; no technology can create resources out of wishes. Money is necessary to the survival of intelligent life.

  • Roko

    On a more serious note, I don’t envy you – the task of trying to sell an invention that will probably be the end of money is not an easy one, and it’s probably especially hard to sell to people whose whole lives run on the base assumption that there will always be money and markets. I’d be interested to hear what actually happens in these situations. What do they say?

    Anyway, great article. 🙂

  • Tyler Emerson

    Hilarious. Yes, if only you were joking. Maybe some time we’ll tape an SIAI Salon Dinner.

  • Roko

    I get the impression that my attempts at humor are painfully close to the truth… Well, all I can say is good luck finding some people who have both money and enough sense to realize that there is a better world out there beyond money, marketing and the executive golf club. If I weren’t a poor student, I’d be donating a good chunk of cash myself.

  • Stephen Burrows

    There is a non-zero probability that an artificial general intelligence will first be created by someone, or some group, without affiliation with any formal organization. The first powered aircraft was designed, built, and tested entirely in secret by self-made experts, once the technology required became cheap enough for wide access. I predict the same will be true of AGI. The trick here is to get the idea of making the first one “nice” out to where those making the attempt are aware of that crucial requirement.

  • Jeffrey Herrlich

    “People – venture capitalists in particular – sometimes ask how, if the Singularity Institute successfully builds a true AI, the results will be *commercialized*. This is what we call a framing problem.”

    I’m speaking entirely for myself here, and not for SIAI.

    Perhaps it would be good to explain to the VCs that their money will have no value whatsoever if they and everyone else no longer exists, because Friendly AI never received enough early funding. They need to understand that with a Friendly AI their personal quality-of-life will be unimaginably better than anything they could possibly purchase by traditional means – no matter *how* much money they have.

    I know that it would probably be difficult for SIAI to make this presentation entirely tactfully. But sometimes even very smart people are in need of a reality check. You can show them this post by me, and I’ll take the blame for not being tactful. 😉

  • Gully Foyle

    This is the same problem that Babbage faced when he sought funding for his difference engine. No one believed a single machine could do all manner of things simply by reconfiguring it.

  • Fredrik

    Venture capitalists are survivors. They are not stupid. They will ask a couple of stupid questions. It’s their job. But they actually do listen to you and some of them will understand, and some will not. Just like anybody else …

    • Jeffrey Herrlich

      No, they’re not stupid, but it appears that the majority of them *don’t* truly understand, or at least don’t accept the situation viscerally. It’s definitely an unfortunate situation at this time; hopefully it can be changed.

  • Fredrik

    Eliezer: What a FANTASTIC story! Thank you!


  • Grant Czerepak

    The concept of migrating intelligence from the stratum of carbon-based organisms to another stratum is still hard to conceive. But it is obvious that even though the body or id may change, the soul and spirit, or ego and superego, depending on your perspective, are still inseparable at this point.

    What I am trying to say is we may create an intelligence, but we also have to create an ethical foundation that guides that intelligence before we unleash it. And the more I learn about the relativistic reasoning of the human mind when making ethical judgments the less faith I have in our ability to create what we would deem an ethical AI from our own perspective. Half the world could consider the AI friendly and the other half could consider it an abomination.

  • Davy

    Perhaps we should first create an ethical foundation that guides our intelligence.

  • Steven Thomas

    This AI is going to have to be very smart to overcome the problem of limited resources, and smarter still to overcome the conflicts that arise from the other functions of our brains, such as instincts and passions. Our planet can only accommodate so many people. Will this AI teach us how to live simply and fairly? Will this AI solve jealousy, gluttony, xenophobia, religious fundamentalism, recklessness, vengeance, and other basic tensions of primate social structures? Can it solve these problems, and leave us human?

  • denis bider

    “Ethics”. What we’re talking about here is not ethics. We don’t want the supermachine to be “ethical”. We ourselves are not “ethical”, especially not towards inferior things. No one (except in limited cases) expects us to be ethical towards inferior things, and no one (except in limited cases) actually enforces our being ethical towards them. Our “ethicism” in this regard is fraught with inconsistency. On the one hand we have PETA and the pet police. On the other hand we slaughter pigs and cows and breed chickens that can’t stand upright because we’ve made their breasts grow too big. We go to the supermarket and buy a bag of shrimp that have been slaughtered by the billion. We have no bad thoughts about that.

    We operate on no “ethics”.

    What you’re talking about here with “friendly AI” and “ethical AI” doesn’t involve any lofty principles either. The basic principle you wish to imbue your AI with is something like: “thou shalt not harm thy creator”. This is pure human selfishness, and our ability to imbue a vastly superior being with that concept may be very limited. After all, nature has imbued us with a desire to have sex in order to reproduce, and so we invented condoms. A superior machine can invent all kinds of workarounds to avoid the inconvenience of our flimsy, built-in “ethics”.

    If someone succeeds in building strong AI, and the creature doesn’t kill itself immediately upon coming into existence, then it’s pretty much over for the human race. So far we’ve argued against religious believers that we can’t see god. Well, if strong AI does come into existence, there won’t be any doubt, we’ll all see one.

    That said, I don’t really see any incentive for a VC to fund something like this. A VC is a creature that operates and makes sense within the confines of a certain environment. A VC, as an organization, doesn’t necessarily have any incentive to change or revolutionize that environment. Nor does it have the authority to do it.

    Your best bet for funding would be rich people who can dispense with their own money without having to answer to anyone.

  • Richard Hollerith

    “A superior machine can invent all kinds of workarounds to avoid the inconvenience of our flimsy, built-in ‘ethics’.”

    But a correctly built superior machine has no goals or preferences other than those its designers intentionally give it, so it will not want to invent a workaround.

    I hope you will read more of Eliezer’s writings on AI. For example, Knowability of AI
