What is Intelligence?

 


When asked their opinions about “human-level artificial intelligence” — aka “artificial general intelligence” (AGI)1 — many experts understandably reply that these terms haven’t yet been precisely defined, and it’s hard to talk about something that hasn’t been defined.2 In this post, I want to briefly outline an imprecise but useful “working definition” for intelligence we tend to use at MIRI. In a future post I will write about some useful working definitions for artificial general intelligence.

 

Imprecise definitions can be useful

Precise definitions are important, but I concur with Bertrand Russell that

[You cannot] start with anything precise. You have to achieve such precision… as you go along.

Physicist Milan Ćirković agrees, and gives an example:

The formalization of knowledge — which includes giving precise definitions — usually comes at the end of the original research in a given field, not at the very beginning. A particularly illuminating example is the concept of number, which was properly defined in the modern sense only after the development of axiomatic set theory in the… twentieth century.3

For a more AI-relevant example, consider the concept of a “self-driving car,” which has been given a variety of vague definitions since the 1930s. Would a car guided by a buried cable qualify? What about a modified 1955 Studebaker that could use sound waves to detect obstacles and automatically engage the brakes if necessary, but could only steer “on its own” if each turn was preprogrammed? Does that count as a “self-driving car”?

What about the “VaMoRs” of the 1980s that could avoid obstacles and steer around turns using computer vision, but weren’t advanced enough to be ready for public roads? How about the 1995 Navlab car that drove across the USA and was fully autonomous for 98.2% of the trip, or the robotic cars which finished the 132-mile off-road course of the 2005 DARPA Grand Challenge, supplied only with the GPS coordinates of the route? What about the winning cars of the 2007 DARPA Urban Challenge, which finished an urban race while obeying all traffic laws and avoiding collisions with other cars? Does Google’s driverless car qualify, given that it has logged more than 500,000 autonomous miles without a single accident under computer control, but still struggles with difficult merges and snow-covered roads?4

Our lack of a precise definition for “self-driving car” doesn’t seem to have hindered progress on self-driving cars very much.5 And I’m glad we didn’t wait to seriously discuss self-driving cars until we had a precise definition for the term.

Similarly, I don’t think we should wait for a precise definition of AGI before discussing the topic seriously. On the other hand, the term is useless if it carries no information. So let’s work our way toward a stipulative, operational definition for AGI. We’ll start by developing an operational definition for intelligence.

A definition for “intelligence”

Legg and Hutter (2007) found that definitions of intelligence converge toward the idea that “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” Let’s call this the “optimization power” concept of intelligence, because it measures an agent’s power to optimize the world according to its preferences.
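Legg and Hutter go on to make this informal statement formal in their “universal intelligence” measure. As a rough sketch of their definition (the notation here is theirs, lightly simplified): an agent π’s intelligence Υ(π) is its expected performance V as summed over a space E of computable environments μ, with each environment weighted by its simplicity via its Kolmogorov complexity K(μ):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}
```

Simpler environments count for more, and no single environment dominates the sum. We won’t need this level of formality below, but it illustrates that the “goals in a wide range of environments” idea can be made precise.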

I think this is a productive approach to the issue, since it identifies intelligence with externally measurable performance rather than with the details of how that performance might be achieved (e.g. via consciousness, brute force calculation, “complexity,” or something else). Moreover, it’s usually performance we care about: we tend to care most about whether an AI will perform well enough to replace human workers, or whether it will perform well enough to improve its own abilities without human assistance, not whether it has some particular internal feature.6

Furthermore, the concept of optimization power allows us to compare the intelligence of different kinds of agents. As Albus (1991) said, “A useful definition of intelligence… should include both biological and machine embodiments, and these should span an intellectual range from that of an insect to that of an Einstein, from that of a thermostat to that of the most sophisticated computer system that could ever be built.”

I’d like to add one more consideration, though. What if two agents have roughly equal ability to optimize the world according to their preferences, but the second agent requires far more resources to do so? These agents have the same optimization power, but the first one seems to be optimizing more intelligently. So perhaps we could use “intelligence” to mean “optimization power divided by resources used” — what Yudkowsky called efficient cross-domain optimization.7
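The arithmetic here can be made concrete with a toy sketch (this is only an illustration of the “divided by resources used” idea, not MIRI’s formalism; the numbers and the `efficiency` helper are invented for the example):

```python
# Toy illustration of "intelligence = optimization power / resources used".
# Two agents achieve the same optimization power, but the one that spends
# fewer resources is, under this definition, optimizing more intelligently.

def efficiency(optimization_power: float, resources_used: float) -> float:
    """Efficient optimization: optimization power divided by resources used."""
    return optimization_power / resources_used

# Both agents steer the world into equally preferred states...
agent_a = {"optimization_power": 100.0, "resources_used": 10.0}
agent_b = {"optimization_power": 100.0, "resources_used": 1000.0}

# ...but agent A uses 1% of agent B's resources, so A scores higher.
assert efficiency(**agent_a) > efficiency(**agent_b)
```

The point of the division is just that equal outcomes achieved with unequal budgets should not count as equal intelligence.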

Other definitions8 have their merits, too. But at MIRI we find the concept of “efficient cross-domain optimization” sufficiently useful that it serves as our (still imprecise!) working definition for intelligence.

In a future post, I’ll discuss some useful working definitions for artificial general intelligence.

 


  1. I use the terms HLAI and AGI interchangeably, but lately I’ve been using AGI almost exclusively, because I’ve learned that many people in the AI community react negatively to any mention of “human-level” AI but have no objection to the concept of narrow vs. general intelligence. See also Ben Goertzel’s comments here.
  2. Asked when he thought HLAI would be created, Pat Hayes (a past president of AAAI) replied: “I do not consider this question to be answerable, as I do not accept this (common) notion of ‘human-level intelligence’ as meaningful.” Asked the same question, AI scientist William Uther replied: “You ask a lot about ‘human level AGI’. I do not think this term is well defined,” while AI scientist Alan Bundy replied: “I don’t think the concept of ‘human-level machine intelligence’ is well formed.” 
  3. Sawyer (1943) gives another example: “Mathematicians first used the sign √-1, without in the least knowing what it could mean, because it shortened work and led to correct results. People naturally tried to find out why this happened and what √-1 really meant. After two hundred years they succeeded.” Dennett (2013) makes a related comment: “Define your terms, sir! No, I won’t. That would be premature… My [approach] is an instance of nibbling on a tough problem instead of trying to eat (and digest) the whole thing from the outset… In Elbow Room, I compared my method to the sculptor’s method of roughing out the form in a block of marble, approaching the final surfaces cautiously, modestly, working by successive approximation.” 
  4. With self-driving cars, researchers did use many precise external performance measures (e.g. accident rates, speed, portion of the time they could run unassisted, frequency of getting stuck) to evaluate progress, as well as internal performance metrics (speed of search, bounded loss guarantees, etc.). Researchers could see that these bits of progress were in the right direction, even if their relative contribution long-term was unclear. And so it is with AI in general. AI researchers use many precise external and internal performance measures to evaluate progress, but it is difficult to know the relative contribution of these bits of progress toward the final goal of AGI. 
  5. Heck, we’ve had pornography for millennia and still haven’t been able to define it precisely. Encyclopedia entries for “pornography” often simply quote Justice Potter Stewart: “I shall not today attempt further to define the kinds of material I understand to be [pornography]… but I know it when I see it.” 
  6. We might care about whether machines are conscious in addition to being intelligent, but we already have a convenient term for that: consciousness. In particular, we might care about machine consciousness because the slow, plodding invention of AI may involve the creation and destruction of millions of partially-conscious near-AIs that are switched on, suffer for a while, and are then switched off — all while being unable to signal to us that they are suffering. This is especially likely if we remain unclear about the nature of consciousness for several more decades, and thus have no principled way (e.g. via nonperson predicates) to create intelligent machines that we know are not conscious (and are thus incapable of suffering). One of the first people to make this point clearly was Metzinger (2003), p. 621: “What would you say if someone came along and said, ‘Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development—we urgently need some funding for this important and innovative kind of research!’ You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby—no representatives in any ethics committee.” Metzinger repeats the point in Metzinger (2010), starting on page 194. 
  7. Admittedly, this is still pretty vague. One step toward precision would be to propose a definition of intelligence as optimization power for some canonical distribution of possible preferences, over some canonical distribution of environments, with a penalty for resource use. The canonical preferences and canonical environments could be weighted toward preferences and environments relevant to our concerns: we care more about whether AIs can do science than whether they can paint abstract art, and we care more about whether they can achieve their goals in our solar system than whether they can achieve their goals inside a black hole. Also see Goertzel (2010)‘s “efficient pragmatic general intelligence.” 
  8. Hibbard (2011); Legg & Veness (2011); Wang (2008); Schaul et al. (2011); Dowe & Hernandez-Orallo (2012); Goertzel (2010); Adams et al. (2011)
  • http://www.stafforini.com/ Pablo Stafforini

    I don’t quite see the point of this exercise. As David Chalmers noted, the argument for the claim that there will soon be an intelligence explosion can be formulated in a way that makes no use whatever of the concept of intelligence. The relevant notion is instead that of a self-amplifying cognitive capacity. All that is needed is the thesis that we can create systems surpassing humans in that cognitive capacity, and the thesis that this capacity is correlated with changes in some morally relevant capacity (such as the capacity to implement CEV or the capacity to realize hedonium). See Chalmers’s paper (sect 3) for a more detailed discussion.

    • http://CommonSenseAtheism.com lukeprog

      When discussing the future of AI, many people ask “But what do you mean by ‘intelligence’?” The purpose of this post is to answer that question. And I also agree with Chalmers’ points in section 3 of his paper.

      • Bastian Stern

        I don’t fully follow your response, Luke. Pablo’s point is that the answer to that question doesn’t particularly matter. In that case, why doesn’t pointing out that/why the answer doesn’t affect your argument constitute a fully adequate response to people who ask that question?

  • mambo_bab

    Definitely, imprecise definitions can be useful.

    I would like to say…
    In the same way, it is often said that consciousness has not been precisely defined. However, I actually think an outline theory of consciousness will be found within a year or so. Its basic principle is probably very simple, but from an outsider’s viewpoint the phenomena seem different. In the case of consciousness, my hypothesis is that [memory to association to prediction] is the basic process (following Jeff Hawkins’s theory), and that it also leads to metacognition, free will, instinct, and emotion (going beyond Jeff Hawkins’s theory).

    • David_Rogers_Hunt

      I’ve always liked the idea that consciousness is largely synonymous with awareness. When an organism is experiencing pain, pleasure, hunger, thirst, or lust, that organism is necessarily aware of the sensation it is experiencing. Humans, who can experience these animal qualities, can also experience curiosity, envy, jealousy, generosity, and more, by building models of our own minds and of the minds of others. Anthropologists seem to be converging on the opinion that the human mind largely evolved as our counterpart to the peacock’s tail: a means to attract a mate and achieve higher social status.

      When an organism feels something, it is aware of what it is feeling, which also implies it is conscious of what it feels. Of course, it also helps if an organism can take some effective action in response to what it is feeling. Our memory allows us to extend our sense of self over long expanses of time.

      Saying the same thing in a different fashion,… if an entity can feel nothing, then what would consciousness ever mean to it? Is having a distinct body necessary to attain a sense of consciousness then? Is an ever present task, such as the struggle to survive, prosper, and propagate, necessary for complex and nuanced consciousness, such as our own?

      • mambo_bab

        In simple terms, it is difficult to distinguish between consciousness and awareness. And in both cases, the theory that the [association to prediction] process is based on [input to memory] should hold.

  • Amit

    “I’d like to add one more consideration, though. What if two agents have roughly equal ability to optimize the world according to their preferences, but the second agent requires far more resources to do so? These agents have the same optimization power, but the second one seems to be optimizing more intelligently.”

    The second agent requires far more resources, and is therefore optimizing more intelligently? Isn’t it the opposite?

    • http://CommonSenseAtheism.com lukeprog

      Fixed, thanks!
