When asked their opinions about “human-level artificial intelligence” — aka “artificial general intelligence” (AGI) — many experts understandably reply that these terms haven’t yet been precisely defined, and it’s hard to talk about something that hasn’t been defined. In this post, I want to briefly outline an imprecise but useful “working definition” for intelligence we tend to use at MIRI. In a future post I will write about some useful working definitions for artificial general intelligence.
Imprecise definitions can be useful
Precise definitions are important, but I concur with Bertrand Russell that
[You cannot] start with anything precise. You have to achieve such precision… as you go along.
Physicist Milan Ćirković agrees, and gives an example:
The formalization of knowledge — which includes giving precise definitions — usually comes at the end of the original research in a given field, not at the very beginning. A particularly illuminating example is the concept of number, which was properly defined in the modern sense only after the development of axiomatic set theory in the… twentieth century.
For a more AI-relevant example, consider the concept of a “self-driving car,” which has been given a variety of vague definitions since the 1930s. Would a car guided by a buried cable qualify? What about a modified 1955 Studebaker that could use sound waves to detect obstacles and automatically engage the brakes if necessary, but could only steer “on its own” if each turn was preprogrammed? Does that count as a “self-driving car”?
What about the “VaMoRs” of the 1980s that could avoid obstacles and steer around turns using computer vision, but weren’t advanced enough to be ready for public roads? How about the 1995 Navlab car that drove across the USA, steering itself for 98.2% of the trip while a human operated the throttle and brakes, or the robotic cars that finished the 132-mile off-road course of the 2005 DARPA Grand Challenge, supplied only with the GPS coordinates of the route? What about the winning cars of the 2007 DARPA Urban Challenge, which completed an urban course while obeying all traffic laws and avoiding collisions with other cars? Does Google’s driverless car qualify, given that it has logged more than 500,000 autonomous miles without a single accident under computer control, but still struggles with difficult merges and snow-covered roads?
Our lack of a precise definition for “self-driving car” doesn’t seem to have hindered progress on self-driving cars very much. And I’m glad we didn’t wait to seriously discuss self-driving cars until we had a precise definition for the term.
Similarly, I don’t think we should wait for a precise definition of AGI before discussing the topic seriously. On the other hand, the term is useless if it carries no information. So let’s work our way toward a stipulative, operational definition for AGI. We’ll start by developing an operational definition for intelligence.