Three Major Singularity Schools

I’ve noticed that Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.

  • Accelerating Change:
    • Core claim: Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.
    • Strong claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.
    • Advocates: Ray Kurzweil, Alvin Toffler(?), John Smart
  • Event Horizon:
    • Core claim: For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.
    • Strong claim: To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.
    • Advocates: Vernor Vinge
  • Intelligence Explosion:
    • Core claim: Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.
    • Strong claim: This positive feedback cycle goes FOOM, like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of more than one further improvement of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates superintelligence (minds orders of magnitude more powerful than human) before it hits physical limits. (A toy numerical sketch contrasting this with the Accelerating Change strong claim follows the list.)
    • Advocates: I. J. Good, Eliezer Yudkowsky
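
To make the contrast between the two strong claims concrete, here is a minimal numerical sketch. All of the parameters (the 2-year doubling time, the multiplier k, the function names) are my own illustrative assumptions, not figures claimed by any of the schools:

```python
# Toy contrast between the two strong claims; all numbers are illustrative.

def smooth_exponential(start=1.0, doubling_time_years=2.0, years=30):
    """Accelerating Change (strong form): capability follows a smooth curve,
    so extrapolating the curve gives a definite value at a definite date."""
    return start * 2 ** (years / doubling_time_years)

def chain_reaction(start=1.0, k=1.1, rounds=50):
    """Intelligence Explosion (strong form): each improvement triggers an
    average of k further improvements; k > 1 runs away, k < 1 fizzles out."""
    capability, step = start, 1.0
    for _ in range(rounds):
        step *= k               # each round spawns k times as much improvement
        capability += step
    return capability

print(smooth_exponential())      # ~32768x after 30 years on the smooth curve
print(chain_reaction(k=1.1))     # keeps growing without bound as rounds increase
print(chain_reaction(k=0.9))     # bounded: improvements peter out below 10x
```

The only point of the sketch is structural: one claim is a predictable curve you can read dates off of, the other is a threshold effect whose behavior flips qualitatively around k = 1.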

The thing about these three logically distinct schools of Singularity thought is that, while all three core claims support each other, all three strong claims tend to contradict each other.

If you extrapolate our existing version of Moore’s Law past the point of smarter-than-human AI to make predictions about 2099, then you are contradicting both the strong version of the Event Horizon (which says you can’t make predictions because you’re trying to outguess a transhuman mind) and the strong version of the Intelligence Explosion (because progress will run faster once smarter-than-human minds and nanotechnology drop it into the speed phase of transistors).

I find it very annoying, therefore, when these three schools of thought are mashed up into Singularity paste. Clear thinking requires making distinctions.

But what is still more annoying is when someone reads a blog post about a newspaper article about the Singularity, comes away with none of the three interesting theses, and spontaneously reinvents the dreaded fourth meaning of the Singularity:

  • Apocalyptism: Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.

I’ve heard (many) other definitions of the Singularity attempted, but I usually find them to lack separate premises and conclusions. For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down. But what makes this an interesting point in history apart from its definition? What are the consequences of this assumption? To qualify as a school of thought or even a thesis, one needs an internal structure of argument, not just a definition.

If you’re wondering which of these is the original meaning of the term “Singularity,” it is the Event Horizon thesis of Vernor Vinge, who coined the word.

  • Cosmic Vortex

    Nice essay. I understand what you are saying about the three different projections contradicting each other, but I think the odds that they actually do are very small.
    The only way it would be possible for these views to contradict would be if the superintelligence that emerges chooses not to continue on Moore’s path. While this is certainly possible, I think it’s very improbable that it would choose to do so. I think there’s at least one reasonably safe assumption we can make about the intelligence beyond the singularity… it will continue to expand its intelligence and power. This will mean continuing on Moore’s path at least until it gets a technology that can replace it altogether. If that’s a contradiction between views, I think it’s a bit too small to count.

  • Joshua Fox

    Great clarification of these concepts.

    By the way, the word is “Apocalypticism” rather than “Apocalyptism.” Not a big deal, but considering the common mangling of “Singularitarianism,” a similar mouthful, it’s best to get the term right.

  • http://joeduck.wordpress.com Joe Hunkins

    A nice post, though I’m not clear on why these are incompatible ideas. I’m guessing Kurzweil would agree that an intelligence explosion is likely, though he seems to think it’ll take some years of conscious computing before that happens (you don’t?). And most would agree that the future is *very* difficult to predict even without a singularity, so the event horizon is also a reasonable assertion in the other two scenarios.

  • http://www.saunalahti.fi/~tspro1/ Kaj Sotala

    Accelerating Change is a bit different from the two others, but I’m not sure why there’s a need to differentiate between Event Horizon and Intelligence Explosion. Intelligence Explosion, at least, seems to lead to Event Horizon (though not necessarily vice versa – but then, I don’t think I’ve heard of an Event Horizon formulation that wouldn’t have included Intelligence Explosion at least implicitly).

    The core claim of Accelerating Change supports the two others on the “why should we believe this should happen soon” question, though its strong claim is more dubious.

  • http://transhumangoodness.blogspot.com/ Roko

    Kaj said: “Accelerating Change is a bit different from the two others, but I’m not sure why there’s a need to differentiate between Event Horizon and Intelligence Explosion.”

    I’ll second that, Kaj.

  • Jeffrey Herrlich

    I’m an Intelligence Explosioner too. I don’t favor Accelerating Change much, because it’s a speculative extrapolation of *human* ability, whereas I believe that the first Strong AI will represent a huge qualitative difference. And I don’t favor the Event Horizon, in the sense that I believe that the “objectives” of the future *can* be predicted and guided – hence the Friendly AI thesis. It’s only the means to the ends that *perhaps* can’t be reasonably predicted by us humans. But fortunately a Friendly AI will be a super-expert at strategizing ways to effectively avoid violating its stable goals.

    • Cesium

      Jeff — Why? Processing power is processing power. It doesn’t matter if it is in vivo or in silico. Distributed processing and shared memory processing are isomorphic. Why will “Strong AI” be qualitatively different?

  • http://hanson.gmu.edu Robin Hanson

    OK, for an analogy, we are driving down a mountain and:
    1) we can see that the road ahead steadily gets steeper, or
    2) the road ahead is straight, but then passes into clouds, or
    3) we clearly see both a cliff ahead and a vertical drop below, or
    4) we are just warned “beware of something weird ahead”
    To which I’d add:
    5) we see the road ahead suddenly turns and gets much steeper.

  • TONY COHEN

    20 years ago, if you were a ‘naked’ Singularity, you’d be at a Black Hole’s Event Horizon, about to be sucked in.

  • http://www.acceleratingfuture.com/tom Tom McCabe

    “This will mean continuing on Moore’s path at least until it gets a technology that can replace it altogether.”

    What is the probability that a superintelligence would develop chips at *exactly* the same speed that humans do? Not even twice as slow, or twice as fast? Consider that the difference in raw FLOPS is going to be several orders of magnitude and up.

  • http://zbooks.blogspot.com Zubon

    Don’t the three lead to one another? Accelerating Change is the source of the Intelligence Explosion, as the curve becomes increasingly steep. This leads to the Event Horizon, since we cannot predict beyond an Intelligence Explosion.

    Intelligence Explosion may wear out the “smooth curve” part of Accelerating Change, but I don’t think Mr. Kurzweil is that strongly committed to it. If nothing else, we are usually slightly off the curve, and “slightly off” becomes “way off” pretty quickly when we increase at x^y^z. And unless you believe that you can predict what a superintelligence will do, Eliezer, Intelligence Explosion implies an Event Horizon.

  • Russell123

    Interesting article, but it’s unclear which one leads to robot sword fights in outer space.

    http://www.youtube.com/watch?v=DDrbbe7lgyE&feature=related

    My guess would be apocalypticism.

  • Greg

    I would favour the Intelligence Explosion theory for future technological change, with a mixture of both the core claim and the strong claim. Could human intelligence match pace? I don’t think it could just by being augmented, if there were other pure, computer-based superintelligences. For human intelligence to keep pace it would have to go the way Kurzweil thinks, where humans upload their minds to a computer or android. But would the new person recognize themselves as the same person as their former self? If the former human version were still alive after transferring their mind to an android, wouldn’t it be like identical twins? While knowing they are very similar, they each recognize the other as a different person, not one and the same. But if a human uploaded their mind to an android, and the instant it was activated the human was killed or at least put into some kind of suspended animation, then the android might be able to recognize itself as the former human.
    Another way might be to try to keep the knowledge of one version of the person from the other, but how could the original human forget that they had just uploaded? The knowledge of uploading their mind, along with all other knowledge, would be transferred to the android as well, unless this information could be deleted from both. It might be possible for the two versions to exist at the same time until the human version died naturally; by then, of course, the android version would have evolved much further and faster than the human version, into someone entirely different, while still retaining knowledge from its human past.
    If humans at that time knew that the uploaded version of themselves would become someone entirely different, and that their own consciousness would not be transferred, then they might not bother to upload.
    This type of question will need to be addressed, either starting now or sometime in the near future when more knowledge is available.

  • Cesium

    The core claims of the Event Horizon and Intelligence Explosion schools of thought are hugely flawed.

    Event Horizon says “technology will advance to the point of improving on human intelligence”.

    Intelligence Explosion says “If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans”.

    The problem is that superhuman intelligence already exists. Putting a man on the moon required more than one person’s mental abilities. Building modern CPUs consumes more than one person’s intelligence.

    We don’t sit around waiting for someone to invent a superhuman intelligence and then everything goes “foom”. We are currently in the rapid feedback cycle where our very modest artificial processing power contributes to coordinating global human processing power to build slightly more powerful artificial processing power.

    These are not three separate schools. Accelerating Change and Intelligence Explosion are tightly coupled. Event Horizon is completely orthogonal and asks whether or not one can sensibly talk about what the world will look like after a sufficient amount of change.

    The strong claims are not in conflict. Accelerating Change says that we can confidently predict that new technologies will arrive by certain periods of time. It does not specify what those technologies are. Intelligence Explosion says that one of the areas that will be changed by Accelerating Change is the amount of processing power available to humans.

    Together, Accelerating Change and Intelligence Explosion state that we can confidently predict that we will someday have a *lot* of processing power available.

    Event Horizon asks whether or not we can imagine what the world will be like once that much processing power is available.

    The timing of the Singularity needs to be taken with a large grain of salt. Current estimates suggest that when we build a cubic-foot box of processing power that is as powerful as a human brain, things will get quite interesting. However, it doesn’t matter whether computation is performed on a distributed computer or within a single cubic-foot box. There are enough different problems to work on that distributing the processing of the problems around the world isn’t a problem.

    Thus, the point at which we build cubic-foot boxes that contain human-equivalent intelligence is roughly the point at which we build, in one year, an amount of processing power equal to the processing power of existing humans. It’s the point at which we start doubling the total processing power available to humanity every couple of years. It will still take an additional 14 years or so to get “orders of magnitude more” processing power than is available to unaugmented humans.
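
    As a rough back-of-the-envelope check of that arithmetic (assuming the roughly 2-year doubling time above, and reading “orders of magnitude” as about 100x):

    ```python
    # How long does a ~2-year doubling time take to reach a ~100x increase?
    import math

    doubling_time_years = 2            # assumed doubling time, as above
    target_factor = 100                # "orders of magnitude more" ~ 100x
    doublings_needed = math.log2(target_factor)             # ~6.6 doublings
    years_needed = doublings_needed * doubling_time_years   # ~13 years
    print(round(years_needed))         # close to the "14 years or so" above
    ```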

    We also need to note that today, we are not utilizing much of the available human processing power. Roughly half the world population is well connected. Africa is poorly connected; education rates are low in many parts of the world; rural farmers are poorly connected. Improvements in education and the interconnectedness of people will grow human processing power much faster for most of the next few decades than building faster artificial processing power.
