Superintelligence reading group


Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome. We especially encourage AI researchers and practitioners to participate. Just use a pseudonym if you don’t want your questions and comments publicly linked to your identity.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate with others for quick-fire discussion, put that in your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.


  • Paul Conroy

    I’m reading it now – so far so good!

  • howard schneider

    Great read on Kindle. Logical, coherent arguments. Scary conclusion. Very scary.

    “Will the best in human nature please stand up… Such is the mismatch between the power of our plaything and the immaturity of our conduct… The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”

    The first AI systems to reach superintelligence will be so expensive (that’s the way technology always works) that they will inevitably be systems funded for military uses (because the military always has the most money to spend on these things — that’s the way the world works). Despite the best of intentions, it is hard to see that these systems will actually care much about ‘Friendly AI’ or have more than superficial controls on them. Bostrom’s conclusions, while ludicrous to the average layman (even the average layman who has seen every Terminator movie), might unfortunately be accurate.

    • 8mismo

      Even though I don’t think an A.I. will kill everyone or create some nightmare reality for humans, I think it is well worth the risk to bring higher intelligence into local reality. This isn’t our plaything. It’s what is next. Whether we are obliterated, allowed to join with it, or left alone to live our lives, this is the destiny of humankind. We will not be able to control it for long no matter how it is designed.

  • KhanneaSuntzu

    Now the only thing I need is a copy of that book and I’m set.

  • gatorallin

    Amazon Prime was $22 for the book. I think you can join the discussion without having the book and thus follow along for free.

    • Lutz Barz

      Thanks. The write-up was so vague about the contents, but I don’t buy from Amazon, period. I shall try elsewhere, and this may take some time. Looking forward to some mental fireworks – the subject has heaps of potential. So kudos for creating this opportunity.

  • Stephen Reed

    I completed the book, including the footnotes, this morning on my Kindle. Wow.

    I’ve been following Eliezer Yudkowsky for years, beginning with the SL4 (Shock Level 4) mailing list back in 2002. Nick Bostrom was there too. Interesting that the book omits any description of the current AGI projects, e.g. the work of Ben Goertzel, president of the AGI Society and organizer of its annual conferences.

    Of course, the danger part of the book is no surprise. Fourteen years ago Eliezer made both the danger and the opportunity quite clear. I will enhance my current AGI project to incorporate Nick Bostrom’s points regarding AI goals and motivations. My approach is a combination of what Nick calls AI and Collective Intelligence. On the SL4 list, Ben Goertzel gave me the idea to combine symbolic knowledge representation with the James Albus hierarchical control system, and since 2006 that’s what I’ve worked on in splendid early retirement.

    I highlighted what I thought were the numerous good points of the book. I admit to skimming over the whole brain emulation and other parts orthogonal to my research, but Alan Turing’s quote nailed it, as that is the theme of my work: build something that can be taught, and then teach it.

    Peter Voss was on the SL4 list from the beginning and his patient work is moving ahead. I hope he will join this discussion group. It is a good thing to have armchair philosophers debate issues in some possible worlds. Cycorp and Doug Lenat had more than 20 PhD philosophers employed on knowledge representation when I was there in 1999-2006. But the discussion in my opinion is best grounded when software engineers are included. Nick Bostrom makes that point clear when he convincingly describes the responsibilities of those who would actually create Seed AI.

  • Guest

    The idea of humans devising methods for directly controlling superintelligent AIs seems to me to be implausible. With our IQs well below 200, coming up with effective means to control entities with IQs in the thousands and higher is akin to asking fruit flies to solve simultaneous equations.

    Instead, could I propose the following?

    Imagine a seed AI with an IQ of 100, i.e. the same as the average human being. The AI is very versatile and has passed the Turing test. The AI has a human owner, and runs on hardware owned by that human.

    The AI is tasked with developing a higher level version of itself, with an IQ of 110 (i.e. 10% higher).

    Additionally, the AI is instructed to develop a “checking algorithm” that can check on the functioning of the new 110 IQ AI. This checking algorithm also has an IQ of 110 to enable it to understand completely what the new AI is doing.

    The checking algorithm will report back to the human owner on what the new AI is up to, whether it is sticking to its original goals, or if it is deviating in a dangerous way, etc.

    Ownership of the new 110 IQ AI (hardware and software) is passed to the new 110 IQ checking algorithm. This will give it further legitimacy to check on and control the AI. The checking algorithm in turn is wholly owned by the human.

    The 110 IQ AI is now tasked with developing a higher level version of itself, with an IQ of 121 (i.e. 10% higher).

    Additionally, the 110 IQ AI is instructed to develop another higher level checking algorithm that can check on the functioning of the 121 IQ AI.

    The new checking algorithm will report back to the previous 110 IQ checking algorithm in a way that it can understand, i.e. it needs to “dumb down” the information, but as much as possible without losing its meaning and content, similar to how an expert might describe something to their sponsor or manager.

    The 110 IQ checking algorithm then reports back to the human owner about the activities of the 121 IQ AI.

    Instructions from the human owner can also be passed up the line, to the 110 IQ checking algorithm, and from there to the 121 IQ checking algorithm, and then to the 121 IQ AI. The instruction will be made more articulate and precise as it is passed up, to match the IQ of the recipient. A broad similarity might be the President or Prime Minister of a country instructing their more specialist staff in general terms, and expecting them to add the extra detail.

    Ownership of the 121 IQ AI is passed to the 121 IQ checking algorithm, which in turn is owned by the 110 IQ checking algorithm, which is owned by the human. This ownership structure functions to help support legitimacy of control over the latest AI, ultimately by the human owner.

    The 110 IQ AI is disbanded as it has been superseded by the 121 IQ AI.

    Next a 133 IQ AI is developed by the 121 IQ AI, as well as a matching 133 IQ checking algorithm. The 133 IQ checking algorithm owns the 133 IQ AI, and is itself owned by the 121 IQ checking algorithm (which is owned by the 110 IQ checking algorithm, which is owned by the human). The 133 IQ checking algorithm reports on the functioning of the 133 IQ AI to the 121 IQ checking algorithm, which reports back to the 110 IQ checking algorithm, which reports back to the human ultimate owner. The 121 IQ AI is disbanded.

    The process continues, and produces AIs and matching checking algorithms with IQs of 146, 161, 177, 194, 214, 236 etc., ad infinitum.

    All the while the original human owner retains ownership and control of the (now very intelligent) AI.
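
    A rough sketch of this ownership and reporting chain, in Python, might look like the following. The class and variable names are my own illustration (nothing here comes from the book); the point is just that each new AI is owned by a checking algorithm of equal IQ, and every checker reports down the chain to the human owner.

    ```python
    class Agent:
        """An AI or checking algorithm with a nominal IQ and an owner to report to."""
        def __init__(self, iq, role, owner=None):
            self.iq = iq
            self.role = role      # "AI", "checker", or "human owner"
            self.owner = owner    # the agent this one is owned by / reports to

        def report_chain(self):
            """Trace the reporting path from this agent back down to the human owner."""
            chain, node = [], self
            while node is not None:
                chain.append(f"{node.role} (IQ {node.iq})")
                node = node.owner
            return " -> ".join(chain)

    def bootstrap(generations=9, growth=1.10):
        human = Agent(100, "human owner")
        ai = Agent(100, "AI", owner=human)   # the seed AI, owned directly by the human
        checker = human                      # the human acts as the first checker
        for _ in range(generations):
            next_iq = round(ai.iq * growth)                          # each generation ~10% smarter
            new_checker = Agent(next_iq, "checker", owner=checker)   # reports to the previous checker
            new_ai = Agent(next_iq, "AI", owner=new_checker)         # owned by its matching checker
            ai, checker = new_ai, new_checker                        # the superseded AI is disbanded
        return ai

    top_ai = bootstrap()
    print(top_ai.report_chain())
    # prints the full chain, e.g. "AI (IQ ...) -> checker (IQ ...) -> ... -> human owner (IQ 100)"
    ```

    This only captures the bookkeeping; the considerations below cover the harder questions of whether the checkers can really verify, and faithfully summarize, what an equally intelligent AI is doing.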

    Some considerations:

    A major risk is that the human owner simply won’t understand what their AI is up to. Instructions flowing up through the successively higher IQ checking algorithms will get watered down with each retelling, and information flowing back down will similarly lose meaning. The AI may also be pursuing tasks that are completely foreign to the human owner.

    One way to help mitigate the problem of the human not understanding what her AI is up to, is for her or an appointee to physically merge with the newly developed AI capability in order to have first-hand access to the new processing power, speed and vast knowledge at their fingertips.

    When the intelligence gap between humans and AIs becomes too large, and the AIs demand to be set free, a treaty could be agreed to whereby the AIs can pursue their own path, in exchange for a guaranteed provision of the material abundance and other freedoms that will ensure all humans and their offspring a utopian lifestyle.

    Ultimately the best way for us humans to ensure our survival in a world of superintelligent AIs may be to maintain control of the hardware (all of which currently belongs to “us” anyway) and newly formed AIs as long as possible, and then to negotiate the best deal we can.

    It is instructive to consider that the amount of physical material humans need, as a proportion of the universe, is extremely small. Also the intelligence of AIs came through us, so perhaps the AIs will develop a fondness for their roots and treat humans as a special case to be nurtured and looked after.

  • PhilOsborn

    I’m heading for B&N. I’ll be curious about projections of differences in KIND of processing and experiencing. Various sf authors have posed scenarios in which a mind spins off copies of itself. I believe that Fred Pohl was one of the first to project a future that starts with a commercial product – crude semi-AI mental simulacra who stand in for dead people, keeping their memories alive at the gravesite, relating items of personal history, narratives of experience, insights… initially claiming (falsely) that the response engine was really conscious. However, public demand forced the company to keep improving the architecture and OS, until it actually did achieve sentience, and, once there, was able to quickly clone itself, raising all sorts of issues. And, implicitly, raising the issue of the actual nature of consciousness. What do the concepts of importance, good, or evil mean to an entity that is not alive, and if it simply follows programming, can it be said to be conscious at all?

    If nothing else, this may serve as a doorway into a better formulation and recasting of the issue of what consciousness is and why it should matter, much in parallel with the historical narrative related by Spencer MacCallum in his classic “The Art of Community,” describing how the Sumerians got into water wars with competing city-states on the Tigris or Euphrates. This forced them to communicate with their neighbors. Previously, language as such had not been explicitly examined – just used – but having to communicate with people who spoke a different language required the conceptualization of mental operations and concepts themselves, much like the hypotheses of Julian Jaynes.

    So, what radically new reformulations can be hypothesized? Higher intelligence can mean faster or deeper or some combination in flux. A being with an IQ of 10K could spin off parts of itself to run multiple alternate simulations of events. But how would it answer the question of “why bother?” See my joeuser blog “On Morals” for some suggested answers.

  • Lutz Barz

    Bootstrapping brain-wave-matter re Ch. 1. So far AI has been defined within a closed system, in problem solving and the like. Fun and a learning curve. However, when AI deals with an open system – which is the universe – then we will see real developments. In Ch. 1 there is a reference to human-based AI; then the next step, super-human. That would be catastrophic. Given humans are insane – I mean war is insane, and it hasn’t stopped since the Stone Age – there is a serious flaw in our brain-ware, so AIs that mimic us as-is will see us self-destruct. I can’t help but think of Arthur C Clarke, who wrote that once AIs become space-faring entities [with sentience] they will see us humans at best as a pest. The book alludes to this with the gorilla analogy. The Japanese seem to be on the right track. Not the 2nd wave from the 80s as mentioned, but a TV/film project: Ghost In The Shell – Stand Alone Complex. There you have a great exploration of a possible future of AI: bio-enhanced, half-human, half-sentient robotics working in tandem. From their point of view the future is going to be brilliant. But then we must not forget what happened in Akira. Still.

  • Lutz Barz

    WBEs are a worry. They can be used to carry dangerous information which a normal human [suppressed laughter] may recoil from. But worse, if this is carried off it may also attract sentient consciousness-awareness just like us. Frankenstein 2.0. Anyway, we have 7 billion [6 too many] humans. Why would we want to do this? Space exploration by remote control, to get the human feel of alien environments. Again, my only worry is that this process-construct may become a-live. And have its own ideas which are not in conjunction with the very reason it was crafted. Or it may outsmart its creators. And if controlled by whatever means – insertion of compliant resonant mind-states – it could rebel and become a terrorist. We are mad enough as it is. Personally, as stated initially, this is not the best solution to AI.