New report: “Leó Szilárd and the Danger of Nuclear Weapons”


Today we release a new report by Katja Grace, “Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation” (PDF, 72pp).

Leó Szilárd has been cited as an example of someone who predicted a highly disruptive technology years in advance — nuclear weapons — and successfully acted to reduce the risk. We conducted this investigation to check whether that basic story is true, and to determine whether we can take away any lessons from this episode that bear on highly advanced AI or other potentially disruptive technologies.

To prepare this report, Grace consulted several primary and secondary sources, and also conducted two interviews that are cited in the report and published here.

The basic conclusions of this report, which have not been separately vetted, are:

  1. Szilárd made several successful and important medium-term predictions — for example, that a nuclear chain reaction was possible, that it could produce a bomb thousands of times more powerful than existing bombs, and that such bombs could play a critical role in the ongoing conflict with Germany.
  2. Szilárd secretly patented the nuclear chain reaction in 1934, 11 years before the creation of the first nuclear weapon. It’s not clear whether Szilárd’s patent was intended to keep nuclear technology secret or to bring it to the attention of the military. In any case, it did neither.
  3. Szilárd’s other secrecy efforts were more successful. He caused many sensitive results in nuclear science to be withheld from publication, and his efforts seem to have encouraged additional secrecy by others. These efforts largely ended when a French physicist, Frédéric Joliot-Curie, declined to suppress a paper on neutron emission rates in fission. Joliot-Curie’s publication caused multiple world powers to initiate nuclear weapons programs.
  4. All told, Szilárd’s efforts probably slowed the German nuclear project in expectation. This may not have made much difference, however, because the German program ended up being far behind the US program for a number of unrelated reasons.
  5. Szilárd and Einstein successfully alerted Roosevelt to the feasibility of nuclear weapons in 1939. This prompted the creation of the Advisory Committee on Uranium (ACU), but the ACU does not appear to have caused the later acceleration of US nuclear weapons development.
  • http://www.dannen.com/ Gene Dannen

    Accurate facts are necessary to produce valid analysis and conclusions. However, many of the facts about Szilard in this article are wrong.

    An accurate study of the reception of Szilard’s ideas would have value. A substantial reworking of this article would be required to achieve that.

    Gene Dannen
    http://www.dannen.com

    • http://katjagrace.com Katja Grace

      That does sound problematic. Could you point to specific facts that are inaccurate?

      • http://www.dannen.com/ Gene Dannen

        For starters:
        Reason for secret patent
        Reasons he was disbelieved
        Prediction timelines
        Effectiveness of secrecy efforts
        Effectiveness of Einstein letter

        And FYI, Szilard never spelled his name Leó Szilárd after he left Hungary. His choice should be respected.

        • http://katjagrace.com Katja Grace

          What are the real prediction timelines like, on your account?

          • http://www.dannen.com/ Gene Dannen

            Certainly much earlier than your estimate of 1944-45.

            Szilard believed that one must prepare for the earliest possible onset of catastrophe, and take the possibility of unanticipated breakthroughs into account. In 1934, when he filed his first chain reaction application, a danger horizon of 5 years wouldn’t have been impossible. In the summer of 1939, I doubt that he expected more than 3 years of safety.

            Though these shorter estimates might make his example seem less applicable to your AI comparison, I think it’s more applicable than you realize, for other reasons.

  • RLoosemore

    The nuclear fission case involved relatively simple physics that unambiguously implied a massive explosion if a chain reaction could be triggered.

    Szilard and others were NOT considering a kind of physics where the possibility of a dangerous explosion was only speculation, where the detailed mechanism was in doubt, and where thousands of pieces of as-yet-undiscovered physics would all have to turn a certain way in order for an uncontrollable threat to exist.

    The Artificial Intelligence case is dramatically different in all of those ways (as well as others). The mere fact that Szilard might have taken action to slow the development of the atomic bomb is of absolutely no relevance to the AI case. The only lesson seems to be:

    1) If the threat is simple and clear: take action.
    2) If it is not even remotely clear that there might be a threat, or what sort of threat it is, or how to do something about it: do … something, … maybe.

    Which, of course, is useless.

    • http://www.dannen.com/ Gene Dannen

      If you learned a little bit about the start of the nuclear age, I think you would change your opinion on all of those points.

      • RLoosemore

        Since I am a physicist, I find that I know plenty about the start of the nuclear age. Is it not possible to reply to my point by discussing the topic, without implying that the problem is my ignorance?

        • http://www.dannen.com/ Gene Dannen
          • RLoosemore

            Completely irrelevant to the point I originally made.

          • http://www.dannen.com/ Gene Dannen

            You seem impervious to new ideas. Goodbye.

          • RLoosemore

            Ah, I stand corrected! 🙂

            I didn’t realize that “irrelevant, abusive criticism” was a synonym for “new ideas”.

            My fault entirely.

      • RLoosemore

        The physics of nuclear danger were straightforward, for the following reasons. Once it was understood that the total binding energy of the two smaller nuclei made by splitting a large nucleus could exceed the binding energy of the large nucleus itself, it became easy to calculate the difference and to speculate that a low-energy impact could cause the split, yielding a net energy release. Once that was appreciated, the big question was whether a cascade of such releases could occur as a result of a single initial event, in other words a chain reaction.
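
        To put rough numbers on that difference (a back-of-the-envelope sketch using modern values, not figures anyone had in 1934): uranium-235 is bound at roughly 7.6 MeV per nucleon, while typical mid-mass fission fragments are bound at roughly 8.5 MeV per nucleon, so

        \[
        \Delta E \approx 235 \times (8.5 - 7.6)\,\text{MeV} \approx 200\,\text{MeV per fission},
        \]

        tens of millions of times the per-atom energy scale of chemical explosives.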

        Now, even before the parameters for causing a chain reaction were known (which required a great deal of knowledge of actual collision cross-sections, etc.), the POSSIBILITY that it was a feasible type of event was very, very significant.
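
        As a minimal sketch of that feasibility condition (the standard textbook criterion, not anything specific to Szilard’s own reasoning): a chain reaction sustains itself when each fission yields, on average, at least one neutron that goes on to cause another fission,

        \[
        k = \nu P \ge 1,
        \]

        where \(\nu\) is the mean number of neutrons released per fission (about 2.5 for uranium-235) and \(P\) is the probability that any one of them induces a further fission, which is exactly the quantity those cross-sections, geometries, and purities determine.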

        The specter of a dangerous chain reaction did not depend on anything like a massive chain of unknown science to be discovered between the first piece of understanding and the final conclusion.

        My point, here, is that the science behind AI friendliness and safety is completely different. In fact (see my paper in the AAAI Spring Symposium 2014) there are good grounds for believing that the AI dangers being described in this forum and in the media are based on the most ludicrous fantasies about future types of AI that have not been invented yet, and would not work even if they were invented.

        For that reason, talking about Leo Szilard’s dilemma in the AI context is worse than useless – it is profoundly misleading.