Nate and Eliezer’s forthcoming book has been getting a remarkably strong reception.
I was under the impression that many people find the extinction threat from AI credible, but that far fewer of them are willing to say so publicly, especially by endorsing a book with an unapologetically blunt title like If Anyone Builds It, Everyone Dies.
That’s certainly true, but I think it might be much less true than I had originally thought.
Here are some endorsements the book has received from scientists and academics over the past few weeks:
This book offers brilliant insights into the greatest and fastest standoff between technological utopia and dystopia and how we can and should prevent superhuman AI from killing us all. Memorable storytelling about past disaster precedents (e.g. the inventor of two environmental nightmares: tetra-ethyl-lead gasoline and Freon) highlights why top thinkers so often don’t see the catastrophes they create.
—George Church, Founding Core Faculty, Synthetic Biology, Wyss Institute, Harvard University
A sober but highly readable book on the very real risks of AI. Both skeptics and believers need to understand the authors’ arguments, and work to ensure that our AI future is more beneficial than harmful.
—Bruce Schneier, Lecturer, Harvard Kennedy School
A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.
—Ben Bernanke, Nobel-winning economist; former Chairman of the U.S. Federal Reserve
George Church is one of the world’s top genetics researchers; he developed the first direct genomic sequencing method in 1984, sequenced the first genome (E. coli), helped start the Human Genome Project, and played a central role in making CRISPR useful for biotech applications.
Bruce Schneier is the author of Schneier on Security and a leading computer security expert. His book Applied Cryptography was, as far as I can tell, the first ever applied cryptography textbook, and it had a massive influence on the field.
We have some tangential connections to Church and Schneier. But Ben Bernanke? The Princeton macroeconomist who was the chair of the Federal Reserve under Bush and Obama? He definitely wasn’t on my bingo card.
I was even more surprised by the book’s reception among national security professionals:
A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.
—Jon Wolfsthal, former Special Assistant to the President for National Security Affairs; former Senior Director for Arms Control and Nonproliferation, White House, National Security Council
While I’m skeptical that the current trajectory of AI development will lead to human extinction, I acknowledge that this view may reflect a failure of imagination on my part. Given AI’s exponential pace of change there’s no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.
—Lieutenant General John (Jack) N.T. Shanahan (USAF, Ret.), Inaugural Director, Department of Defense Joint AI Center
I wish this wasn’t real. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.
—R.P. Eddy, former Director, White House, National Security Council
The authors raise an incredibly serious issue that merits—really demands—our attention. You don’t have to agree with the predictions or prescriptions in this book, nor do you have to be tech or AI savvy, to find it fascinating, accessible, and thought-provoking.
—Suzanne Spaulding, former Under Secretary, Department of Homeland Security
Jack Shanahan is a retired three-star general who helped build and run the Pentagon’s Joint AI Center—the coordinating hub for bringing AI to every branch of the U.S. military. He’s a highly respected voice in the national security community, and various folks I know in D.C. expressed surprise that he provided us with a blurb at all.
Suzanne Spaulding is the former head of what’s now the Cybersecurity and Infrastructure Security Agency (CISA), the U.S. government’s main agency for cybersecurity and for critical infrastructure security and resilience. Per her bio, she has “worked in the executive branch in Republican and Democratic administrations and on both sides of the aisle in Congress” and was also “executive director of two congressionally created commissions, on weapons of mass destruction and on terrorism.”
R.P. Eddy served as a Director at the White House National Security Council during the Clinton administration and has a background in counterterrorism.
Jon Wolfsthal was the Obama administration’s senior nuclear security advisor, and now serves as the Director of Global Risk at the Federation of American Scientists, a highly respected think tank. He also enthusiastically shared our book announcement on LinkedIn.
The book also got some strong blurbs from prominent people who were already on the record as seriously worried about smarter-than-human AI. Max Tegmark tweeted that he thought this was the “most important book of the decade.” Bart Selman, a Cornell Professor of Computer Science and principal investigator at the Center for Human-Compatible AI, wrote that he felt this was “a groundbreaking text” and “essential reading for policymakers, journalists, researchers, and the general public.” Other positive blurbs came from people like Huw Price, Tim Urban, Scott Aaronson, Daniel Kokotajlo, and Scott Alexander. These folks have been fighting the good fight for years if not decades. The book team has assembled a collection of these endorsements and others[1] at ifanyonebuildsit.com/praise.
I personally passed advance copies of the manuscript to many people (including many of those above) over the past couple of months, and it wasn’t all enthusiasm. For example, many of the members of the U.S. legislative and executive branches (and their staff) I’ve spoken with in recent weeks have declined to comment publicly, often despite offering private praise. We still have a long way to go before a critical mass of policymakers and public servants feels ready to discuss this issue openly.[2] But getting there seems meaningfully more possible to me now.
When I read a late draft of the book, I remember thinking that it had turned out pretty well. But only in the last couple of months, as reactions have started to come in from early readers, has it started to sink in for me that this book might be a game-changer. Not just because the book is a timely resource about an urgent issue, but because it’s actually landing.
If you’re excited by this and want to support the book, then preordering (and encouraging others to preorder) continues to be exceptionally helpful.[3] It’s still far from certain that the book will make bestseller lists, but it’s off to a very promising start. Some folks at the publisher expressed surprise at the strong early preorders, even though they already expected the book to be a big deal. It remains to be seen how much of this momentum we can sustain, versus how much of it was an initial, unreplicable boost. We have already done one event exclusive to people who preordered the book (a Q&A that Nate gave at the LessOnline conference), and we’re planning to do more such events online.
Aside from that, we’re now starting to line up media engagements (podcasts, YouTube videos, interviews, etc.) to run when the book comes out in mid-September, and shortly afterward. If you run a major podcast (or another form of media with a large audience), or you can put us in touch with someone who does, please reach out to us at media@intelligence.org.
The team at MIRI is humbled and grateful that so many serious voices in science, academia, policy, and other areas have come forward to help signal-boost this book.
It seems like we have a real chance. LFG.
- The book has also received endorsements from entertainers like Stephen Fry, Mark Ruffalo, Patton Oswalt, and Grimes; from Yishan Wong, former CEO of Reddit; from Emmett Shear, who was briefly the interim CEO of OpenAI; and from additional early reviewers. ↩
- We’re quite limited in how many advance copies we can circulate, but consider DMing us if you can put us in touch with especially important public figures, policy analysts, and policymakers who should probably see the book as soon as possible. ↩
- If you want to buy an extra copy (or, e.g., 5 or 10 copies) for friends and family who would be interested in reading the book, that’s potentially a big help too. Nate’s previous post warned that “bulk purchases” don’t help, and that warning worried some people more than it should have. To clarify, based on our current understanding: orders in the range of 5–10 copies are generally fine. We think it’s a bad idea to buy copies purely to inflate the sales numbers. (Though if you have a legitimate reason for preordering a large number of copies, e.g., for an event, reach out to us at contact@intelligence.org and we can direct you to the proper channels for such preorders.) But we encourage you to buy copies for friends and family if you want to. Ultimately, you should buy as many books as you’ll actually use, and not buy books that will just end up in dumpsters or as monitor stands. ↩