If Anyone Builds It, Everyone Dies
- As we announced last month, Eliezer and Nate have a book coming out this September: If Anyone Builds It, Everyone Dies. This is MIRI’s major attempt to warn policymakers and the general public about AI extinction risk. Preorders are live now, and are exceptionally helpful.
- Preorder Bonus: We’re hosting two exclusive virtual events for those who preorder the book! The first is a chat between Nate Soares and Tim Urban (Wait But Why) followed by a Q&A, on August 10 @ noon PT. The second is a Q&A with both Nate and Eliezer in September (date and time TBD). For details, and to obtain access, head to ifanyonebuildsit.com/events.
- If you have graphic design chops, you can give us a hand by joining the advertisement design competition for the book.
- As Malo recently announced, advance copies of the book have been getting an incredible reception, with blurbs from:
- computer security guru Bruce Schneier;
- Suzanne Spaulding, former head of the DHS Cybersecurity and Infrastructure Security Agency (CISA), the US government’s lead agency for protecting critical infrastructure;
- Jack Shanahan, a retired three-star general and the inaugural director of the Pentagon’s Joint AI Center, the coordinating hub for bringing AI to every branch of the US military;
- Jon Wolfsthal, the Obama administration’s senior nuclear security advisor;
- legendary biologist George Church;
- Nobel-winning economist Ben Bernanke;
- … and many others.
- Nate has written a follow-up post arguing that the time is ripe to speak the truth about AI danger.
Other MIRI updates
- In a new research agenda, MIRI’s Technical Governance Team describes the strategic landscape of AI governance and identifies open questions whose answers could inform efforts to prevent an AI catastrophe.
- Announcing The Problem — MIRI’s new go-to explainer for AI x-risk (at least until our book comes out). We expect this to be an important resource for people new to this topic, and we’ve launched it alongside major updates to our website.
- Eliezer appeared on Robinson Erhardt’s podcast, Nate was interviewed by Will Cain on Fox News, and Malo Bourgon went on Liv Boeree’s podcast.
- The Technical Governance Team submitted recommendations in response to the US government’s Request for Information (RFI) on the Development of an AI Action Plan.
- Martin Lucas joined the operations team at MIRI. Previously, Martin was on the ops team at Anthropic. We’re excited to have him aboard!
- In the new MIRI Single Author Series, authors share their individual perspectives on topics relevant to MIRI’s mission. We have published several new pieces as part of this series:
- Max Harms shares some of his thoughts on AI 2027, a scenario by Daniel Kokotajlo’s team.
- In Refining MAIM: Identifying Changes Required to Meet Conditions for Deterrence, David Abecassis discusses limitations of the deterrence strategy in Superintelligence Strategy.
- In So You Want to Work at a Frontier AI Lab, Joe Rogero explains why he doesn’t buy the arguments for working at frontier labs.
- In Takeover Not Required, Duncan Sabien argues that artificial superintelligence (ASI) wouldn’t need to seize power: society is likely to willingly hand over control.
- In a response to OpenAI’s “How we think about safety and alignment,” Harlan Stewart raises objections to OpenAI’s strategy and messaging.
- Joe Collman, Joe Rogero, and William Brewer argue that there are dangerous gaps in AI labs’ safety frameworks, reflecting the industry’s systematic overconfidence.
Harlan Stewart and Rob Bensinger