The AI Doc: Buy tickets and spread the word!
On Thursday, March 26th, a major new AI documentary is coming out: The AI Doc: Or How I Became an Apocaloptimist. Tickets are on sale now.
The movie is excellent, and we believe it belongs in the same tier as If Anyone Builds It, Everyone Dies: an incredibly valuable resource for alerting policymakers and the general public to AI risk, especially if it smashes the box office.
When IABIED was coming out, the community did an incredible job of helping the book succeed; without all of your help, we might never have gotten on The New York Times bestseller list. MIRI staff think the community could play a similarly big role in helping The AI Doc succeed, and thereby help these ideas go mainstream.
The most valuable thing most people can do is maximize opening-weekend success. Buy tickets to see the movie now, and poke friends and family members to do the same. This will cause more theaters to pick up the movie, keep it in theaters longer, and broadly increase the film’s exposure and its chances of international releases.
You could also consider:
- Hosting a viewing party and bringing a larger group to the theater to see the movie together.
- Buying out a theater. If you’re interested, the producers have a contact form for this purpose. You can also contact the MIRI team if you’d like support from us for things like this, including potential financial support.
More about the film: An official Sundance and SXSW selection, The AI Doc follows Daniel Roher’s struggle to figure out what’s really going on with AI—and whether it will be okay—as he and his wife prepare for a new baby. It features a wide range of voices from across the AI landscape, including MIRI’s Eliezer Yudkowsky. Daniel Kwan (Everything Everywhere All at Once) is on the producing team.
Note: Two MIRI staff were interviewed for the film, but we weren’t involved in its production.
If Anyone Builds It, Everyone Dies update
- If Anyone Builds It, Everyone Dies is now available in Spanish, Italian, and Bulgarian! We’re expecting a number of other translated editions to hit the shelves later this year, including in German and Mandarin. Release dates have been announced for Dutch (March 24th), Brazilian Portuguese (April 2nd), and Japanese (April 22nd).
- Reaching a global audience is an important first step to building the political will for international coordination. If you have connections to content creators or others who could help promote the book in any of the regions above, we’d love for you to get in touch.
- IABIED continues to send ripples through U.S. mainstream media. After its #8 and #7 placements on The New York Times bestseller list shortly after its release, IABIED was named one of the best books of 2025 by The New Yorker, The Guardian, and Audible. Other highlights include Nate and Eliezer discussing the book on a long list of mainstream media channels (such as BBC and ABC), and Rep. Brad Sherman holding up a copy during a January congressional hearing.
Other updates & links
- This Saturday, March 21st, there will be a protest and march in San Francisco calling on the CEOs of Anthropic, OpenAI, and xAI to commit to pausing frontier AI R&D if every other major lab does the same. Several MIRI staff plan to attend, and Nate Soares plans to speak. The protest is organized by Stop the AI Race and will meet at 500 Howard Street at 12:00 PM. Register here.
- Eliezer & Nate joined other researchers to brief Senator Bernie Sanders on the dangers posed by superintelligent AI, after which the Senator released a video of their meeting. You may have seen the 2-minute highlight, which went viral on Twitter. You can also watch the full 9-minute clip below:
- In late February, Rep. Brad Sherman posted: “Whoever leads in AI may lead this century — but what if AI itself is in control? We’re spending trillions to make AI more powerful and almost nothing to ensure it remains controllable. I’m pushing legislation to change that.” The full announcement references If Anyone Builds It, Everyone Dies. This domestic bill seems like a positive sign that policymakers are beginning to recognize catastrophe-level risks, though we strongly believe international coordination is necessary.
- MIRI’s Max Harms went on the 80,000 Hours podcast to explain why superintelligent systems will almost certainly be misaligned, and discuss his research on corrigibility. Listen here.
- 80,000 Hours released a new AI in Context video covering the disaster scenario and core arguments from If Anyone Builds It, Everyone Dies. It’s a compelling watch that is likely to appeal to a wide audience. Watch below:
- Palisade Research put out a deep dive into the history of AI and the current state of the field. It focuses on how little we know about the inner workings of AI, and the risks this poses. Petr Lebedev (formerly of Veritasium) writes, directs, and hosts. Watch below:
- If there were an agreement to halt AI development, nations would need a way to verify compliance. We’ve recently published a plain-language summary of the Technical Governance Team’s report: “Mechanisms to Verify International Agreements About AI Development”, which covers three relevant policy goals: tracking the location of AI chips, ensuring they’re not being used for large-scale training, and evaluating the capabilities of AI models. Read it here.
- The MIRI communications team has grown dramatically, from 5 people to 13! With additional hands on deck, we’re planning to run a bunch of content experiments to raise awareness about extinction risk and build political will.
Stay tuned
We’ll soon be announcing a new way to get news and analysis on the state of the AI conversation! You’ll be able to subscribe directly to that channel, and we’ll also be continuing our regular newsletter on a quarterly basis.
In the meantime, let’s give The AI Doc a huge opening weekend!
Onward,
Alana Horowitz Friedman (one of the aforementioned new comms hires) and Rob Bensinger