
MIRI’s 2025 Fundraiser

MIRI is running its first fundraiser in six years, targeting $6M. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward.

 

Donate Today

 


 

MIRI is a nonprofit whose goal is to help humanity make smart and sober decisions about smarter-than-human AI.

Our main focus from 2000 to ~2022 was on technical research to try to make it possible to build such AIs without catastrophic outcomes. More recently, we’ve pivoted to raising an alarm about how the race to superintelligent AI has put humanity on course for disaster.

In 2025, those efforts centered on Eliezer Yudkowsky and Nate Soares’s book If Anyone Builds It, Everyone Dies (now a New York Times bestseller), with many public appearances by the authors; many conversations with policymakers; the release of an expansive online supplement to the book; and various technical governance publications, including a recent report containing a draft of the kind of international agreement that could actually address the danger of superintelligence.

Millions of people have now watched interviews and appearances featuring Eliezer, Nate, or both, and the possibility of rogue superintelligence and core ideas like “grown, not crafted” are increasingly part of the public discourse. But there is still a great deal to be done if the world is to respond to this issue effectively.

In 2026, we plan to expand our efforts, hire more people, and try a range of experiments to alert people to the danger of superintelligence and help them make a difference.

To support these efforts, we’ve set a fundraising target of $6M ($4.4M from donors plus 1:1 matching on the first $1.6M raised, thanks to a $1.6M matching grant), with a stretch target of $10M ($8.4M from donors plus $1.6M matching).

Donate here, or read on to learn more.


The Big Picture

As stated in If Anyone Builds It, Everyone Dies:

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case, in the hope of rallying enough key decision-makers and regular people to take AI seriously. The default outcome is lethal, but the situation is not hopeless; machine superintelligence doesn’t exist yet, and its creation can yet be prevented.

The leading AI labs are explicitly rushing to create superintelligence. It looks to us like the world needs to stop this race, and that this will require international coordination. MIRI houses two teams working towards that end:

  1. A communications team working to alert the world to the situation.
  2. A governance team working to help policymakers identify and implement a response.

Activities

Communications

If Anyone Builds It, Everyone Dies has been the main recent focus of the communications team. We spent substantial time and effort preparing for publication, executing the launch, and engaging with the public via interviews and media appearances.

The book made a significant splash: it became a New York Times bestseller and drew substantial media attention.

The end goal is not media coverage, but a world in which people understand the basic situation and are responding in a reasonable, adequate way. It seems early to confidently assess the book’s impact, but we see promising signs.

The possibility of rogue superintelligence is now routinely mentioned in mainstream coverage of the AI industry. In our own conversations with friends and strangers alike, we’re finding that people are generally much more aware of the issue and are taking it more seriously. Our sense is that as people hear about the problem through their own trusted channels, they become more receptive to these concerns.

Our conversations with policymakers feel meaningfully more productive today than they did a year ago, and we have been told by various U.S. Members of Congress that the book had a valuable impact on their thinking. It remains to be seen how much this translates into action. And there is still a long way to go before world leaders start coordinating an international response to this suicide race.

Today, the MIRI comms team comprises roughly seven full-time employees (if we include Nate and Eliezer). In 2026, we’re planning to grow the team. For example:

  • We need someone whose job is to track AI developments and how the global conversation is responding to those developments, and help coordinate a response.
  • We need someone to assess and measure the effectiveness of various types of communications and arguments, and notice what’s working and what’s not.
  • We need someone to track and maintain relationships with various colleagues and allies (such as neighboring organizations, safety teams at the labs, journalist contacts, and so on) and make sure the right resources are being deployed at the right times.

We will be making a hiring announcement soon, with more detail about the comms team’s specific models and plans. We are presently unsure (in part due to funding constraints/budgetary questions!) whether we will be hiring one or two new comms team members, or many more.

Going into 2026, we expect to focus less on producing new content, and more on using our existing library of content to support third parties who are raising the alarm about superintelligence for their own audiences. We also expect to spend more time responding to news developments and taking advantage of opportunities to reach new audiences.

Governance

Our governance strategy primarily involves:

  1. Figuring out solutions, from high-level plans to granular details, for how to effectively halt the development of superintelligence.
  2. Engaging with policymakers, think tanks, and others who are interested in developing and implementing a response to the growing dangers.

There’s a ton of work still to be done. To date, the MIRI Technical Governance Team (TGT) has mainly focused on high-level questions such as “Would it even be possible to monitor AI compute relevant to frontier AI development?” and “What would an international halt to the superintelligence race look like?” We’re only just beginning to transition to more concrete specifics, such as A Tentative Draft of a Treaty, with Annotations, which we published on the book website to coincide with the book’s release, and the draft international agreement that followed.

We plan to push this a lot further, and work towards answering questions like:

  • What, exactly, are the steps that could be taken today, assuming different levels of political will?
  • If there is will for chip monitoring and verification, what are the immediate possible legislative next steps? What are the tradeoffs between the options?
  • Technologically, what are the immediate possible next steps for, e.g., enabling tamper-proof chip usage verification? What are the exact legislative steps that would require this verification?

We need to extend that earlier work into concrete, tractable, shovel-ready packages that can be handed directly to concerned politicians and leaders (whose ranks grow by the day).

To accelerate this work, MIRI is looking to support and hire individuals with relevant policy experience, writers capable of making dense technical concepts accessible and engaging, and self-motivated and competent researchers.1

We’re also keen to add additional effective spokespeople and ambassadors to the MIRI team, and to free up more hours for those spokespeople who are already proving effective. Thus far, the bulk of our engagement with policymakers and national security professionals has been done either by our CEO (Malo Bourgon), our President (Nate Soares), or the TGT researchers themselves. That work is paying dividends, but there’s room for a larger team to do much, much more.

In our conversations to date, we’ve already heard that folks in government and at think tanks are finding TGT’s write-ups insightful and useful, with some describing the work as top-of-its-class. TGT’s recent outputs and activities include:

  • In addition to collaborating with Nate, Eliezer, and others to produce the treaty draft, the TGT has further developed this document into a draft international agreement, along with a collection of supplementary posts that expand on various points.
  • The team published a research agenda earlier this year. Much of their work (to date and going forward) falls under this agenda, which is further explored in a number of papers digging into various specifics. TGT has also participated in relevant conferences and workshops, and has been supervising and mentoring junior researchers through external programs.
  • TGT regularly provides input on RFCs and RFIs from various governmental bodies, and engages with individuals in governments and elsewhere through meetings, briefings, and papers.
  • Current efforts are mostly focused on the U.S. federal government, but not exclusively. For example, in 2024 and 2025, TGT participated in the EU AI Act Code of Practice Working Groups, working to make EU regulations more likely to be relevant to misalignment risks from advanced AI. Just four days ago, Malo was invited to provide testimony to a committee of the Canadian House of Commons, and TGT researcher Aaron Scher was invited to speak to the UN Secretary-General’s Scientific Advisory Board on AI verification as part of an expert panel.

The above isn’t an exhaustive description of what everyone at MIRI is doing; e.g., we continue to support a small amount of in-house technical alignment research.

As noted above, we expect to make hiring announcements in the coming weeks and months, outlining the roles we’re hoping to add to the team. But if your interest has already been piqued by the general descriptions above, you’re welcome to reach out to contact@intelligence.org. For more updates, you can subscribe to our newsletter or periodically check our careers pages (MIRI-wide, TGT-specific).


Fundraising

Our goal at MIRI is to have at least two years’ worth of reserves on hand. This enables us to plan more confidently: hire new staff, spin up teams and projects with long time horizons, and balance the need to fundraise with other organizational priorities. Thanks to generous support we received in 2020 and 2021, we didn’t need to run any fundraisers in the last six years.

We expect to hit December 31st having spent approximately $7.1M this year (similar to recent years2), and with $10M in reserves if we raise no additional funds.3

Going into 2026, our budget projections have a median of $8M4, assuming some growth and some large projects, with large error bars reflecting our uncertainty about how much growth and how many large projects we take on. On the upper end of our projections, our expenses would hit upwards of $10M/yr.

Thus, our expected end-of-year reserves put us $6M shy of our two-year reserve target of $16M.

This year, we received a $1.6M matching grant from the Survival and Flourishing Fund, which means that the first $1.6M we receive in donations before December 31st will be matched 1:1. We will only receive the grant funds if they are matched by donations.

Therefore, our fundraising target is $6M ($4.4M from donors plus 1:1 matching on the first $1.6M raised). This will put us in a good place going into 2026 and 2027, with a modest amount of room to grow.
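For readers who want the arithmetic spelled out, here is a minimal sketch of how that target falls out of the figures above; the variable names are ours, and the numbers are the rough projections quoted in this post rather than exact accounting figures.

```python
# Back-of-the-envelope arithmetic behind the $6M fundraising target,
# using the approximate figures quoted in this post (all values in $M).

median_2026_budget = 8.0                  # median projected 2026 expenses
reserve_target = 2 * median_2026_budget   # two years of reserves -> $16M
expected_reserves = 10.0                  # projected reserves at the end of 2025

fundraising_target = reserve_target - expected_reserves   # $6M shortfall

matching_cap = 1.6                                # SFF grant matches the first $1.6M of donations 1:1
from_donors = fundraising_target - matching_cap   # $4.4M must come directly from donors

print(f"Target: ${fundraising_target:.1f}M "
      f"(${from_donors:.1f}M from donors + ${matching_cap:.1f}M matched)")
```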

It’s an ambitious goal and will require a major increase in donor support, but this work strikes us as incredibly high-priority, and the next few years may be an especially important window of opportunity. A great deal has changed in the world over the past few years. We don’t know how many of our past funders will also support our comms and governance efforts, or how many new donors may step in to help. This fundraiser is therefore especially important for informing our future plans.

We also have a stretch target of $10M ($8.4M from donors plus the first $1.6M matched). This would allow us to move much more quickly on pursuing new hires and new projects, embarking on a wide variety of experiments while still maintaining two years of runway.

For more information or assistance on ways to donate, view our Donate page or contact development@intelligence.org.


The default outcome of the development of superintelligence is lethal, but the situation is not hopeless; superintelligence doesn’t exist yet, and humanity has the ability to hit the brakes.

With your support, MIRI can continue fighting the good fight.

Donate Today


  1. In addition to growing our team, we plan to do more mentoring of new talent who might go on to contribute to TGT's research agenda, or who might contribute to the field of technical governance more broadly.
  2. Our yearly expenses in 2019–2024 ranged from $5.4M to $7.7M, with the high point in 2020 (when our team was at its largest), and the low point in 2022 (after scaling back).
  3. It’s worth noting that despite the success of the book, book sales will not be a source of net income for us. As the authors noted prior to the book’s release, “unless the book dramatically exceeds our expectations, we won’t ever see a dime”. From MIRI’s perspective, the core function of the book is to try to raise an alarm and spur the world to action, not to make money; even with the book’s success to date, the costs to produce and promote the book have far exceeded any income.
  4. Our projected expenses are roughly evenly split between Operations, Outreach, and Research, where our communications efforts fall under Outreach and our governance efforts largely fall under Research (with some falling under Outreach). Our median projection breaks down as follows: $2.6M for Operations ($1.3M people costs, $1.2M cost of doing business), $3.2M Outreach ($2M people costs, $1.2M programs), and $2.3M Research ($2.1M people costs, $0.2M programs). This projection includes roughly $0.6–1M in new people costs (full-time-equivalents, i.e., assuming the people are not all hired on January 1st).

    Note that the above is an oversimplified summary; it's useful for high-level takeaways, but for the sake of brevity, we've left out a lot of caveats, details, and explanations.