Technical Governance Team
Technical research to inform better AI governance
We are a team at MIRI focused on technical research and analysis in service of AI governance goals: avoiding catastrophic and extinction risks, and ensuring that humanity successfully navigates the development of smarter-than-human AI.
Recent research
Response to BIS AI Reporting Requirements RFC
We respond to the BIS Request for Comment on the Proposed Rule for Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters.
Comments on US AISI Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1)
We provide comments on the document Managing Misuse Risk for Dual-Use Foundation Models from US AISI. We argue that this guidance should emphasize the uncertainty in AI capability evaluations and use less optimistic language.
Comments on NIST AI RMF: Generative Artificial Intelligence Profile (NIST AI 600-1)
We provide comments on the NIST AI Risk Management Framework profile for generative AI. We suggest including risks from misaligned AI systems, and offer several concrete actions which could be added to the profile.
Technical Governance team mission
AI systems are rapidly becoming more capable, and fundamental safety problems remain unsolved. Our goal is to increase the probability that humanity can safely navigate the transition to a world with smarter-than-human AI, focusing on technical research in service of governance goals.
1. Coordination
Strengthen international coordination to allow for effective international agreements and reduce dangerous race dynamics.
2. Security
Establish robust security standards and practices for frontier AI development, to reduce harms from model misalignment, misuse, and proliferation.
3. Development Safeguards
Ensure that dangerous AI development could be shut down if there were broad international agreement on the need for this.
4. Regulation
Improve and develop domestic and international regulation of frontier AI, to prepare for coming risks and to identify when current safety plans are likely to fail.
Frequently asked questions
Common questions on AI governance, collaboration, and future risks
The main introductory resource we recommend is our Problem overview, which is also available as a two-page executive summary.
Some other explainers we’ve found useful include:
- Short introductions from MIRI: our TED talk (11m watch) and op-ed in TIME (11m read).
- Longer introductions from MIRI: a video interview with Bloomberg (24m watch), aimed at a more general audience; and AGI Ruin (28m read), aimed at a more technical audience.
- In the Financial Times: “We must slow down the race to God-like AI” (17m read). Some shorter articles include pieces by CNN and the New York Times.
- On Medium: “Preventing Extinction from Superintelligence” (10m read).
At a high level:
- Prioritize communication that scales well. For example, a podcast that gets heard by hundreds of people may be a better use of time than a private one-on-one conversation.
- Keep in mind that building a large coalition is likely to be more useful than building a small but passionate one. For politicians to act effectively on this issue, there will likely need to be broad bipartisan and international support. Partisan polarization and entrenchment would likely be very harmful, and the way individuals talk about AI risk can help shape whether the larger conversation ends up productive and substantive or devolves into a partisan tug-of-war.
- Be wary of the fallacy “Something must be done, and this is something, so this must be done.” Think through which actions are actually likely to be the most useful.
- Don’t make things worse. Unethical or manipulative conduct is a lot more likely to backfire than to help your cause.
- Prioritize candor, openness, and honesty in trying to communicate about catastrophic AI risk. Ask questions about what others believe, and prioritize truth-seeking and integrity over winning every local argument. Just because the stakes are high, and time is of the essence, doesn’t mean that deception is more useful than honesty for putting the world in a marginally better position to prevent disaster.
In short: Do the obvious things you’d do if you were having a respectful conversation with people who are new to a topic, people who disagree with you, etc. Make mistakes, learn things, and be bold. Do the things that are likely to actually work, not just the things that sound good on paper. The stakes are too high for anything less than that.