All Posts
“If Anyone Builds It, Everyone Dies” release day! – September 16, 2025
A Note on AI for Medicine and Biotech – September 15, 2025
MIRI Newsletter #123 – July 3, 2025
IABIED: Advertisement design competition – July 1, 2025
A case for courage, when speaking of AI danger – June 26, 2025
So You Want to Work at a Frontier AI Lab – June 11, 2025
Thoughts on AI 2027 – April 9, 2025
MIRI Communications Team 2024 Recap – April 3, 2025
Takeover Not Required – March 14, 2025
MIRI Newsletter #121 – February 6, 2025
Communications in Hard Mode – December 13, 2024
MIRI’s 2024 End-of-Year Update – December 2, 2024
October 2024 Newsletter – October 29, 2024
September 2024 Newsletter – September 16, 2024
July 2024 Newsletter – July 10, 2024
June 2024 Newsletter – June 14, 2024
MIRI 2024 Communications Strategy – May 29, 2024
May 2024 Newsletter – May 14, 2024
April 2024 Newsletter – April 12, 2024
MIRI 2024 Mission and Strategy Update – January 4, 2024
Written statement of MIRI CEO Malo Bourgon to the AI Insight Forum – December 6, 2023
AI as a science, and three obstacles to alignment strategies – October 30, 2023
Announcing MIRI’s new CEO and leadership team – October 10, 2023
The basic reasons I expect AGI ruin – April 21, 2023
Misgeneralization as a misnomer – April 10, 2023
Deep Deceptiveness – March 21, 2023
Yudkowsky on AGI risk on the Bankless podcast – March 14, 2023
July 2022 Newsletter – July 30, 2022
AGI Ruin: A List of Lethalities – June 10, 2022
Shah and Yudkowsky on alignment failures – March 2, 2022
January 2022 Newsletter – January 31, 2022
December 2021 Newsletter – December 31, 2021
Ngo’s view on alignment difficulty – December 14, 2021
Conversation on technology forecasting and gradualism – December 9, 2021
More Christiano, Cotra, and Yudkowsky on AI progress – December 6, 2021
Shulman and Yudkowsky on AI progress – December 4, 2021
Biology-Inspired AGI Timelines: The Trick That Never Works – December 3, 2021
Visible Thoughts Project and Bounty Announcement – November 29, 2021
Christiano, Cotra, and Yudkowsky on AI progress – November 25, 2021
Yudkowsky and Christiano discuss “Takeoff Speeds” – November 22, 2021
Ngo and Yudkowsky on AI capability gains – November 18, 2021
Ngo and Yudkowsky on alignment difficulty – November 15, 2021
Discussion with Eliezer Yudkowsky on AGI interventions – November 11, 2021
November 2021 Newsletter – November 6, 2021
October 2021 Newsletter – October 7, 2021
September 2021 Newsletter – September 29, 2021
August 2021 Newsletter – August 31, 2021
July 2021 Newsletter – August 3, 2021
June 2021 Newsletter – July 1, 2021
Finite Factored Sets – May 23, 2021
May 2021 Newsletter – May 18, 2021
April 2021 Newsletter – May 2, 2021
March 2021 Newsletter – April 1, 2021
February 2021 Newsletter – March 2, 2021
January 2021 Newsletter – January 27, 2021
December 2020 Newsletter – December 30, 2020
2020 Updates and Strategy – December 21, 2020
November 2020 Newsletter – November 30, 2020
October 2020 Newsletter – October 23, 2020
September 2020 Newsletter – September 10, 2020
August 2020 Newsletter – August 13, 2020
July 2020 Newsletter – July 8, 2020
June 2020 Newsletter – June 8, 2020
May 2020 Newsletter – May 29, 2020
April 2020 Newsletter – May 1, 2020
MIRI’s largest grant to date! – April 27, 2020
March 2020 Newsletter – April 1, 2020
February 2020 Newsletter – February 23, 2020
Our 2019 Fundraiser Review – February 13, 2020
January 2020 Newsletter – January 15, 2020
December 2019 Newsletter – December 5, 2019
MIRI’s 2019 Fundraiser – December 2, 2019
Giving Tuesday 2019 – November 28, 2019
November 2019 Newsletter – November 25, 2019
October 2019 Newsletter – October 25, 2019
September 2019 Newsletter – September 30, 2019
August 2019 Newsletter – August 6, 2019
July 2019 Newsletter – July 19, 2019
New paper: “Risks from learned optimization” – June 7, 2019
June 2019 Newsletter – June 1, 2019
2018 in review – May 31, 2019
May 2019 Newsletter – May 10, 2019
New paper: “Delegative reinforcement learning” – April 24, 2019
April 2019 Newsletter – April 21, 2019
New grants from the Open Philanthropy Project and BERI – April 1, 2019
March 2019 Newsletter – March 14, 2019
Applications are open for the MIRI Summer Fellows Program! – March 10, 2019
A new field guide for MIRIx – March 9, 2019
February 2019 Newsletter – February 25, 2019
Thoughts on Human Models – February 22, 2019
Our 2018 Fundraiser Review – February 11, 2019
January 2019 Newsletter – January 31, 2019
December 2018 Newsletter – December 16, 2018
Announcing a new edition of “Rationality: From AI to Zombies” – December 15, 2018
2017 in review – November 28, 2018
November 2018 Newsletter – November 26, 2018
2018 Update: Our New Research Directions – November 22, 2018
Embedded Curiosities – November 8, 2018
Subsystem Alignment – November 6, 2018
Robust Delegation – November 4, 2018
Embedded World-Models – November 2, 2018
Decision Theory – October 31, 2018
October 2018 Newsletter – October 29, 2018
The Rocket Alignment Problem – October 3, 2018
September 2018 Newsletter – September 30, 2018
Summer MIRI Updates – September 1, 2018
August 2018 Newsletter – August 27, 2018
July 2018 Newsletter – July 25, 2018
New paper: “Forecasting using incomplete models” – June 27, 2018
June 2018 Newsletter – June 23, 2018
May 2018 Newsletter – May 31, 2018
April 2018 Newsletter – April 10, 2018
2018 research plans and predictions – March 31, 2018
New paper: “Categorizing variants of Goodhart’s Law” – March 27, 2018
March 2018 Newsletter – March 25, 2018
Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink” – February 28, 2018
February 2018 Newsletter – February 25, 2018
January 2018 Newsletter – January 28, 2018
Fundraising success! – January 10, 2018
End-of-the-year matching challenge! – December 14, 2017
ML Living Library Opening – December 12, 2017
A reply to Francois Chollet on intelligence explosion – December 6, 2017
MIRI’s 2017 Fundraiser – December 1, 2017
Security Mindset and the Logistic Success Curve – November 26, 2017
Security Mindset and Ordinary Paranoia – November 25, 2017
Announcing “Inadequate Equilibria” – November 16, 2017
A major grant from the Open Philanthropy Project – November 8, 2017
November 2017 Newsletter – November 3, 2017
New paper: “Functional Decision Theory” – October 22, 2017
AlphaGo Zero and the Foom Debate – October 20, 2017
October 2017 Newsletter – October 16, 2017
There’s No Fire Alarm for Artificial General Intelligence – October 13, 2017
September 2017 Newsletter – September 24, 2017
New paper: “Incorrigibility in the CIRL Framework” – August 31, 2017
August 2017 Newsletter – August 16, 2017
July 2017 Newsletter – July 25, 2017
Updates to the research team, and a major donation – July 4, 2017
June 2017 Newsletter – June 16, 2017
May 2017 Newsletter – May 10, 2017
2017 Updates and Strategy – April 30, 2017
Decisions are for making bad outcomes inconsistent – April 7, 2017
April 2017 Newsletter – April 6, 2017
Two new researchers join MIRI – March 31, 2017
2016 in review – March 28, 2017
New paper: “Cheating Death in Damascus” – March 18, 2017
March 2017 Newsletter – March 15, 2017
Using machine learning to address AI risk – February 28, 2017
February 2017 Newsletter – February 16, 2017
CHCAI/MIRI research internship in AI safety – February 11, 2017
New paper: “Toward negotiable reinforcement learning” – January 25, 2017
Response to Cegłowski on superintelligence – January 13, 2017
January 2017 Newsletter – January 4, 2017
New paper: “Optimal polynomial-time estimators” – December 31, 2016
AI Alignment: Why It’s Hard, and Where to Start – December 28, 2016
December 2016 Newsletter – December 13, 2016
November 2016 Newsletter – November 20, 2016
Post-fundraiser update – November 11, 2016
White House submissions and report on AI safety – October 20, 2016
MIRI AMA, and a talk on logical induction – October 11, 2016
October 2016 Newsletter – October 9, 2016
CSRBAI talks on agent models and multi-agent dilemmas – October 6, 2016
MIRI’s 2016 Fundraiser – September 16, 2016
New paper: “Logical induction” – September 12, 2016
Grant announcement from the Open Philanthropy Project – September 6, 2016
September 2016 Newsletter – September 3, 2016
CSRBAI talks on preference specification – August 30, 2016
CSRBAI talks on robustness and error-tolerance – August 15, 2016
MIRI strategy update: 2016 – August 5, 2016
August 2016 Newsletter – August 3, 2016
2016 summer program recap – August 2, 2016
2015 in review – July 29, 2016
Submission to the OSTP on AI outcomes – July 23, 2016
July 2016 Newsletter – July 5, 2016
June 2016 Newsletter – June 12, 2016
New paper: “Safely interruptible agents” – June 1, 2016
May 2016 Newsletter – May 13, 2016
New papers dividing logical uncertainty into two subproblems – April 21, 2016
April 2016 Newsletter – April 11, 2016
MIRI has a new COO: Malo Bourgon – March 30, 2016
Announcing a new colloquium series and fellows program – March 28, 2016
March 2016 Newsletter – March 5, 2016
John Horgan interviews Eliezer Yudkowsky – March 2, 2016
New paper: “Defining human values for value learners” – February 29, 2016
February 2016 Newsletter – February 6, 2016
End-of-the-year fundraiser and grant successes – January 12, 2016
January 2016 Newsletter – January 3, 2016
Safety engineering, target selection, and alignment theory – December 31, 2015
The need to scale MIRI’s methods – December 23, 2015
Jed McCaleb on Why MIRI Matters – December 15, 2015
OpenAI and other news – December 11, 2015
New paper: “Proof-producing reflection for HOL” – December 4, 2015
December 2015 Newsletter – December 3, 2015
MIRI’s 2015 Winter Fundraiser! – December 1, 2015
New paper: “Quantilizers” – November 29, 2015
New paper: “Formalizing convergent instrumental goals” – November 26, 2015
November 2015 Newsletter – November 3, 2015
Edge.org contributors discuss the future of AI – November 1, 2015
New report: “Leó Szilárd and the Danger of Nuclear Weapons” – October 7, 2015
October 2015 Newsletter – October 3, 2015
New paper: “Asymptotic logical uncertainty and the Benford test” – September 30, 2015
September 2015 Newsletter – September 14, 2015
Our summer fundraising drive is complete! – September 1, 2015
Final fundraiser day: Announcing our new team – August 31, 2015
AI and Effective Altruism – August 28, 2015
Powerful planners, not sentient software – August 18, 2015
What Sets MIRI Apart? – August 14, 2015
Assessing our past and potential impact – August 10, 2015
Target 3: Taking It To The Next Level – August 7, 2015
When AI Accelerates AI – August 3, 2015
August 2015 Newsletter – August 2, 2015
A new MIRI FAQ, and other announcements – July 31, 2015
MIRI’s Approach – July 27, 2015
Four Background Claims – July 24, 2015
Why Now Matters – July 20, 2015
Targets 1 and 2: Growing MIRI – July 18, 2015
MIRI’s 2015 Summer Fundraiser! – July 17, 2015
An Astounding Year – July 16, 2015
July 2015 Newsletter – July 5, 2015
Grants and fundraisers – July 1, 2015
June 2015 Newsletter – June 1, 2015
Introductions – May 31, 2015
Two papers accepted to AGI-15 – May 29, 2015
A fond farewell and a new Executive Director – May 6, 2015
May 2015 Newsletter – May 1, 2015
New papers on reflective oracles and agents – April 28, 2015
April 2015 newsletter – April 1, 2015
Recent AI control brainstorming by Stuart Armstrong – March 27, 2015
2014 in review – March 22, 2015
Rationality: From AI to Zombies – March 12, 2015
Bill Hibbard on Ethical Artificial Intelligence – March 9, 2015
March 2015 newsletter – March 1, 2015
Davis on AI capability and motivation – February 6, 2015
New annotated bibliography for MIRI’s technical agenda – February 5, 2015
New mailing list for MIRI math/CS papers only – February 3, 2015
February 2015 Newsletter – February 1, 2015
New report: “The value learning problem” – January 29, 2015
New report: “Formalizing Two Problems of Realistic World Models” – January 22, 2015
An improved “AI Impacts” website – January 11, 2015
New report: “Questions of reasoning under logical uncertainty” – January 9, 2015
Brooks and Searle on AI volition and timelines – January 8, 2015
Matthias Troyer on Quantum Computers – January 7, 2015
January 2015 Newsletter – January 1, 2015
Our new technical research agenda overview – December 23, 2014
2014 Winter Matching Challenge Completed! – December 18, 2014
New report: “Computable probability distributions which converge…” – December 16, 2014
New paper: “Concept learning for safe autonomous AI” – December 5, 2014
December newsletter – December 1, 2014
Three misconceptions in Edge.org’s conversation on “The Myth of AI” – November 18, 2014
Video of Bostrom’s talk on Superintelligence at UC Berkeley – November 6, 2014
MIRI’s November Newsletter – November 1, 2014
The Financial Times story on MIRI – October 31, 2014
New report: “UDT with known search order” – October 30, 2014
Singularity2014.com appears to be a fake – October 27, 2014
New paper: “Corrigibility” – October 18, 2014
AGI outcomes and civilizational competence – October 16, 2014
Nate Soares’ talk: “Why ain’t you rich?” – October 7, 2014
MIRI’s October Newsletter – October 1, 2014
Kristinn Thórisson on constructivist AI – September 14, 2014
Nate Soares speaking at Purdue University – September 12, 2014
Ken Hayworth on brain emulation prospects – September 9, 2014
Friendly AI Research Help from MIRI – September 8, 2014
John Fox on AI safety – September 4, 2014
MIRI’s September Newsletter – September 1, 2014
Superintelligence reading group – August 31, 2014
New paper: “Exploratory engineering in artificial intelligence” – August 22, 2014
2014 Summer Matching Challenge Completed! – August 15, 2014
MIRI’s recent effective altruism talks – August 11, 2014
Groundwork for AGI safety engineering – August 4, 2014
MIRI’s August 2014 newsletter – August 1, 2014
Scott Frickel on intellectual movements – July 28, 2014
2014 Summer Matching Challenge! – July 21, 2014
MIRI’s July 2014 newsletter – July 1, 2014
Our mid-2014 strategic plan – June 11, 2014
MIRI’s June 2014 Newsletter – June 1, 2014
Milind Tambe on game theory in security applications – May 30, 2014
Lennart Beringer on the Verified Software Toolchain – May 27, 2014
Johann Schumann on high-assurance systems – May 24, 2014
Sandor Veres on autonomous agents – May 23, 2014
Michael Fisher on verifying autonomous systems – May 9, 2014
Kasper Stoy on self-reconfigurable robots – May 2, 2014
MIRI’s May 2014 Newsletter – May 1, 2014
Ruediger Schack on quantum Bayesianism – April 29, 2014
David J. Atkinson on autonomous systems – April 28, 2014
Help MIRI in a Massive 24-Hour Fundraiser on May 6th – April 25, 2014
Dave Doty on algorithmic self-assembly – April 23, 2014
Suzana Herculano-Houzel on cognitive ability and brain size – April 22, 2014
Why MIRI? – April 20, 2014
Thomas Bolander on self-reference and agent introspection – April 13, 2014
Jonathan Millen on covert channel communication – April 12, 2014
Wolf Kohn on hybrid systems control – April 11, 2014
MIRI’s April 2014 Newsletter – April 10, 2014
Will MacAskill on normative uncertainty – April 8, 2014
Erik DeBenedictis on supercomputing – April 3, 2014
2013 in Review: Fundraising – April 2, 2014
Lyle Ungar on forecasting – March 26, 2014
Randal Koene on whole brain emulation – March 20, 2014
Max Tegmark on the mathematical universe – March 19, 2014
MIRI’s March 2014 Newsletter – March 18, 2014
Recent Hires at MIRI – March 13, 2014
Toby Walsh on computational social choice – March 10, 2014
Randall Larsen and Lynne Kidder on USA bio-response – March 9, 2014
John Ridgway on safety-critical systems – March 8, 2014
David Cook on the VV&A process – March 7, 2014
Robert Constable on correct-by-construction programming – March 2, 2014
The world’s distribution of computation (initial findings) – February 28, 2014
Nik Weaver on Paradoxes of Rational Agency – February 24, 2014
MIRI’s May 2014 Workshop – February 22, 2014
Conversation with Holden Karnofsky about Future-Oriented Philanthropy – February 21, 2014
2013 in Review: Friendly AI Research – February 18, 2014
MIRI’s February 2014 Newsletter – February 17, 2014
André Platzer on Verifying Cyber-Physical Systems – February 15, 2014
Gerwin Klein on Formal Methods – February 11, 2014
2013 in Review: Strategic and Expository Research – February 8, 2014
MIRI’s Experience with Google Adwords – February 6, 2014
Careers at MIRI – February 3, 2014
Robust Cooperation: A Case Study in Friendly AI Research – February 1, 2014
Two MIRI talks from AGI-11 – January 31, 2014
Emil Vassev on Formal Verification – January 30, 2014
How Big is the Field of Artificial Intelligence? (initial findings) – January 28, 2014
Existential Risk Strategy Conversation with Holden Karnofsky – January 27, 2014
2013 in Review: Outreach – January 20, 2014
Want to help MIRI by investing in XRP? – January 18, 2014
MIRI’s January 2014 Newsletter – January 17, 2014
MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei – January 13, 2014
Kathleen Fisher on High-Assurance Systems – January 10, 2014
Donor Story #1: Noticing Inferential Distance – January 5, 2014
7 New Technical Reports, and a New Paper – December 31, 2013
Winter 2013 Fundraiser Completed! – December 26, 2013
Josef Urban on Machine Learning and Automated Reasoning – December 21, 2013
2013 in Review: Operations – December 20, 2013
New Paper: “Why We Need Friendly AI” – December 18, 2013
MIRI’s December 2013 Newsletter – December 16, 2013
Scott Aaronson on Philosophical Progress – December 13, 2013
2013 Winter Matching Challenge – December 2, 2013
New Paper: “Predicting AGI: What can we say when we know so little?” – December 1, 2013
New Paper: “Racing to the Precipice” – November 27, 2013
MIRI’s November 2013 Newsletter – November 18, 2013
Support MIRI by Shopping at AmazonSmile – November 6, 2013
Greg Morrisett on Secure and Reliable Systems – November 5, 2013
From Philosophy to Math to Engineering – November 4, 2013
Robin Hanson on Serious Futurism – November 1, 2013
New Paper: “Embryo Selection for Cognitive Enhancement” – October 30, 2013
Markus Schmidt on Risks from Novel Biotechnologies – October 28, 2013
Bas Steunebrink on Self-Reflective Programming – October 25, 2013
Probabilistic Metamathematics and the Definability of Truth – October 23, 2013
Hadi Esmaeilzadeh on Dark Silicon – October 21, 2013
Russell and Norvig on Friendly AI – October 19, 2013
Richard Posner on AI Dangers – October 18, 2013
MIRI’s October Newsletter – October 12, 2013
Upcoming Talks at Harvard and MIT – October 1, 2013
Paul Rosenbloom on Cognitive Architectures – September 25, 2013
Effective Altruism and Flow-Through Effects – September 14, 2013
How well will policy-makers handle AGI? (initial findings) – September 12, 2013
MIRI’s September Newsletter – September 10, 2013
Laurent Orseau on Artificial General Intelligence – September 6, 2013
Five Theses, Using Only Simple Words – September 5, 2013
How effectively can we plan for future decades? (initial findings) – September 4, 2013
Stephen Hsu on Cognitive Genomics – August 31, 2013
MIRI’s November 2013 Workshop in Oxford – August 30, 2013
Transparency in Safety-Critical Systems – August 25, 2013
2013 Summer Matching Challenge Completed! – August 21, 2013
Luke at Quixey on Tuesday (Aug. 20th) – August 16, 2013
August Newsletter: New Research and Expert Interviews – August 13, 2013
What is AGI? – August 11, 2013
“Algorithmic Progress in Six Domains” Released – August 2, 2013
AI Risk and the Security Mindset – July 31, 2013
Index of Transcripts – July 25, 2013
MIRI’s December 2013 Workshop – July 24, 2013
Nick Beckstead on the Importance of the Far Future – July 17, 2013
Roman Yampolskiy on AI Safety Engineering – July 15, 2013
James Miller on Unusual Incentives Facing AGI Companies – July 12, 2013
MIRI’s July Newsletter: Fundraiser and New Papers – July 11, 2013
2013 Summer Matching Challenge! – July 8, 2013
What is Intelligence? – June 19, 2013
MIRI’s July 2013 Workshop – June 7, 2013
New Research Page and Two New Articles – June 6, 2013
Friendly AI Research as Effective Altruism – June 5, 2013
New Transcript: Yudkowsky and Aaronson – May 29, 2013
When Will AI Be Created? – May 15, 2013
AGI Impact Experts and Friendly AI Experts – May 1, 2013
“Intelligence Explosion Microeconomics” Released – April 29, 2013
“Singularity Hypotheses” Published – April 25, 2013
Altair’s Timeless Decision Theory Paper Published – April 19, 2013
MIRI’s Strategy for 2013 – April 13, 2013
The Lean Nonprofit – April 4, 2013
Early draft of naturalistic reflection paper – March 22, 2013
March Newsletter – March 7, 2013
Welcome to Intelligence.org – February 28, 2013
We are now the “Machine Intelligence Research Institute” (MIRI) – January 30, 2013
2012 Winter Matching Challenge a Success! – January 20, 2013
December 2012 Newsletter – December 19, 2012
2012 Winter Matching Challenge! – December 6, 2012
November 2012 Newsletter – November 7, 2012
September 2012 Newsletter – September 21, 2012
August 2012 Newsletter – August 21, 2012
July 2012 Newsletter – August 6, 2012
2012 Summer Singularity Challenge Success! – July 30, 2012
2012 Summer Singularity Challenge – July 3, 2012
2011-2012 Winter Fundraiser Completed – February 20, 2012
2011 Machine Intelligence Research Institute Winter Fundraiser – December 27, 2011
Interview with New MIRI Research Fellow Luke Muehlhauser – September 15, 2011
2011 Summer Matching Challenge Success! – September 1, 2011
Machine Intelligence Research Institute Strategic Plan 2011 – August 26, 2011
New Intelligence Explosion Website – August 7, 2011
Announcing the $125,000 Summer Singularity Challenge – July 22, 2011
Tallinn-Evans Challenge Grant Success! – January 20, 2011
Announcing the Tallinn-Evans $125,000 Singularity Challenge – December 21, 2010
2010 Singularity Research Challenge Fulfilled! – March 1, 2010
Announcing the 2010 Singularity Research Challenge – December 23, 2009
Introducing Myself – February 16, 2009
Three Major Singularity Schools – September 30, 2007
The Power of Intelligence – July 10, 2007