Communications in Hard Mode

Uncategorized

Six months ago, I was a high school English teacher.

I wasn’t looking to change careers, even after nineteen sometimes-difficult years. I was good at it. I enjoyed it. After long experimentation, I had found ways to cut through the nonsense and provide real value to my students. Daily, I met my nemesis, Apathy, in glorious battle, and bested her with growing frequency. I had found my voice.

At MIRI, I’m still struggling to find my voice, for reasons my colleagues have invited me to share later in this post. But my nemesis is the same.

Apathy will be the death of us. Indifference about whether this whole AI thing goes well or ends in disaster. Come-what-may acceptance of whatever awaits us at the other end of the glittering path. Telling ourselves that there’s nothing we can do anyway. Imagining that some adults in the room will take care of the problem, even if we don’t see any such adults.

Perhaps you’ve felt her insidious pull on your psyche. I think we all have. This AI stuff is cool. Giving in to the “thermodynamic god”, to She-Who-Can’t-Be-Bothered, would be so much easier than the alternative, and probably a lot more fun (while it lasted).

And me? I was an English teacher. What could I do?

MIRI’s 2024 End-of-Year Update

MIRI Strategy

MIRI is a nonprofit research organization with a mission of addressing the most serious hazards posed by smarter-than-human artificial intelligence. In our general strategy update and communications strategy update earlier this year, we announced a new strategy that we’re executing on at MIRI, and several new teams we’re spinning up as a result. This post serves as a status update on where things stand at MIRI today.

We originally planned to run a formal fundraiser this year, our first since 2019. However, we find ourselves with just over two years of reserves, which is more than we've typically had going into past fundraisers.

Instead, we’ve written this brief update post, discussing recent developments at MIRI, projects that are in the works, and our basic funding status. In short: We aren’t in urgent need of more funds, but we do have a fair amount of uncertainty about what the funding landscape looks like now that MIRI is pursuing a very different strategy than we have in the past. Donations and expressions of donor interest may be especially useful over the next few months, to help us get a sense of how easy it will be to grow and execute on our new plans.

You can donate via our Donate page, or get in touch with us here. For more details, read on.

October 2024 Newsletter

Newsletters

September 2024 Newsletter

Newsletters

July 2024 Newsletter

Newsletters

June 2024 Newsletter

Newsletters

MIRI 2024 Communications Strategy

MIRI Strategy

As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy.

The Objective: Shut it Down [1]

Our objective is to convince major powers to shut down the development of frontier AI systems worldwide before it is too late. We believe that nothing less than this will prevent future misaligned smarter-than-human AI systems from destroying humanity. Persuading governments worldwide to take sufficiently drastic action will not be easy, but we believe this is the most viable path.

Policymakers deal mostly in compromise: they form coalitions by giving a little here to gain a little somewhere else. We are concerned that most legislation intended to keep humanity alive will go through the usual political processes and be ground down into ineffective compromises.

The only way we think we will get strong enough legislation is if policymakers actually get it: if they come to understand that building misaligned smarter-than-human systems will kill everyone, including their children. They will pass sufficiently strong laws, and enforce them, if and only if they come to understand this central truth.

Meanwhile, the clock is ticking. AI labs continue to invest in developing and training more powerful systems. We do not seem to be close to getting the sweeping legislation we need. So while we lay the groundwork for helping humanity to wake up, we also have a less dramatic request. We ask that governments and AI labs install the "off-switch" [2] so that if, on some future day, they decide to shut it all down, they will be able to do so.

We want humanity to wake up and take AI x-risk seriously. We do not want to shift the Overton window; we want to shatter it.

May 2024 Newsletter

Newsletters