The Rocket Alignment Problem

The following is a fictional dialogue building off of “AI Alignment: Why It’s Hard, and Where to Start.”


 

(Somewhere in a not-very-near neighboring world, where science took a very different course…)

 

ALFONSO:  Hello, Beth. I’ve noticed a lot of speculations lately about “spaceplanes” being used to attack cities, or possibly becoming infused with malevolent spirits that inhabit the celestial realms so that they turn on their own engineers.

I’m rather skeptical of these speculations. Indeed, I’m a bit skeptical that airplanes will be able to even rise as high as stratospheric weather balloons anytime in the next century. But I understand that your institute wants to address the potential problem of malevolent or dangerous spaceplanes, and that you think this is an important present-day cause.

BETH:  That’s… really not how we at the Mathematics of Intentional Rocketry Institute would phrase things.

The problem of malevolent celestial spirits is what all the news articles are focusing on, but we think the real problem is something entirely different. We’re worried that there’s a difficult, theoretically challenging problem which modern-day rocket punditry is mostly overlooking. We’re worried that if you aim a rocket at where the Moon is in the sky, and press the launch button, the rocket may not actually end up at the Moon.

ALFONSO:  I understand that it’s very important to design fins that can stabilize a spaceplane’s flight in heavy winds. That’s important spaceplane safety research and someone needs to do it.

But if you were working on that sort of safety research, I’d expect you to be collaborating tightly with modern airplane engineers to test out your fin designs, to demonstrate that they are actually useful.

BETH:  Aerodynamic designs are important features of any safe rocket, and we’re quite glad that rocket scientists are working on these problems and taking safety seriously. That’s not the sort of problem that we at MIRI focus on, though.

ALFONSO:  What’s the concern, then? Do you fear that spaceplanes may be developed by ill-intentioned people?

BETH:  That’s not the failure mode we’re worried about right now. We’re more worried that right now, nobody can tell you how to point your rocket’s nose such that it goes to the Moon, nor indeed to any other prespecified celestial destination. Whether Google or the US Government or North Korea is the one to launch the rocket won’t make a pragmatic difference to the probability of a successful Moon landing, from our perspective, because right now nobody knows how to aim any kind of rocket anywhere.

September 2018 Newsletter

Summer MIRI Updates

In our last major updates (our 2017 strategic update and fundraiser posts), we said that our focus is now on technical research and on executing our biggest-ever hiring push. Our supporters responded with incredible generosity at the end of the year, putting us in an excellent position to execute on our most ambitious growth plans.

In this post, I’d like to give an update on our recruiting efforts and successes, announce some major donations and grants we’ve received, and share a few other miscellaneous items.

In brief, our major announcements are:

  1. We have two new full-time research staff hires to announce.
  2. We’ve received $1.7 million in major donations and grants, $1 million of which came through a tax-advantaged fund for Canadian MIRI supporters.

For more details, see below.

August 2018 Newsletter

July 2018 Newsletter

New paper: “Forecasting using incomplete models”

MIRI Research Associate Vanessa Kosoy has a paper out on issues in naturalized induction: “Forecasting using incomplete models”. Abstract:

We consider the task of forecasting an infinite sequence of future observations based on some number of past observations, where the probability measure generating the observations is “suspected” to satisfy one or more of a set of incomplete models, i.e., convex sets in the space of probability measures.

This setting is in some sense intermediate between the realizable setting where the probability measure comes from some known set of probability measures (which can be addressed using e.g. Bayesian inference) and the unrealizable setting where the probability measure is completely arbitrary.

We demonstrate a method of forecasting which guarantees that, whenever the true probability measure satisfies an incomplete model in a given countable set, the forecast converges to the same incomplete model in the (appropriately normalized) Kantorovich-Rubinstein metric. This is analogous to merging of opinions for Bayesian inference, except that convergence in the Kantorovich-Rubinstein metric is weaker than convergence in total variation.

Kosoy’s work builds on logical inductors to create a cleaner (purely learning-theoretic) formalism for modeling complex environments, showing that the methods developed in “Logical induction” are useful for applications in classical sequence prediction unrelated to logic.

“Forecasting using incomplete models” also shows that the intuitive concept of an “incomplete” or “partial” model has an elegant and useful formalization related to Knightian uncertainty. Additionally, Kosoy shows that using incomplete models to generalize Bayesian inference allows an agent to make predictions about environments that can be as complex as the agent itself, or more complex — as contrasted with classical Bayesian inference.
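To make these objects concrete, here is a minimal Python sketch, not taken from the paper: it treats an incomplete model as a single convex constraint on probability measures (here, just a bound on the mean), and it compares a forecast against the true process in the Wasserstein-1 metric, which for empirical distributions on the line coincides with the Kantorovich-Rubinstein metric. All names and the toy constraint are illustrative assumptions, not the paper’s notation.

    # Toy illustration; assumptions for exposition, not the paper's construction.
    import numpy as np
    from scipy.stats import wasserstein_distance

    def satisfies_incomplete_model(samples, mean_bounds=(0.4, 0.6)):
        """An incomplete model asserts only a convex constraint on the
        measure; this toy one says the mean lies in `mean_bounds`,
        leaving every other property of the measure unspecified."""
        lo, hi = mean_bounds
        return lo <= np.mean(samples) <= hi

    rng = np.random.default_rng(0)

    # The true process (unknown to the forecaster) happens to satisfy
    # the incomplete model: its mean, 0.55, lies in [0.4, 0.6].
    true_observations = rng.binomial(1, 0.55, size=10_000)

    # The forecaster's predictive samples come from a different measure
    # that also satisfies the same convex constraint.
    forecast_samples = rng.binomial(1, 0.50, size=10_000)

    print(satisfies_incomplete_model(true_observations))   # True
    print(satisfies_incomplete_model(forecast_samples))    # True

    # Kantorovich-Rubinstein (Wasserstein-1) distance between the two
    # empirical distributions; small here, roughly |0.55 - 0.50|.
    print(wasserstein_distance(true_observations, forecast_samples))

The sketch captures only the two ingredients the paper formalizes: membership in a convex set of measures stands in for “satisfying an incomplete model,” and Wasserstein-1 distance stands in for the paper’s notion of convergence. The actual result concerns infinite observation sequences and a countable set of such models, and guarantees that the forecast converges, in the appropriately normalized Kantorovich-Rubinstein metric, to any model in the set that the true measure satisfies.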

For more of Kosoy’s research, see “Optimal polynomial-time estimators” and the Intelligent Agent Foundations Forum.
 


June 2018 Newsletter

May 2018 Newsletter
