New paper: “Optimal polynomial-time estimators”


MIRI Research Associate Vadim Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract:

The concept of an “approximation algorithm” is usually only applied to optimization problems, since in optimization problems the performance of the algorithm on any given input is a continuous parameter. We introduce a new concept of approximation applicable to decision problems and functions, inspired by Bayesian probability. From the perspective of a Bayesian reasoner with limited computational resources, the answer to a problem that cannot be solved exactly is uncertain and therefore should be described by a random variable. It thus should make sense to talk about the expected value of this random variable, an idea we formalize in the language of average-case complexity theory by introducing the concept of “optimal polynomial-time estimators.” We prove some existence theorems and completeness results, and show that optimal polynomial-time estimators exhibit many parallels with “classical” probability theory.

Kosoy’s optimal estimators framework attempts to model general-purpose reasoning under deductive limitations from a different angle than Scott Garrabrant’s logical inductors framework, putting more focus on computational efficiency and tractability.
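To give a rough sense of the optimality notion the abstract describes, here is a simplified sketch in mean-squared-error form; the notation (μ_k, δ) is illustrative, and the paper’s actual definitions are more general (they involve additional parameters such as error spaces and broader classes of algorithms). Roughly: given a bounded function f together with a family of input distributions μ_k, a polynomial-time algorithm P is an “optimal polynomial-time estimator” of f when no other polynomial-time algorithm Q achieves asymptotically lower expected squared error:

\[
  \mathbb{E}_{x \sim \mu_k}\!\left[(P(x) - f(x))^2\right]
  \;\le\;
  \mathbb{E}_{x \sim \mu_k}\!\left[(Q(x) - f(x))^2\right] + \delta(k),
\]

where δ(k) is an error term that vanishes suitably fast as k grows. On this reading, P(x) plays the role of a resource-bounded reasoner’s subjective expected value for the uncertain quantity f(x).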

AI Alignment: Why It’s Hard, and Where to Start


Back in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled “The AI Alignment Problem: Why It’s Hard, And Where To Start.” The video for this talk is now available on YouTube.

We have an approximately complete transcript of the talk and Q&A session here, slides here, and notes and references here. You may also be interested in a shorter version of this talk I gave at NYU in October, “Fundamental Difficulties in Aligning Advanced AI.”

In the talk, I introduce some open technical problems in AI alignment and discuss the bigger picture into which they fit, as well as what it’s like to work in this relatively new field. Below, I’ve provided an abridged transcript of the talk, with some accompanying slides.

Talk outline:

4.1. Recent topics
4.2. Older work and basics
4.3. Where to start

December 2016 Newsletter


Post-fundraiser update


We concluded our 2016 fundraiser eleven days ago. Progress was slow at first, but our donors came together in a big way in the final week, nearly doubling our final total. In the end, donors raised $589,316 over six weeks, making this our second-largest fundraiser to date. I’m heartened by this show of support, and extremely grateful to the 247 distinct donors who contributed.

We made substantial progress toward our immediate funding goals, but ultimately fell short of our $750,000 target by about $160k. We have a number of hypotheses as to why, but our best guess at the moment is that we missed our target because more donors than expected are waiting until the end of the year to decide whether (and how much) to give.

We were experimenting this year with running just one fundraiser in the fall (replacing the summer and winter fundraisers we’ve run in years past) and spending less time over the year on fundraising. Our fundraiser ended up looking more like recent summer funding drives, however. This suggests that either many donors are waiting to give in November and December, or we’re seeing a significant decline in donor support. Looking at our donor database, preliminary data weakly suggests that many traditionally-winter donors are holding off, but it’s still hard to say.

This dip in donations so far is offset by the Open Philanthropy Project’s generous $500k grant, which raises our overall 2016 revenue from $1.23M to $1.73M. However, $1.73M would still not be enough to cover our 2016 expenses, much less our expenses for the coming year:

(2016 and 2017 expenses are projected, and our 2016 revenue is as of November 11.)

To a first approximation, this level of support means that we can continue to move forward without scaling back our plans too much, but only if donors come together to fill what’s left of our $160k gap as the year draws to a close:

(Progress bar: $0 to $160,000 raised toward closing the remaining gap.)
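As a quick check on how the figures above fit together (all numbers are taken from this post): the roughly $160k gap is the difference between the $750,000 target and the $589,316 raised, and the revised revenue figure is the earlier 2016 revenue plus the Open Philanthropy Project grant:

\[
  \$750{,}000 - \$589{,}316 \approx \$160\mathrm{k},
  \qquad
  \$1.23\mathrm{M} + \$0.5\mathrm{M} = \$1.73\mathrm{M}.
\]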

In practical terms, closing this gap will mean that we can likely trial more researchers over the coming year, spend less senior staff time on raising funds, and take on more ambitious outreach and researcher-pipeline projects. E.g., an additional expected $75k/year would likely cause us to trial one extra researcher over the next 18 months (maxing out at 3–5 trials). Currently, we have a number of potential researchers that we would like to give a 3-month trial, and we lack the funding to trial all of them. If we don’t close the gap this winter, then it’s also likely that we’ll need to move significantly more slowly on hiring and trialing new researchers going forward.

Our main priority in fundraisers is generally to secure stable, long-term flows of funding to pay for researcher salaries — “stable” not necessarily at the level of individual donors, but at least at the level of the donor community at large. If we make up our shortfall in November and December, this will suggest that we shouldn’t expect big year-to-year fluctuations in support, and therefore that we can fairly quickly convert marginal donations into AI safety researchers. If we don’t make up our shortfall soon, this will suggest that we should be generally more prepared for surprises, which will require building up a bigger runway before growing the team very much.

Although we aren’t officially running a fundraiser, we still have quite a bit of ground to cover, and we’ll need support from a lot of new and old donors alike to get the rest of the way to our $750k target. Visit intelligence.org/donate to donate toward this goal, and do spread the word to people who may be interested in supporting our work.

You have my gratitude, again, for helping us get this far. It isn’t clear yet whether we’re out of the woods, but we’re now in a position where success in our 2016 fundraising is definitely a realistic option, provided that we put some work into it over the next two months. Thank you.