MIRI May Newsletter: Intelligence Explosion Microeconomics and Other Publications


Greetings From the Executive Director

Dear friends,

It’s been a busy month!

Mostly, we’ve been busy publishing things. As you’ll see below, Singularity Hypotheses has now been published, and it includes four chapters by MIRI researchers or research associates. We’ve also published two new technical reports — one on decision theory and another on intelligence explosion microeconomics — and several new blog posts analyzing various issues relating to the future of AI. Finally, we added four older articles to the research page, including Ideal Advisor Theories and Personal CEV (2012).

In our April newsletter we spoke about our April 11th party in San Francisco, celebrating our relaunch as the Machine Intelligence Research Institute and our transition to mathematical research. Additional photos from that event are now available as a Facebook photo album. We’ve also uploaded a video from the event, in which I spend 2 minutes explaining MIRI’s relaunch and some tentative results from the April workshop. After that, visiting researcher Qiaochu Yuan spends 4 minutes explaining one of MIRI’s core research questions: the Löbian obstacle to self-modifying systems.

Some of the research from our April workshop will be published in June, so if you’d like to read about those results right away, you might like to subscribe to our blog.

Cheers!

Luke Muehlhauser

Executive Director

Read more »

New Transcript: Yudkowsky and Aaronson



In When Will AI Be Created?, I referred to a bloggingheads.tv conversation between Eliezer Yudkowsky and Scott Aaronson. A transcript of that dialogue is now available, thanks to MIRI volunteers Ethan Dickinson, Daniel Kokotajlo, and Rick Schwall.

See also the transcript for a bloggingheads.tv conversation between Eliezer Yudkowsky and Massimo Pigliucci.

To join these volunteers in assisting our cause, visit MIRIvolunteers.org!

Sign up for DAGGRE to improve science & technology forecasting


In When Will AI Be Created?, I named four methods that might improve our forecasts of AI and other important technologies. Two of these methods were explicit quantification and leveraging aggregation, as exemplified by IARPA’s ACE program, which aims to “dramatically enhance the accuracy, precision, and timeliness of… forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many… analysts.”
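To make the aggregation idea concrete, here is a minimal sketch (my own illustration, not a description of any ACE team's actual method) of one standard way to pool several forecasters' probabilities: average their log-odds, optionally pushing the pooled forecast away from 0.5 ("extremizing"). All numbers are invented.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def aggregate(probabilities, extremize=1.0):
    """Pool individual forecasts by averaging log-odds.

    extremize > 1 pushes the pooled forecast away from 0.5; 1.0 is a plain average.
    """
    mean_logodds = sum(logit(p) for p in probabilities) / len(probabilities)
    return inv_logit(extremize * mean_logodds)

# Three hypothetical analysts forecast the same event:
print(round(aggregate([0.6, 0.7, 0.8]), 2))       # 0.71, plain log-odds pooling
print(round(aggregate([0.6, 0.7, 0.8], 2.0), 2))  # 0.85, extremized pooling
```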

GMU’s DAGGRE program, one of five teams participating in ACE, recently announced a transition from geopolitical forecasting to science & technology forecasting:

DAGGRE will continue, but it will transition from geo-political forecasting to science and technology (S&T) forecasting to better use its combinatorial capabilities. We will have a brand new shiny, friendly and informative interface co-designed by Inkling Markets, opportunities for you to provide your own forecasting questions and more!

Another exciting development is that our S&T forecasting prediction market will be open to everyone in the world who is at least eighteen years of age. We’re going global!

If you want to help improve humanity’s ability to forecast important technological developments like AI, please register for DAGGRE’s new S&T prediction website here.

I did.

Four Articles Added to Research Page


Four older articles have been added to our research page.

The first is the early draft of Christiano et al.’s “Definability of ‘Truth’ in Probabilistic Logic” previously discussed here and here. The draft was last updated on April 2, 2013.

The second paper is a cleaned-up version of an article originally published in December 2012 by Luke Muehlhauser and Chris Williamson to Less Wrong: “Ideal Advisor Theories and Personal CEV.”

The third and fourth papers were originally published by Bill Hibbard in the AGI 2012 Conference Proceedings: “Avoiding Unintended AI Behaviors” and “Decision Support for Safe AI Design.” Hibbard wrote these articles before he became a MIRI research associate, but he gave us permission to include them on our research page because (1) he became a MIRI research associate during the AGI-12 conference at which the articles were published, (2) the articles were partly inspired by a public dialogue with Luke Muehlhauser, and (3) the articles build on MIRI’s paper “Intelligence Explosion and Machine Ethics.”

As mentioned in our December 2012 newsletter, “Avoiding Unintended AI Behaviors” was awarded MIRI’s $1000 Turing Prize for Best AGI Safety Paper. The prize was awarded in honor of Alan Turing, who not only discovered some of the key ideas of machine intelligence, but also grasped its importance, writing that “…it seems probable that once [human-level machine thinking] has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…”

When Will AI Be Created?


Strong AI appears to be the topic of the week. Kevin Drum at Mother Jones thinks AIs will be as smart as humans by 2040. Karl Smith at Forbes and “M.S.” at The Economist seem to roughly concur with Drum on this timeline. Moshe Vardi, the editor-in-chief of the world’s most-read computer science magazine, predicts that “by 2045 machines will be able to do if not any work that humans can do, then a very significant fraction of the work that humans can do.”

But predicting AI is more difficult than many people think.

To explore these difficulties, let’s start with a 2009 bloggingheads.tv conversation between MIRI researcher Eliezer Yudkowsky and MIT computer scientist Scott Aaronson, author of the excellent Quantum Computing Since Democritus. Early in that dialogue, Yudkowsky asked:

It seems pretty obvious to me that at some point in [one to ten decades] we’re going to build an AI smart enough to improve itself, and [it will] “foom” upward in intelligence, and by the time it exhausts available avenues for improvement it will be a “superintelligence” [relative] to us. Do you feel this is obvious?

Aaronson replied:

The idea that we could build computers that are smarter than us… and that those computers could build still smarter computers… until we reach the physical limits of what kind of intelligence is possible… that we could build things that are to us as we are to ants — all of this is compatible with the laws of physics… and I can’t find a reason of principle that it couldn’t eventually come to pass…

The main thing we disagree about is the time scale… a few thousand years [before AI] seems more reasonable to me.

Those two estimates — several decades vs. “a few thousand years” — have wildly different policy implications.

If there’s a good chance that AI will replace humans at the steering wheel of history in the next several decades, then we’d better put our gloves on and get to work making sure that this event has a positive rather than negative impact. But if we can be pretty confident that AI is thousands of years away, then we needn’t worry about AI for now, and we should focus on other global priorities. Thus it appears that “When will AI be created?” is a question with high value of information for our species.
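As a toy illustration of what "value of information" means here (all probabilities and payoffs are invented for the example, not estimates anyone at MIRI endorses): suppose we must choose today between prioritizing AI safety work and focusing elsewhere, and the payoff of each choice depends on whether AI arrives within decades or only after millennia. Learning the true timeline before choosing raises the expected payoff, and that difference is the value of the information.

```python
# Toy value-of-information calculation; every number below is made up.
p_soon = 0.5  # assumed probability that AI arrives within decades

# payoffs[action][state]: how well each choice turns out in each world
payoffs = {
    "prioritize_ai_safety": {"soon": 10, "far": 2},
    "focus_elsewhere":      {"soon": 0,  "far": 8},
}

def expected_value(action, p_soon):
    return p_soon * payoffs[action]["soon"] + (1 - p_soon) * payoffs[action]["far"]

# Without knowing the timeline, we commit to the single best action in advance.
ev_without_info = max(expected_value(a, p_soon) for a in payoffs)

# With perfect information, we pick the best action separately in each world.
ev_with_info = (p_soon * max(payoffs[a]["soon"] for a in payoffs)
                + (1 - p_soon) * max(payoffs[a]["far"] for a in payoffs))

print(ev_without_info)                 # 6.0
print(ev_with_info)                    # 9.0
print(ev_with_info - ev_without_info)  # 3.0  <- value of (perfect) information
```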

Let’s take a moment to review the forecasting work that has been done, and see what conclusions we might draw about when AI will likely be created.

Read more »

Advise MIRI with Your Domain-Specific Expertise


MIRI currently has a few dozen volunteer advisors on a wide range of subjects, but we need more! If you’d like to help MIRI pursue its mission more efficiently, please sign up to be a MIRI advisor.

If you sign up, we will occasionally ask you questions, or send you early drafts of upcoming writings for feedback.

We don’t always want technical advice (“Well, you can do that with a relativized arithmetical hierarchy…”); often, we just want to understand how different groups of experts respond to our writing (“The tone of this paragraph rubs me the wrong way because…”).

At the moment, we are most in need of advisors on the following subjects:

Even if you don’t have much time to help, please sign up! We will of course respect your own limits on availability.

Five theses, two lemmas, and a couple of strategic implications


MIRI’s primary concern about self-improving AI isn’t so much that it might be created by ‘bad’ actors rather than ‘good’ actors in the global sphere; rather, our main concern is to remedy the situation in which no one at all knows how to create a self-modifying AI with known, stable preferences. (This is why we see the main problem in terms of doing research and encouraging others to perform relevant research, rather than trying to stop ‘bad’ actors from creating AI.)

This, and a number of other basic strategic views, can be summed up as a consequence of five theses about purely factual questions concerning AI, and two lemmas we think they imply:

Intelligence explosion thesis. A sufficiently smart AI will be able to realize large, reinvestable cognitive returns from things it can do on a short timescale, like improving its own cognitive algorithms or purchasing/stealing lots of server time. The intelligence explosion will hit very high levels of intelligence before it runs out of things it can do on a short timescale. See: Chalmers (2010); Muehlhauser & Salamon (2013); Yudkowsky (2013).
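A deliberately crude toy model of the compounding dynamic (my own illustration, not the model developed in Yudkowsky 2013): if each improvement cycle converts some fraction of current capability into additional capability, growth compounds until it hits whatever ceiling short-timescale improvements allow.

```python
# Toy compounding-returns model; the numbers are arbitrary illustrations.
capability = 1.0    # starting capability, in arbitrary units
return_rate = 0.5   # assumed fraction of capability reinvested as improvement per cycle
ceiling = 1e6       # assumed limit on what short-timescale improvements can reach

cycles = 0
while capability < ceiling:
    capability *= 1 + return_rate  # each cycle's gains feed the next cycle
    cycles += 1

print(cycles)  # 35 cycles to cross the assumed ceiling
```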

Orthogonality thesis. Mind design space is huge enough to contain agents with almost any set of preferences, and such agents can be instrumentally rational about achieving those preferences, and have great computational power. For example, mind design space theoretically contains powerful, instrumentally rational agents which act as expected paperclip maximizers and always consequentialistically choose the option which leads to the greatest number of expected paperclips. See: Bostrom (2012); Armstrong (2013).
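One way to see the claim's structure (a toy sketch with invented options and payoffs): the same expected-utility machinery runs unchanged whether the utility function counts paperclips or happy sentient beings, which is the sense in which optimization power and final preferences can vary independently.

```python
# Toy illustration: identical decision machinery, interchangeable utility functions.

def best_option(options, utility):
    """Return the option with the highest expected utility.

    options maps an option name to a list of (probability, outcome) pairs.
    """
    def expected_utility(lottery):
        return sum(p * utility(outcome) for p, outcome in lottery)
    return max(options, key=lambda name: expected_utility(options[name]))

# Invented outcomes for two possible actions:
options = {
    "build_paperclip_factory": [(0.9, {"paperclips": 1000, "happy_beings": 0}),
                                (0.1, {"paperclips": 0,    "happy_beings": 0})],
    "help_humans":             [(0.9, {"paperclips": 0,    "happy_beings": 1000}),
                                (0.1, {"paperclips": 0,    "happy_beings": 0})],
}

paperclip_utility = lambda outcome: outcome["paperclips"]
humane_utility    = lambda outcome: outcome["happy_beings"]

print(best_option(options, paperclip_utility))  # build_paperclip_factory
print(best_option(options, humane_utility))     # help_humans
```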

Convergent instrumental goals thesis. Most possible final goals give rise to a common subset of instrumental goals. For example, if you want to build a galaxy full of happy sentient beings, you will need matter and energy, and the same is also true if you want to make paperclips. This thesis is why we’re worried about very powerful entities even if they have no explicit dislike of us: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.” Note though that by the Orthogonality Thesis you can always have an agent which explicitly, terminally prefers not to do any particular thing — an AI which does love you will not want to break you apart for spare atoms. See: Omohundro (2008); Bostrom (2012).

Complexity of value thesis. It takes a large chunk of Kolmogorov complexity to describe even idealized human preferences. That is, what we ‘should’ do is a computationally complex mathematical object even after we take the limit of reflective equilibrium (judging your own thought processes) and other standard normative theories. A superintelligence with a randomly generated utility function would not do anything we see as worthwhile with the galaxy, because it is unlikely to accidentally hit on final preferences for having a diverse civilization of sentient beings leading interesting lives. See: Yudkowsky (2011); Muehlhauser & Helm (2013).

Fragility of value thesis. Getting a goal system 90% right does not give you 90% of the value, any more than correctly dialing 9 out of 10 digits of my phone number will connect you to somebody who’s 90% similar to Eliezer Yudkowsky. There are multiple dimensions of value such that eliminating any one of them would eliminate almost all value from the future. For example, an alien species which shared almost all of human value except that their parameter setting for “boredom” was much lower might devote most of their computational power to replaying a single peak, optimal experience over and over again with slightly different pixel colors (or the equivalent thereof). Friendly AI is more like a satisficing threshold than something where we’re trying to eke out successive 10% improvements. See: Yudkowsky (2009, 2011).
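A toy illustration of the fragility point (dimensions and numbers invented for this sketch): if the value of a future behaves more like a conjunction over dimensions of value than a sum, then getting nine dimensions out of ten right yields close to zero value, not 90% of it.

```python
# Toy contrast between "additive" and "conjunctive" models of value.
# The dimensions and scores are invented placeholders.
dimensions = {"consciousness": 1.0, "boredom": 1.0, "novelty": 1.0,
              "relationships": 1.0, "freedom": 1.0}

def additive_value(dims):
    return sum(dims.values()) / len(dims)  # "90% right gives 90% of the value"

def conjunctive_value(dims):
    return min(dims.values())              # one missing dimension ruins everything

broken = dict(dimensions, boredom=0.0)     # everything right except "boredom"
print(additive_value(broken))              # 0.8
print(conjunctive_value(broken))           # 0.0
```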

These five theses seem to imply two important lemmas:

Indirect normativity. Programming a self-improving machine intelligence to implement a grab-bag of things-that-seem-like-good-ideas will lead to a bad outcome, regardless of how good the apple pie and motherhood sounded. E.g., if you give the AI a final goal to “make people happy” it’ll just turn people’s pleasure centers up to maximum. “Indirectly normative” is Bostrom’s term for an AI that calculates the ‘right’ thing to do via, e.g., looking at human beings and modeling their decision processes and idealizing those decision processes (e.g. what you would-want if you knew everything the AI knew and understood your own decision processes, reflective equilibria, ideal advisor theories, and so on), rather than being told a direct set of ‘good ideas’ by the programmers. Indirect normativity is how you deal with Complexity and Fragility. If you can succeed at indirect normativity, then small variances in essentially good intentions may not matter much — that is, if two different projects do indirect normativity correctly, but one project has 20% nicer and kinder researchers, we could still hope that the end results would be of around equal expected value. See: Muehlhauser & Helm (2013).

Large bounded extra difficulty of Friendliness. You can build a Friendly AI (by the Orthogonality Thesis), but you need a lot of work and cleverness to get the goal system right. Probably more importantly, the rest of the AI needs to meet a higher standard of cleanness in order for the goal system to remain invariant through a billion sequential self-modifications. Any AI smart enough to do clean self-modification will tend to do so regardless, but the problem is that an intelligence explosion might get started with AIs substantially less smart than that — for example, with AIs that rewrite themselves using genetic algorithms or other such means that don’t preserve a set of consequentialist preferences. In this case, building a Friendly AI could mean that our AI has to be smarter about self-modification than the minimal AI that could undergo an intelligence explosion. See: Yudkowsky (2008) and Yudkowsky (2013).

These lemmas in turn have two major strategic implications:

  1. We have a lot of work to do on things like indirect normativity and stable self-improvement. At this stage a lot of this work looks really foundational — that is, we can’t describe how to do these things using infinite computing power, let alone finite computing power.  We should get started on this work as early as possible, since basic research often takes a lot of time.
  2. There needs to be a Friendly AI project that has some sort of boost over competing projects which don’t live up to a (very) high standard of Friendly AI work — a project which can successfully build a stable-goal-system self-improving AI, before a less-well-funded project hacks together a much sloppier self-improving AI.  Giant supercomputers may be less important to this than being able to bring together the smartest researchers (see the open question posed in Yudkowsky 2013) but the required advantage cannot be left up to chance.  Leaving things to default means that projects less careful about self-modification would have an advantage greater than casual altruism is likely to overcome.

AGI Impact Experts and Friendly AI Experts


MIRI’s mission is “to ensure that the creation of smarter-than-human intelligence has a positive impact.” A central strategy for achieving this mission is to find and train what one might call “AGI impact experts” and “Friendly AI experts.”

AGI impact experts develop skills related to predicting technological development (e.g. building computational models of AI development or reasoning about intelligence explosion microeconomics), predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI. For overviews, see Bostrom & Yudkowsky (2013); Muehlhauser & Salamon (2013).

Friendly AI experts develop skills useful for the development of mathematical architectures that can enable AGIs to be trustworthy (or “human-friendly”). This work is carried out at MIRI research workshops and in various publications, e.g. Christiano et al. (2013); Hibbard (2013). Note that the term “Friendly AI” was selected (in part) to avoid the suggestion that we understand the subject very well — a phrase like “Ethical AI” might sound like the kind of thing one can learn a lot about by looking it up in an encyclopedia, but our present understanding of trustworthy AI is too impoverished for that.

Now, what do we mean by “expert”?


Read more »