MIRI May Newsletter: Intelligence Explosion Microeconomics and Other Publications


Greetings From the Executive Director

Dear friends,

It’s been a busy month!

Mostly, we’ve been busy publishing things. As you’ll see below, Singularity Hypotheses has now been published, and it includes four chapters by MIRI researchers or research associates. We’ve also published two new technical reports — one on decision theory and another on intelligence explosion microeconomics — and several new blog posts analyzing various issues relating to the future of AI. Finally, we added four older articles to the research page, including Ideal Advisor Theories and Personal CEV (2012).

In our April newsletter we spoke about our April 11th party in San Francisco, celebrating our relaunch as the Machine Intelligence Research Institute and our transition to mathematical research. Additional photos from that event are now available as a Facebook photo album. We’ve also uploaded a video from the event, in which I spend 2 minutes explaining MIRI’s relaunch and some tentative results from the April workshop. After that, visiting researcher Qiaochu Yuan spends 4 minutes explaining one of MIRI’s core research questions: the Löbian obstacle to self-modifying systems.

Some of the research from our April workshop will be published in June, so if you’d like to read about those results right away, you might like to subscribe to our blog.

Cheers!

Luke Muehlhauser

Executive Director

Intelligence Explosion Microeconomics

Our largest new publication this month is Yudkowsky’s 91-page Intelligence Explosion Microeconomics (discuss here). In this article, Yudkowsky takes some initial steps toward tackling the key quantitative issue in the intelligence explosion, “reinvestable returns on cognitive investments”: what kind of returns can you get from an investment in cognition, can you reinvest those returns to make yourself even smarter, and does this process die out or blow up? The article can be thought of as a compact and hopefully more coherent successor to the AI Foom Debate of 2008 between Yudkowsky and GMU economist Robin Hanson.

Here is the abstract:

I. J. Good’s thesis of the ‘intelligence explosion’ is that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version of itself, and that this process could continue enough to vastly exceed human intelligence. As Sandberg (2010) correctly notes, there are several attempts to lay down return-on-investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with I. J. Good’s intelligence explosion thesis as such.

I identify the key issue as returns on cognitive reinvestment — the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs. There are many phenomena in the world which have been argued as evidentially relevant to this question, from the observed course of hominid evolution, to Moore’s Law, to the competence over time of machine chess-playing systems, and many more. I go into some depth on the sort of debates which then arise on how to interpret such evidence. I propose that the next step forward in analyzing positions on the intelligence explosion would be to formalize return-on-investment curves, so that each stance can say formally which possible microfoundations they hold to be falsified by historical observations already made. More generally, I pose multiple open questions of ‘returns on cognitive reinvestment’ or ‘intelligence explosion microeconomics’. Although such questions have received little attention thus far, they seem highly relevant to policy choices affecting the outcomes for Earth-originating intelligent life.
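To make the “die out or blow up” question concrete, here is a minimal toy iteration; this is our own illustration rather than a model from the paper, and every parameter in it is arbitrary. Capability is repeatedly reinvested, and a made-up returns exponent alpha controls whether each round of reinvestment yields diminishing, roughly constant, or compounding returns.

    # Toy sketch (not from the paper): iterate a stylized
    # return-on-cognitive-reinvestment curve and see whether the
    # process fizzles or compounds. All parameters are arbitrary.

    def reinvest(alpha, rounds=30, capability=1.0, rate=0.1):
        """Each round, reinvest current capability to buy a gain of
        rate * capability**alpha, then add that gain to capability."""
        for _ in range(rounds):
            capability += rate * capability ** alpha
        return capability

    for alpha in (0.5, 1.0, 1.5):
        print(f"alpha={alpha}: capability after 30 rounds = {reinvest(alpha):.3g}")

With these arbitrary parameters, an alpha below 1 yields only modest growth over 30 rounds, an alpha of exactly 1 yields steady exponential growth, and an alpha above 1 compounds faster and faster. The open empirical question the paper poses is which regime, if any, the real returns curve resembles.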

There is also a dedicated mailing list for discussing the paper; it will be small and restricted to technical discussants. Apply for it here.

When Will AI Be Created?

In part, intelligence explosion microeconomics seeks to answer the question “How quickly will human-level AI self-improve to become superintelligent?” Another major question in AI forecasting is, of course: “When will we create human-level AI?”

This is another difficult question, and Luke Muehlhauser surveyed those difficulties in a recent (and quite detailed) blog post: When Will AI Be Created? He concludes:

Given these considerations, I think the most appropriate stance on the question “When will AI be created?” is something like this:

“We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.”

How confident is “confident”? Let’s say 70%. That is, I think it is unreasonable to be 70% confident that AI is fewer than 30 years away, and I also think it’s unreasonable to be 70% confident that AI is more than 100 years away.

This statement admits my inability to predict AI, but it also constrains my probability distribution over “years of AI creation” quite a lot.

I think the considerations above justify these constraints on my probability distribution, but I haven’t spelled out my reasoning in great detail. That would require more analysis than I can present here. But I hope I’ve at least summarized the basic considerations on this topic, and those with different probability distributions than mine can now build on my work here to try to justify them.
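To see how these two confidence bounds constrain a distribution in practice, here is a rough numerical sketch; it is our own illustration rather than anything from the post, and the lognormal parameters are arbitrary. It checks whether a candidate distribution over years-until-AI keeps both P(AI in fewer than 30 years) and P(AI in more than 100 years) below 0.7.

    # Illustrative check (not from Muehlhauser's post): does a candidate
    # distribution over "years until human-level AI" respect both
    # constraints from the quote above? The lognormal parameters here
    # are arbitrary choices for the sake of the example.
    from math import log
    from statistics import NormalDist

    median_years, sigma = 50, 1.0               # arbitrary illustrative values
    log_t = NormalDist(mu=log(median_years), sigma=sigma)  # ln(T) ~ Normal

    p_under_30 = log_t.cdf(log(30))             # P(T < 30 years)
    p_over_100 = 1 - log_t.cdf(log(100))        # P(T > 100 years)

    print(f"P(AI in < 30 years)  = {p_under_30:.2f}")
    print(f"P(AI in > 100 years) = {p_over_100:.2f}")
    print("Within both 70% bounds:", p_under_30 < 0.7 and p_over_100 < 0.7)

Many other distributions also fit inside those bounds, of course; the point of the quote is only that any distribution confidently concentrated below 30 years or above 100 years is ruled out.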

Muehlhauser also explains four methods for reducing our uncertainty about AI timelines: explicit quantification, leveraging aggregation, signposting the future, and decomposing the phenomena.

As it turns out, you can participate in the first two methods for improving our AI forecasts by signing up for GMU’s DAGGRE program. Muehlhauser himself has signed up.

Muehlhauser also wrote a 400-word piece on the difficulty of AI forecasting for Quartz magazine: Robots will take our jobs, but it’s hard to say when. Here’s a choice quote:

We’ve had the computing power of a honeybee’s brain for quite a while now, but that doesn’t mean we know how to build tiny robots that fend for themselves outside the lab, find their own sources of energy, and communicate with others to build their homes in the wild.


Singularity Hypotheses Published

Singularity Hypotheses: A Scientific and Philosophical Assessment has now been published by Springer, in hardcover and ebook forms.

The book contains 20 chapters about the prospect of machine superintelligence, including 4 chapters by MIRI researchers and research associates.

For more details, see the blog post.

Timeless Decision Theory Paper Published

During his time as a research fellow for MIRI, Alex Altair wrote an article on Timeless Decision Theory (TDT) that has now been published: “A Comparison of Decision Algorithms on Newcomblike Problems.”

Altair’s article is both more succinct and more precise in its formulation of TDT than Yudkowsky’s earlier paper “Timeless Decision Theory.” Thus, Altair’s paper should serve as a handy introduction to TDT for philosophers, computer scientists, and mathematicians, while Yudkowsky’s paper remains required reading for anyone interested in developing TDT further, since it covers more ground than Altair’s paper.

For a gentle introduction to the entire field of normative decision theory (including TDT), see Muehlhauser and Williamson’s Decision Theory FAQ.

AGI Impacts Experts and Friendly AI Experts

In AGI Impacts Experts and Friendly AI Experts, Luke Muehlhauser explains the two types of experts MIRI hopes to cultivate.

AGI impacts experts develop skills related to predicting technological development (e.g. building computational models of AI development or reasoning about intelligence explosion microeconomics), predicting AGI’s likely impacts on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI. For overviews, see Bostrom & Yudkowsky (2013); Muehlhauser & Salamon (2013).

Friendly AI experts develop skills useful for the development of mathematical architectures that can enable AGIs to be trustworthy (or “human-friendly”). This work is carried out at MIRI research workshops and in various publications, e.g. Christiano et al. (2013); Hibbard (2013). Note that the term “Friendly AI” was selected (in part) to avoid the suggestion that we understand the subject very well — a phrase like “Ethical AI” might sound like the kind of thing one can learn a lot about by looking it up in an encyclopedia, but our present understanding of trustworthy AI is too impoverished for that.

For more details on which skills these kinds of experts should develop, read the blog post.

MIRI’s Mission in Five Theses and Two Lemmas

Yudkowsky sums up MIRI’s research mission in the blog post Five theses, two lemmas, and a couple of strategic implications. The five theses are:

  • Intelligence explosion
  • Orthogonality of intelligence and goals
  • Convergent instrumental goals
  • Complexity of value
  • Fragility of value

According to Yudkowsky, these theses imply two important lemmas:

  • Indirect normativity
  • Large bounded extra difficulty of Friendliness

In turn, these two lemmas have two important strategic implications:

  1. We have a lot of work to do on things like indirect normativity and stable self-improvement. At this stage a lot of this work looks really foundational — that is, we can’t describe how to do these things using infinite computing power, let alone finite computing power.  We should get started on this work as early as possible, since basic research often takes a lot of time.
  2. There needs to be a Friendly AI project that has some sort of boost over competing projects which don’t live up to a (very) high standard of Friendly AI work — a project which can successfully build a stable-goal-system self-improving AI, before a less-well-funded project hacks together a much sloppier self-improving AI. Giant supercomputers may be less important to this than being able to bring together the smartest researchers… but the required advantage cannot be left up to chance. Leaving things to default means that projects less careful about self-modification would have an advantage greater than casual altruism is likely to overcome.

For more details on the theses and lemmas, read the blog post and its linked articles.

Our Final Invention Available for Preorder

James Barrat, a documentary filmmaker for National Geographic, Discovery, PBS, and other broadcasters, has written a wonderful new book about the intelligence explosion called Our Final Invention: Artificial Intelligence and the End of the Human Era. It will be released October 1st, and is available for pre-order on Amazon.

Here are some blurbs from people who have read an advance copy:

“A hard-hitting book about the most important topic of this century and possibly beyond — the issue of whether our species can survive. I wish it was science fiction but I know it’s not.”

—Jaan Tallinn, co-founder of Skype

“The compelling story of humanity’s most critical challenge. A Silent Spring for the 21st Century.”

—Michael Vassar, former MIRI president

“Our Final Invention is a thrilling detective story, and also the best book yet written on the most important problem of the 21st century.”

—Luke Muehlhauser, MIRI executive director

“An important and disturbing book.”

—Huw Price, co-founder, Cambridge University Centre for the Study of Existential Risk

MIRI Needs Advisors

MIRI currently has a few dozen volunteer advisors on a wide range of subjects, but we need more! We’re especially hoping for additional advisors in mathematical logic, theoretical computer science, artificial intelligence, economics, and game theory.

If you sign up, we will occasionally ask you questions, or send you early drafts of upcoming writings for feedback.

We don’t always want technical advice (“Well, you can do that with a relativized arithmetical hierarchy…”); often, we just want to understand how different groups of experts respond to our writing (“The tone of this paragraph rubs me the wrong way because…”).

Even if you don’t have much time to help, please sign up! We will of course respect your own limits on availability.

Featured Volunteer – Florian Blumm

Florian Blumm has been one of our most active translators; he translates our materials from English into his native German. Formerly a software engineer, he is now traveling in Bolivia, extending his trip with remote contract work, an arrangement he has found conducive to volunteering for MIRI. After leaving a position as a Java engineer at a financial services company, he decided he would rather contribute directly to an important cause, and concluded that nothing is more important than mitigating existential risk from artificial intelligence.

Thanks, Florian!

To join Florian and dozens of other volunteers, visit MIRIvolunteers.org.