Davis on AI capability and motivation

Analysis

In a review of Superintelligence, NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily resist and outsmart the united efforts of eight billion people” and achieve “virtual omnipotence,” and that “though achieving intelligence is more or less easy, giving a computer an ethical point of view is really hard.”

These are all stronger than Bostrom’s actual claims. For example, Bostrom never characterizes building a generally intelligent machine as “easy.” Nor does he say that intelligence can be infinite or that it can produce “omnipotence.” Humans’ intelligence and accumulated knowledge give us a decisive advantage over chimpanzees, even though our power is limited in important ways. An AI need not be magical or all-powerful in order to have the same kind of decisive advantage over humanity.

Still, Davis’ article is one of the more substantive critiques of MIRI’s core assumptions that I have seen, and he addresses several deep issues that directly bear on AI forecasting and strategy. I’ll sketch out a response to his points here.


New annotated bibliography for MIRI’s technical agenda

News

Today we release a new annotated bibliography accompanying our new technical agenda, written by Nate Soares. If you’d like to discuss the paper, please do so here.


How could superintelligent systems be aligned with the interests of humanity? This annotated bibliography compiles some recent research relevant to that question, and categorizes it into six topics: (1) realistic world models; (2) idealized decision theory; (3) logical uncertainty; (4) Vingean reflection; (5) corrigibility; and (6) value learning. Within each subject area, references are organized in an order amenable to learning the topic. These are by no means the only six topics relevant to the study of alignment, but this annotated bibliography could be used by anyone who wants to understand the state of the art in one of these six particular areas of active research.

Today we’ve also released a page that collects the technical agenda and supporting reports. See our Technical Agenda page.

New mailing list for MIRI math/CS papers only

News

As requested, we now offer email notification of new technical (math or computer science) papers and reports from MIRI. Simply subscribe to the mailing list below.

This list sends one email per new technical paper, and contains only the paper’s title, author(s), and abstract, plus a link to the paper.


February 2015 Newsletter

Newsletters

Machine Intelligence Research Institute

Research Updates

News Updates

Other Updates

  • Top AI scientists and many others have signed an open letter advocating more research into robust and beneficial AI. The letter cites several MIRI papers.
  • Elon Musk has provided $10 million in funding for the types of research described in the open letter. The funding will be distributed in grants by the Future of Life Institute. Apply here.

As always, please don’t hesitate to let us know if you have any questions or comments.

Luke Muehlhauser

Executive Director




New report: “The value learning problem”

News

Today we release a new technical report by Nate Soares, “The value learning problem.” If you’d like to discuss the paper, please do so here.


A superintelligent machine would not automatically act as intended: it would act as programmed, but the fit between human intentions and formal specification could be poor. We discuss methods by which a system could be constructed to learn what to value. We highlight open problems specific to inductive value learning (from labeled training data), and raise a number of questions about the construction of systems which model the preferences of their operators and act accordingly.
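To make the inductive-value-learning setting concrete, here is a minimal sketch, assuming a toy setup in which outcomes are feature vectors and the labeled training data are human ratings of those outcomes. The linear utility model and all names below are illustrative assumptions, not constructions from the paper.

```python
# A minimal sketch of inductive value learning, assuming a toy setting:
# outcomes are feature vectors, and the "labels" are scalar human ratings.
# The linear-utility assumption and all names are illustrative, not from the paper.

def fit_utility(examples, lr=0.01, epochs=2000):
    """Fit a linear utility model w . x to (features, rating) pairs by gradient descent."""
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, rating in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - rating
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def choose_action(actions, predicted_outcome, w):
    """Pick the action whose predicted outcome scores highest under the learned utility."""
    def utility(x):
        return sum(wi * xi for wi, xi in zip(w, x))
    return max(actions, key=lambda a: utility(predicted_outcome(a)))

# Labeled training data: (outcome features, human rating).
examples = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 0.0)]
w = fit_utility(examples)
outcome = {"help": [1.0, 0.0], "harm": [0.0, 1.0]}
print(choose_action(["help", "harm"], lambda a: outcome[a], w))  # -> "help"
```

The open problems the paper raises concern what happens when the training labels, the feature representation, or the outcome model are imperfect; none of that difficulty is visible in a toy like this.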

This is the last of six new major reports which describe and motivate MIRI’s current research agenda at a high level.

New report: “Formalizing Two Problems of Realistic World Models”

News

Today we release a new technical report by Nate Soares, “Formalizing two problems of realistic world models.” If you’d like to discuss the paper, please do so here.


An intelligent agent embedded within the real world must reason about an environment which is larger than the agent, and learn how to achieve goals in that environment. We discuss attempts to formalize two problems: one of induction, where an agent must use sensory data to infer a universe which embeds (and computes) the agent, and one of interaction, where an agent must learn to achieve complex goals in the universe. We review related problems formalized by Solomonoff and Hutter, and explore challenges that arise when attempting to formalize analogous problems in a setting where the agent is embedded within the environment.
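For concreteness, here is a minimal sketch of Solomonoff-style induction of the kind the report reviews, restricted to a tiny hand-picked hypothesis class (full Solomonoff induction, which mixes over all programs, is incomputable). Note that this is the dualistic setting the paper contrasts with: none of these hypotheses embed or compute the agent itself. The hypothesis class and all names are illustrative assumptions.

```python
# A toy approximation of Solomonoff induction: weight each hypothesis by
# 2^-(description length), discard hypotheses inconsistent with the data,
# and predict with the resulting mixture. Hypotheses here are hand-picked
# generators of bit sequences, not programs on a universal Turing machine.

hypotheses = {
    # name: (description length in bits, generator for the t-th observation)
    "all_zeros": (2, lambda t: 0),
    "all_ones":  (2, lambda t: 1),
    "alternate": (3, lambda t: t % 2),
}

def posterior(observations):
    """Prior weight 2^-length, zeroed for hypotheses the data rules out, then normalized."""
    weights = {}
    for name, (length, gen) in hypotheses.items():
        consistent = all(gen(t) == obs for t, obs in enumerate(observations))
        weights[name] = 2.0 ** -length if consistent else 0.0
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()} if total else weights

def predict_next(observations):
    """Mixture prediction for the next bit."""
    post = posterior(observations)
    t = len(observations)
    return sum(p * hypotheses[name][1](t) for name, p in post.items())

print(predict_next([0, 1, 0]))  # only "alternate" survives -> 1.0
```

The report’s question is what replaces this picture when the agent is a part of the universe being inferred, so that the environment can contain and compute the reasoner; the sketch above simply assumes that problem away.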

This is the fifth of six new major reports which describe and motivate MIRI’s current research agenda at a high level.

New report: “Vingean Reflection: Reliable Reasoning for Self-Improving Agents”

News

Today we release a new technical report by Benja Fallenstein and Nate Soares, “Vingean Reflection: Reliable Reasoning for Self-Improving Agents.” If you’d like to discuss the paper, please do so here.


Today, human-level machine intelligence is in the domain of futurism, but there is every reason to expect that it will be developed eventually. Once artificial agents become able to improve themselves further, they may far surpass human intelligence, making it vitally important to ensure that the result of an “intelligence explosion” is aligned with human interests. In this paper, we discuss one aspect of this challenge: ensuring that the initial agent’s reasoning about its future versions is reliable, even if these future versions are far more intelligent than the current reasoner. We refer to reasoning of this sort as Vingean reflection.

A self-improving agent must reason about the behavior of its smarter successors in abstract terms, since if it could predict their actions in detail, it would already be as smart as them. This is called the Vingean principle, and we argue that theoretical work on Vingean reflection should focus on formal models that reflect this principle. However, the framework of expected utility maximization, commonly used to model rational agents, fails to do so. We review a body of work which instead investigates agents that use formal proofs to reason about their successors. While it is unlikely that real-world agents would base their behavior entirely on formal proofs, this appears to be the best currently available formal model of abstract reasoning, and work in this setting may lead to insights applicable to more realistic approaches to Vingean reflection.
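As a toy illustration of the proof-based approach, here is a minimal sketch in which a trivial checker stands in for a real formal proof system: the agent adopts a successor only when it can verify, abstractly, that every action available to the successor is safe, without predicting which action the (possibly smarter) successor would actually choose. All names and the action-space representation are illustrative assumptions, not constructions from the paper.

```python
# A minimal sketch of abstract reasoning about successors, with a trivial
# "proof checker" standing in for formal verification. The check quantifies
# over the successor's whole action space rather than simulating its choices,
# in the spirit of the Vingean principle.

SAFE_ACTIONS = {"observe", "report", "shutdown"}

class Successor:
    """A candidate self-modification, shipped with a claimed action space."""
    def __init__(self, name, action_space):
        self.name = name
        self.action_space = action_space

def proof_of_safety(successor):
    """Stand-in for checking a formal proof that every action the successor
    could take lies in the safe set -- an abstract guarantee that never
    predicts which particular action the successor will pick."""
    return all(a in SAFE_ACTIONS for a in successor.action_space)

def self_modify(current_name, candidates):
    """Adopt a successor only when its safety is provable in the abstract."""
    for s in candidates:
        if proof_of_safety(s):
            return s
    return Successor(current_name, SAFE_ACTIONS)  # otherwise keep the current agent

good = Successor("v2", {"observe", "report"})
bad = Successor("v2-risky", {"observe", "launch"})
print(self_modify("v1", [bad, good]).name)  # -> "v2"
```

The hard part the paper studies is absent here: when the “proof” is carried out inside a formal system, Gödelian obstacles make it difficult for an agent to trust successors that reason in a system as strong as its own.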

This is the fourth of six new major reports which describe and motivate MIRI’s current research agenda at a high level.

An improved “AI Impacts” website

News

Recently, MIRI received a targeted donation to improve the AI Impacts website initially created by frequent MIRI collaborator Paul Christiano and part-time MIRI researcher Katja Grace. Collaborating with Paul and Katja, we ported the old content to a more robust and navigable platform, and made some improvements to the content. You can see the result at AIImpacts.org.

As explained in the site’s introductory blog post,

AI Impacts is premised on two ideas (at least!):

  • The details of the arrival of human-level artificial intelligence matter
    Seven years to prepare is very different from seventy years to prepare. A weeklong transition is very different from a decade-long transition. Brain emulations require different preparations than do synthetic AI minds. Etc.
  • Available data and reasoning can substantially educate our guesses about these details
    We can track progress in AI subfields. We can estimate the hardware represented by the human brain. We can detect the effect of additional labor on software progress. Etc. (A back-of-envelope version of the brain-hardware estimate is sketched below.)

Our goal is to assemble relevant evidence and considerations, and to synthesize reasonable views on questions such as when AI will surpass human-level capabilities, how rapid development will be at that point, what advance notice we might expect, and what kinds of AI are likely to reach human-level capabilities first.
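As a toy illustration of the second premise, here is a back-of-envelope estimate of the computation represented by the human brain, using commonly cited round numbers. The figures are illustrative assumptions on my part, not estimates taken from AI Impacts.

```python
# An illustrative back-of-envelope estimate of the computation represented by
# the human brain, using commonly cited round numbers (assumptions, not
# figures from AI Impacts): ~1e11 neurons, ~1e4 synapses per neuron,
# ~1e2 spikes per second, and one operation per synapse per spike.

neurons = 1e11
synapses_per_neuron = 1e4
spikes_per_second = 1e2

ops_per_second = neurons * synapses_per_neuron * spikes_per_second
print(f"~{ops_per_second:.0e} synaptic operations per second")  # ~1e+17
```

Each input to this calculation is uncertain by an order of magnitude or more, which is exactly why assembling and scrutinizing such estimates is part of the site’s agenda.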

The meat of the website is in its articles. Here are two examples to start with:
