New annotated bibliography for MIRI’s technical agenda


Today we release a new annotated bibliography accompanying our new technical agenda, written by Nate Soares. If you’d like to discuss the paper, please do so here.

Abstract:

How could superintelligent systems be aligned with the interests of humanity? This annotated bibliography compiles some recent research relevant to that question, and categorizes it into six topics: (1) realistic world models; (2) idealized decision theory; (3) logical uncertainty; (4) Vingean reflection; (5) corrigibility; and (6) value learning. Within each subject area, references are organized in an order amenable to learning the topic. These are by no means the only six topics relevant to the study of alignment, but this annotated bibliography could be used by anyone who wants to understand the state of the art in one of these six particular areas of active research.

Today we’ve also released a page that collects the technical agenda and supporting reports. See our Technical Agenda page.

New mailing list for MIRI math/CS papers only


As requested, we now offer email notification of new technical (math or computer science) papers and reports from MIRI. Simply subscribe to the new mailing list to receive these notifications.

This list sends one email per new technical paper, and contains only the paper’s title, author(s), and abstract, plus a link to the paper.


February 2015 Newsletter




Research Updates

News Updates

Other Updates

  • Top AI scientists and many others have signed an open letter advocating more research into robust and beneficial AI. The letter cites several MIRI papers.
  • Elon Musk has provided $10 million in funding for the types of research described in the open letter. The funding will be distributed in grants by the Future of Life Institute. Apply here.

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser

Executive Director



New report: “The value learning problem”


Today we release a new technical report by Nate Soares, “The value learning problem.” If you’d like to discuss the paper, please do so here.

Abstract:

A superintelligent machine would not automatically act as intended: it will act as programmed, but the fit between human intentions and formal specification could be poor. We discuss methods by which a system could be constructed to learn what to value. We highlight open problems specific to inductive value learning (from labeled training data), and raise a number of questions about the construction of systems which model the preferences of their operators and act accordingly.
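To make the inductive value learning setting concrete, here is a minimal, hypothetical sketch (not the paper’s proposal): a system fits a value model to outcomes its operators have labeled, then ranks candidate actions by predicted value. The feature names, labels, and linear model below are illustrative assumptions only.

```python
# A minimal sketch of inductive value learning: fit a value model to
# operator-labeled outcomes, then rank candidate actions by predicted value.
# The features, labels, and candidate actions are invented for illustration.

labeled_outcomes = [
    # ([human_wellbeing, resource_cost], operator_label)
    ([0.9, 0.2], 1.0),   # good outcome, cheap
    ([0.8, 0.9], 0.6),   # good outcome, expensive
    ([0.1, 0.1], 0.0),   # bad outcome
]

def predict(weights, features):
    """Predicted value of an outcome under the current linear model."""
    return sum(w * x for w, x in zip(weights, features))

def fit(data, steps=2000, lr=0.05):
    """Least-squares fit by plain gradient descent (standard library only)."""
    weights = [0.0] * len(data[0][0])
    for _ in range(steps):
        for features, label in data:
            error = predict(weights, features) - label
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights

weights = fit(labeled_outcomes)

# Candidate actions, each assumed (by some separate world model) to lead to
# an outcome with these features.
candidate_actions = {
    "action_a": [0.85, 0.3],
    "action_b": [0.40, 0.05],
}

best = max(candidate_actions, key=lambda a: predict(weights, candidate_actions[a]))
print("chosen action:", best)
```

Nothing in a sketch like this addresses what happens on outcomes far outside the labeled data, which is part of why the report treats value learning as an open problem rather than a solved one.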

This is the last of six new major reports which describe and motivate MIRI’s current research agenda at a high level.

Update May 29, 2016: A revised version of “The Value Learning Problem” (available at the original link) has been accepted to the IJCAI-16 Ethics for Artificial Intelligence workshop. The original version of the paper can be found here.

New report: “Formalizing Two Problems of Realistic World Models”


Today we release a new technical report by Nate Soares, “Formalizing two problems of realistic world models.” If you’d like to discuss the paper, please do so here.

Abstract:

An intelligent agent embedded within the real world must reason about an environment which is larger than the agent, and learn how to achieve goals in that environment. We discuss attempts to formalize two problems: one of induction, where an agent must use sensory data to infer a universe which embeds (and computes) the agent, and one of interaction, where an agent must learn to achieve complex goals in the universe. We review related problems formalized by Solomonoff and Hutter, and explore challenges that arise when attempting to formalize analogous problems in a setting where the agent is embedded within the environment.
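For readers new to the Solomonoff-style induction the abstract mentions, here is a deliberately crude, computable caricature. True Solomonoff induction weights all programs and is uncomputable; the “hypotheses” below are just repeating bit patterns, an assumption made purely for illustration. The idea it preserves is that simpler hypotheses get more prior weight, and prediction mixes the hypotheses consistent with the data.

```python
from itertools import product

# A toy caricature of Solomonoff-style induction. Real Solomonoff induction
# ranges over all programs and is uncomputable; here the "hypotheses" are
# repeating bit patterns, weighted by 2**(-length) as a simplicity prior.

def hypotheses(max_len=6):
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** (-n)

def consistent(pattern, observed):
    """Does repeating `pattern` reproduce the observed prefix exactly?"""
    stream = (pattern * (len(observed) // len(pattern) + 1))[: len(observed)]
    return stream == observed

def predict_next_bit(observed):
    """Posterior probability that the next bit is '1', mixing all
    simplicity-weighted hypotheses that fit the data so far."""
    total = mass_on_one = 0.0
    for pattern, weight in hypotheses():
        if consistent(pattern, observed):
            total += weight
            if pattern[len(observed) % len(pattern)] == "1":
                mass_on_one += weight
    return mass_on_one / total if total else 0.5

print(predict_next_bit("010101"))  # ~0.0: every surviving hypothesis says '0' next
```

Even this idealized picture assumes the reasoner sits outside the bit stream it is predicting; formalizing an agent that is computed by the very universe it models is the harder problem the report examines.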

This is the fifth of six new major reports which describe and motivate MIRI’s current research agenda at a high level.

New report: “Vingean Reflection: Reliable Reasoning for Self-Improving Agents”


Today we release a new technical report by Benja Fallenstein and Nate Soares, “Vingean Reflection: Reliable Reasoning for Self-Improving Agents.” If you’d like to discuss the paper, please do so here.

Abstract:

Today, human-level machine intelligence is in the domain of futurism, but there is every reason to expect that it will be developed eventually. Once artificial agents become able to improve themselves further, they may far surpass human intelligence, making it vitally important to ensure that the result of an “intelligence explosion” is aligned with human interests. In this paper, we discuss one aspect of this challenge: ensuring that the initial agent’s reasoning about its future versions is reliable, even if these future versions are far more intelligent than the current reasoner. We refer to reasoning of this sort as Vingean reflection.

A self-improving agent must reason about the behavior of its smarter successors in abstract terms, since if it could predict their actions in detail, it would already be as smart as them. This is called the Vingean principle, and we argue that theoretical work on Vingean reflection should focus on formal models that reflect this principle. However, the framework of expected utility maximization, commonly used to model rational agents, fails to do so. We review a body of work which instead investigates agents that use formal proofs to reason about their successors. While it is unlikely that real-world agents would base their behavior entirely on formal proofs, this appears to be the best currently available formal model of abstract reasoning, and work in this setting may lead to insights applicable to more realistic approaches to Vingean reflection.
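As a loose, invented illustration of the flavor of reasoning the abstract describes (the state space, safety criterion, and policies are toy assumptions, and exhaustive checking stands in for formal proof): the current agent never simulates its proposed successor step by step; it only checks that an abstract property holds of the successor’s behavior.

```python
# A toy caricature of verifying a successor abstractly: the current agent
# never runs its proposed successor in its actual environment; it accepts a
# successor only if a fixed criterion can be checked. Exhaustive checking
# over a tiny invented state space stands in here for formal proof.

STATES = range(10)

def safety_criterion(state, action):
    """The invariant a successor must respect: never exceed the toy budget of 5."""
    return action <= 5

def verify(policy):
    """Stand-in for proof checking: confirm the invariant on every state."""
    return all(safety_criterion(s, policy(s)) for s in STATES)

def cautious_successor(state):
    return min(state, 5)

def reckless_successor(state):
    return state * 2

for name, policy in [("cautious", cautious_successor),
                     ("reckless", reckless_successor)]:
    print(name, "accepted" if verify(policy) else "rejected")
```

In the work the report reviews, the check is a formal proof rather than an exhaustive case check; the toy above only illustrates the shape of the pattern.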

This is the fourth of six new major reports which describe and motivate MIRI’s current research agenda at a high level.

An improved “AI Impacts” website


Recently, MIRI received a targeted donation to improve the AI Impacts website initially created by frequent MIRI collaborator Paul Christiano and part-time MIRI researcher Katja Grace. Collaborating with Paul and Katja, we ported the old content to a more robust and navigable platform, and made some improvements to the content. You can see the result at AIImpacts.org.

As explained in the site’s introductory blog post,

AI Impacts is premised on two ideas (at least!):

  • The details of the arrival of human-level artificial intelligence matter
    Seven years to prepare is very different from seventy years to prepare. A weeklong transition is very different from a decade-long transition. Brain emulations require different preparations than do synthetic AI minds. Etc.
  • Available data and reasoning can substantially educate our guesses about these details
    We can track progress in AI subfields. We can estimate the hardware represented by the human brain. We can detect the effect of additional labor on software progress. Etc.

Our goal is to assemble relevant evidence and considerations, and to synthesize reasonable views on questions such as when AI will surpass human-level capabilities, how rapid development will be at that point, what advance notice we might expect, and what kinds of AI are likely to reach human-level capabilities first.

The meat of the website is in its articles.
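One premise quoted above mentions estimating “the hardware represented by the human brain.” As a rough Fermi sketch of how such an estimate is assembled (the figures below are common order-of-magnitude assumptions used for illustration, not AI Impacts’ published numbers):

```python
# Rough Fermi estimate of the computation represented by the human brain.
# Every figure is an order-of-magnitude assumption used for illustration,
# not AI Impacts' published estimate.

neurons = 1e11              # roughly 100 billion neurons (assumed)
synapses_per_neuron = 1e4   # roughly 10,000 synapses per neuron (assumed)
signal_rate_hz = 1e2        # roughly 100 signals per second per synapse (assumed)

synaptic_events_per_second = neurons * synapses_per_neuron * signal_rate_hz
print(f"~{synaptic_events_per_second:.0e} synaptic events per second")  # ~1e+17
```

Estimates of this kind vary by orders of magnitude depending on what counts as a relevant operation, which is part of why assembling the evidence carefully matters.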

New report: “Questions of reasoning under logical uncertainty”


Today we release a new technical report by Nate Soares and Benja Fallenstein, “Questions of reasoning under logical uncertainty.” If you’d like to discuss the paper, please do so here.

Abstract:

A logically uncertain reasoner would be able to reason as if they know both a programming language and a program, without knowing what the program outputs. Most practical reasoning involves some logical uncertainty, but no satisfactory theory of reasoning under logical uncertainty yet exists. A better theory of reasoning under logical uncertainty is needed in order to develop the tools necessary to construct highly reliable artificial reasoners. This paper introduces the topic, discusses a number of historical results, and describes a number of open problems.
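To give a feel for the problem, here is a deliberately artificial example (invented for this post, not the report’s formalism): the parity of the Collatz stopping time of a particular large number is a fully determined mathematical fact, yet a reasoner who treats direct computation as off-limits can still assign it a probability, for instance by sampling the same question on cheaper inputs.

```python
import random

# Toy illustration of reasoning under logical uncertainty (invented for this
# post, not the report's formalism): the answer to the question below is a
# determinate mathematical fact, but a reasoner that treats the direct
# computation as off-limits can still form a credence via a reference class.

def collatz_steps(n):
    """Number of Collatz steps to reach 1 (computed directly)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def credence_stopping_time_even(samples=2000, seed=0):
    """Estimate P(stopping time is even) from cheap, random reference inputs."""
    rng = random.Random(seed)
    hits = sum(collatz_steps(rng.randrange(2, 10**6)) % 2 == 0
               for _ in range(samples))
    return hits / samples

# The "hard" question: is the Collatz stopping time of 10**40 + 7 even?
# (Direct Collatz computation happens to be cheap even here; the point is
# forming a credence without performing it.)
print("credence that the stopping time of 10**40 + 7 is even:",
      credence_stopping_time_even())
```

One thing a satisfactory theory would need to settle is when shortcuts like this reference-class trick are justified and how such credences should change as more of the computation is actually carried out; as the abstract notes, no such theory yet exists.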

This is the third of six new major reports which describe and motivate MIRI’s current research agenda at a high level.