New report: “Toward Idealized Decision Theory”


Today we release a new technical report by Nate Soares and Benja Fallenstein, “Toward Idealized Decision Theory.” If you’d like to discuss the paper, please do so here.


This paper motivates the study of decision theory as necessary for aligning smarter-than-human artificial systems with human interests. We discuss the shortcomings of two standard formulations of decision theory, and demonstrate that they cannot be used to describe an idealized decision procedure suitable for approximation by artificial systems. We then explore the notions of strategy selection and logical counterfactuals, two recent insights into decision theory that point the way toward promising paths for future research.

This is the second of six new major reports which describe and motivate MIRI’s current research agenda at a high level. The first was our Corrigibility paper, which was accepted to the AI & Ethics workshop at AAAI-2015. We will also soon be releasing a technical agenda overview document and an annotated bibliography for this emerging field of research.

  • Alexander Appel

    Since UDT searches for proofs of the form A()=x -> U()=y in some formal system (let’s say PA), wouldn’t establishing facts about what the proof-searching process would or wouldn’t output, in advance, require something stronger than PA?

    More to the point, logical omniscience might not be attainable, because it would require the agent to also be logically omniscient about what its own logical-omniscience module would output, which seems to run into some incompleteness/halting-problem issues.

    I’m probably horribly wrong about all this, but it seems like a UDT agent will always have difficulty predicting its own output in advance due to some sort of halting/incompleteness issue, and that this uncertainty about its own output is what enables it to select among several different options.

    If I am terribly confused about how it works, please let me know.
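    The proof-search structure described in the comment above can be caricatured in code. Below is a minimal toy sketch (not MIRI’s actual formalism): it searches for statements of the form A()=a -> U()=u, but replaces proof search in PA with direct evaluation of a known, hypothetical world model — which is exactly the move that sidesteps the self-reference issues the comment raises. The decision problem and all names here are illustrative assumptions.

    ```python
    # Toy sketch of a proof-based-UDT-style decision procedure.
    # A hypothetical one-shot world model mapping actions to utilities
    # (a Newcomb-flavored example chosen purely for illustration):
    def world(action):
        return {"one_box": 1_000_000, "two_box": 1_000}[action]

    def udt_style_choice(actions, utilities):
        """Search for 'provable' implications A()=a -> U()=u and pick the
        action paired with the highest utility found. The check
        world(a) == u stands in for a proof search in a formal system,
        which is why this toy avoids the halting/incompleteness worries."""
        best = None
        for a in actions:
            for u in utilities:
                if world(a) == u:  # stand-in for: PA proves A()=a -> U()=u
                    if best is None or u > best[1]:
                        best = (a, u)
        return best[0]

    print(udt_style_choice(["one_box", "two_box"], [1_000, 1_000_000]))
    ```

    A real proof-based agent would instead enumerate proofs about its own source code, and could not simply evaluate `world(a)` for actions it will not take — that is where the logical-counterfactual problem discussed in the paper enters.
    
    
    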