Today we release a new technical report by Nate Soares and Benja Fallenstein, “Questions of reasoning under logical uncertainty.” If you’d like to discuss the paper, please do so here.
A logically uncertain reasoner may know both a programming language and a program written in it, without knowing what that program outputs. Most practical reasoning involves some logical uncertainty, but no satisfactory theory of reasoning under logical uncertainty yet exists. Such a theory is needed in order to develop the tools necessary to construct highly reliable artificial reasoners. This paper introduces the topic, surveys a number of historical results, and describes several open problems.
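As a minimal illustration of this situation (our own sketch, not an example from the report): the program below is fully known to anyone who reads it, and its output is a determined logical fact, yet a bounded reasoner who has not run the computation can only assign that output a subjective probability.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The source code above fully determines this value; uncertainty about it
# is logical uncertainty, not uncertainty about the world.
print(is_prime(2**31 - 1))
```

Before running the code, one might be genuinely unsure whether 2³¹ − 1 is prime, even though nothing about the answer is empirically contingent; that gap between what is logically determined and what a resource-bounded reasoner actually knows is the subject of the paper.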
This is the third of six new major reports that describe and motivate MIRI’s current research agenda at a high level.