We have released a new paper on logical uncertainty, co-authored by Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Leung, George Koleszarik, and Evan Lloyd: “**Asymptotic logical uncertainty and the Benford test**.”

Garrabrant gives some background on his approach to logical uncertainty on the Intelligent Agent Foundations Forum:

The main goal of logical uncertainty is to learn how to assign probabilities to logical sentences which have not yet been proven true or false.

One common approach is to change the question: assume logical omniscience, and only try to assign probabilities to sentences that are independent of your axioms (in the hope that this sheds light on the original problem). Another approach is to limit yourself to a finite set of sentences or deductive rules, and assume logical omniscience with respect to them. Yet another approach is to try to define and understand logical counterfactuals, so that you can assign probabilities to inconsistent counterfactual worlds.

One thing all three of these approaches have in common is that they grant the system (a limited form of) logical omniscience. This makes a lot of sense. We want a system that not only assigns decent probabilities, but whose behavior we can formally prove is decent. By giving the system a type of logical omniscience, you make it predictable, which allows you to prove things about it.

However, there is another way to make it possible to prove things about a logical uncertainty system. We can take a program which assigns probabilities to sentences, and let it run forever. We can then ask whether the system eventually gives good probabilities.

At first, it seems like this approach cannot work for logical uncertainty. Any machine which searches through all possible proofs will eventually give a good probability (1 or 0) to any provable or disprovable sentence. To counter this, as we give the machine more and more time to think, we have to ask it harder and harder questions.

We therefore have to analyze the machine’s behavior not on individual sentences, but on infinite sequences of sentences. For example, instead of asking whether the machine quickly assigns probability 1/10 to the claim that the 3↑↑↑↑3^{rd} digit of π is a 5, we look at the sequence:

*a*_{n}:= the probability the machine assigns at timestep 2^{n} to the *n*↑↑↑↑*n*^{th} digit of π being 5,

and ask whether or not this sequence converges to 1/10.
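To make this framing concrete, here is a toy sketch in the same spirit (not the paper’s algorithm; `toy_machine` and the finite convergence check are invented for illustration): treat a reasoner as a function from a sentence index and a time budget to a probability, form *a*_{n} by running it for 2^{n} steps, and test whether the tail of the sequence settles near 1/10.

```python
def toy_machine(n: int, time_budget: int) -> float:
    """Hypothetical probability-assigning machine: a stand-in whose
    estimate drifts toward 1/10 as its time budget grows."""
    return 0.1 + 1.0 / time_budget

def sequence(machine, num_terms: int) -> list:
    # a_n = the probability the machine assigns at timestep 2^n
    # to the n-th sentence in the sequence.
    return [machine(n, 2 ** n) for n in range(1, num_terms + 1)]

def looks_convergent(seq, target: float, tol: float, tail: int) -> bool:
    # Finite proxy for "converges to target": every one of the last
    # `tail` terms lies within `tol` of the target.
    return all(abs(a - target) < tol for a in seq[-tail:])

a = sequence(toy_machine, 30)
print(looks_convergent(a, target=0.1, tol=1e-3, tail=10))  # → True
```

Any real candidate would replace `toy_machine` with an actual reasoner; the point is only that what gets evaluated is convergence of the whole sequence, not speed on any single sentence.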

Benford’s law is the observation that the first digit in base 10 of various random numbers (e.g., random powers of 3) is likely to be small: digit *d* leads with probability log_{10}(1 + 1/*d*), so the digit 1 comes first about 30% of the time, 2 about 18% of the time, and so on; 9 is the leading digit only about 5% of the time. In their paper, Garrabrant et al. pick the *Benford test* as a concrete example of logically uncertain reasoning, similar to the π example: a machine passes the test iff its probability for “The first digit is a 1.” for the number 3 to the power *f*(*n*) converges to the correct subjective probability log_{10} 2 ≈ 30%, where *f* is a fast-growing function and *f*(*n*) cannot be quickly computed.
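Benford’s law itself is easy to check empirically. The following sketch (illustrative only; the choice of the first 1,000 powers of 3 is arbitrary) compares observed leading-digit frequencies against the prediction log_{10}(1 + 1/*d*):

```python
import math
from collections import Counter

def leading_digit(n: int) -> int:
    return int(str(n)[0])

# Leading-digit frequencies over the first 1,000 powers of 3.
counts = Counter(leading_digit(3 ** k) for k in range(1, 1001))
empirical = {d: counts[d] / 1000 for d in range(1, 10)}

# Benford's law: digit d leads with probability log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d in range(1, 10):
    print(f"{d}: observed {empirical[d]:.3f}, predicted {benford[d]:.3f}")
```

Powers of 3 follow Benford’s law because log_{10} 3 is irrational, so the fractional parts of *k* log_{10} 3 equidistribute; the hard part addressed in the paper is not the statistics but assigning these probabilities to sentences whose truth-values are fully determined, just not feasibly computable.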

Garrabrant et al.’s new paper describes an algorithm that passes the Benford test in a nontrivial way by searching for infinite sequences of sentences whose truth-values cannot be distinguished from the output of a weighted coin.

In other news, the papers “Toward idealized decision theory” and “Reflective oracles: A foundation for classical game theory” are now available on arXiv. We’ll be presenting a version of the latter paper with a slightly altered title (“Reflective oracles: A foundation for game theory in artificial intelligence”) at LORI-V next month.

**Update June 12, 2016**: “Asymptotic logical uncertainty and the Benford test” has been accepted to AGI-16.
