New papers dividing logical uncertainty into two subproblems


I’m happy to announce two new technical results related to the problem of logical uncertainty, perhaps our most significant results from the past year. In brief, these results split the problem of logical uncertainty into two distinct subproblems, each of which we can now solve in isolation. The remaining problem, in light of these results, is to find a unified set of methods that solve both at once.

The solutions for each subproblem are available in two new papers, based on work spearheaded by Scott Garrabrant: “Inductive coherence”1 and “Asymptotic convergence in online learning with unbounded delays.”2

To give some background on the problem: Modern probability theory models reasoners’ empirical uncertainty, their uncertainty about the state of a physical environment, e.g., “What’s behind this door?” However, it can’t represent reasoners’ logical uncertainty, their uncertainty about statements like “this Turing machine halts” or “the twin prime conjecture has a proof that is less than a gigabyte long.”3

Roughly speaking, if you give a classical probability distribution variables for statements that could be deduced in principle, then the axioms of probability theory force you to put probability either 0 or 1 on those statements, because you’re not allowed to assign positive probability to contradictions. In other words, modern probability theory assumes that all reasoners know all the consequences of all the things they know, even if deducing those consequences is intractable.
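
As a rough illustration of the constraint (the notation here is mine, not the papers’), the usual coherence conditions leave no room for uncertainty about any sentence that a terminating computation could settle:

```latex
% Coherent distributions satisfy (among other constraints):
\begin{align*}
  T \vdash \varphi \;&\implies\; \mathbb{P}(\varphi) = 1,\\
  \mathbb{P}(\varphi) + \mathbb{P}(\lnot\varphi) \;&=\; 1.
\end{align*}
% If a terminating computation decides \varphi, then a theory T such as
% Peano Arithmetic proves either \varphi or \lnot\varphi, so the two
% constraints together force \mathbb{P}(\varphi) \in \{0, 1\}.
```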

We want a generalization of probability theory that allows us to model reasoners that have uncertainty about statements that they have not yet evaluated. Furthermore, we want to understand how to assign “reasonable” probabilities to claims that are too expensive to evaluate.

Imagine an agent considering whether to use quicksort or mergesort to sort a particular dataset. They might know that quicksort typically runs faster than mergesort, but that doesn’t necessarily apply to the current dataset. They could in principle figure out which one uses fewer resources on this dataset, by running both of them and comparing, but that would defeat the purpose. Intuitively, they have a fair bit of knowledge that bears on the claim “quicksort runs faster than mergesort on this dataset,” but modern probability theory can’t tell us which information they should use and how.4


What does it mean for a reasoner to assign “reasonable probabilities” to claims that they haven’t computed, but could compute in principle? Without probability theory to guide us, we’re reduced to using intuition to identify properties that seem desirable, and then investigating which ones are possible. Intuitively, there are at least two properties we would want logically non-omniscient reasoners to exhibit:

1. They should be able to notice patterns in what is provable about claims, even before they can prove or disprove the claims themselves. For example, consider the claims “this Turing machine outputs an odd number” and “this Turing machine outputs an even number.” A good reasoner thinking about those claims should eventually recognize that they are mutually exclusive, and assign them probabilities that sum to at most 1, even before they can run the relevant Turing machine.

2. They should be able to notice when the sentences in some class are true with a certain frequency. For example, after observing (but not proving) that the digits of pi appear to be uniformly distributed, they should assign roughly 10% probability to “the 10^100th digit of pi is a 7” in the absence of any other information about that digit. (A toy sketch of this sort of frequency-based prediction appears just below.)
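
As a toy illustration of this second property (a sketch of my own, not an algorithm from either paper), here is a predictor that estimates a base rate from small computations it can afford to run and applies it to a large one it cannot. The pi-digit example is swapped for a simpler stand-in, the last digit of the n-th prime:

```python
# Toy frequency-based prediction about a computation too large to run
# (illustrative only; not the algorithm from either paper).

def primes(n):
    """Return the first n primes by naive trial division (fine for small n)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found if p * p <= candidate):
            found.append(candidate)
        candidate += 1
    return found

# Small computations we can afford: last digits of the first 2,000 primes.
small = primes(2000)
base_rate = sum(p % 10 == 7 for p in small) / len(small)

# A claim too expensive to settle directly: "the 10**9-th prime ends in 7".
# Lacking anything better, the frequency-based predictor uses the base rate.
print(f"P(the 10**9-th prime ends in 7) ≈ {base_rate:.3f}")  # roughly 0.25
```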

MIRI’s work on logical uncertainty this past year can be very briefly summed up as “we figured out how to get these two properties individually, but found that it is difficult to get both at once.”

Inductive Coherence

“Inductive coherence,” which I co-authored with Garrabrant, Benya Fallenstein, and Abram Demski, shows how to get the first property. The abstract reads:

While probability theory is normally applied to external environments, there has been some recent interest in probabilistic modeling of the outputs of computations that are too expensive to run. Since mathematical logic is a powerful tool for reasoning about computer programs, we consider this problem from the perspective of integrating probability and logic.

Recent work on assigning probabilities to mathematical statements has used the concept of coherent distributions, which satisfy logical constraints such as the probability of a sentence and its negation summing to one. Although there are algorithms which converge to a coherent probability distribution in the limit, this yields only weak guarantees about finite approximations of these distributions. In our setting, this is a significant limitation: Coherent distributions assign probability one to all statements provable in a specific logical theory, such as Peano Arithmetic, which can prove what the output of any terminating computation is; thus, a coherent distribution must assign probability one to the output of any terminating computation.

To model uncertainty about computations, we propose to work with approximations to coherent distributions. We introduce inductive coherence, a strengthening of coherence that provides appropriate constraints on finite approximations, and propose an algorithm which satisfies this criterion.

Given a series of provably mutually exclusive sentences, or a series of sentences where each provably implies the next, an inductively coherent predictor’s probabilities eventually start respecting this pattern. This is true even if the predictor hasn’t been able to prove that the pattern holds yet; if it would be possible in principle to eventually prove each instance of the pattern, then the inductively coherent predictor will start recognizing it “before too long,” in a specific technical sense, even if the proofs themselves are very long.
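
To give a concrete, deliberately crude picture of the first property (this sketch is mine, not the paper’s construction), imagine a predictor that starts from naive guesses and, whenever its proof search shows that two sentences are mutually exclusive, rescales its probabilities so they respect that fact:

```python
# Crude illustration of the first desideratum (not the inductively coherent
# algorithm from the paper): once two sentences are proven mutually exclusive,
# the predictor's probabilities for them should sum to at most 1.

def respect_exclusivity(probs, exclusive_pairs):
    """Scale probabilities so every provably exclusive pair sums to at most 1."""
    probs = dict(probs)
    for a, b in exclusive_pairs:
        total = probs[a] + probs[b]
        if total > 1.0:
            probs[a] /= total  # after rescaling, P(a) + P(b) == 1
            probs[b] /= total
    return probs

# Before any proofs are found, the predictor guesses independently:
guesses = {"machine outputs an odd number": 0.8,
           "machine outputs an even number": 0.7}

# Later, proof search establishes that the two claims exclude each other:
proofs_found = [("machine outputs an odd number",
                 "machine outputs an even number")]

print(respect_exclusivity(guesses, proofs_found))
# {'machine outputs an odd number': 0.533..., 'machine outputs an even number': 0.466...}
```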

Asymptotic Convergence

“Asymptotic convergence in online learning with unbounded delays,” which I co-authored with Garrabrant and Jessica Taylor, describes an algorithm with the second property. The abstract reads:

We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting.

The first property is about recognizing patterns about logical relationships between claims — saying “claim A implies claim B, so my probability on B must be at least my probability on A.” By contrast, the second property is about recognizing frequency patterns between similar claims — saying “I lack the resources to tell whether this claim is true, but 90% of similar claims have been true, so the base rate is 90%” (where part of the problem is figuring out what counts as a “similar claim”).
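
Stated a bit more symbolically (my paraphrase; this notation does not appear in the papers):

```latex
% Property 1 (logical relationships): provable implication constrains probabilities.
% Property 2 (frequencies): observed limiting frequencies pin down the
% probabilities assigned to instances the reasoner cannot yet settle.
\begin{gather*}
  \varphi \vdash \psi \;\implies\; \mathbb{P}(\psi) \ge \mathbb{P}(\varphi), \\
  \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \mathbf{1}\bigl[\varphi_n \text{ is true}\bigr] = p
  \;\implies\; \mathbb{P}(\varphi_n) \to p .
\end{gather*}
```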

In this technical report, we model the latter task as an online learning problem, where a predictor observes the behavior of many small computations and has to predict the behavior of large computations. We give an algorithm that eventually assigns the “right” probabilities to every predictable subsequence of observations, in a specific technical sense.
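
To give a rough feel for that setting (again a toy of my own, not the report’s algorithm), here is a forecaster whose feedback arrives with ever-growing delay and which is judged only on a sparse subsequence of its predictions:

```python
import random

# Toy online prediction with unbounded feedback delay (illustrative only; not
# the algorithm from the report). At step n the forecaster predicts
# P(outcome_n = 1), but the outcome of "computation m" only arrives at step
# 2 * m, so the delay grows without bound.

random.seed(0)
TRUE_RATE = 0.3  # hidden frequency the forecaster should converge to
outcomes = [random.random() < TRUE_RATE for _ in range(10_000)]

seen = []          # feedback received so far
predictions = []

for n in range(len(outcomes)):
    # Forecast from the feedback available so far (Laplace-smoothed frequency).
    predictions.append((sum(seen) + 1) / (len(seen) + 2))
    if n % 2 == 0:
        seen.append(outcomes[n // 2])  # feedback for computation n // 2 arrives now

# Score the forecaster only on a sparse subsequence of its predictions
# (here, indices 2**k), echoing the report's idea of evaluating forecasters on
# sparse subsequences rather than on average regret.
sparse = [predictions[2 ** k] for k in range(1, 14)]
print(f"latest sparse-subsequence prediction ≈ {sparse[-1]:.3f} (true rate {TRUE_RATE})")
```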


Each paper is interesting in its own right, but for us, the exciting result is that we have teased apart and formalized two separate notions of what counts as “good reasoning” under logical uncertainty, both of which are compelling.

Furthermore, our approaches to formalizing these two notions are very different. “Inductive coherence” frames the problem in the traditional “unify logic with probability” setting, whereas “Asymptotic convergence in online learning with unbounded delays” fits more naturally into the online machine learning framework. The methods we found for solving the first problem don’t appear to help with the second problem, and vice versa. In fact, the two isolated solutions appear quite difficult to reconcile. The problem that these two papers leave open is: Can we get one algorithm that satisfies both properties at once?

  1. This work was originally titled “Uniform coherence.” This post has been updated to reflect the new terminology. 
  2. Garrabrant’s IAFF forum posts provide a record of how these results were originally developed, as a response to Ray Solomonoff’s theory of algorithmic probability. “Concrete Failure of the Solomonoff Approach” and “The Entangled Benford Test” lay groundwork for the “Asymptotic convergence…” problem, a limited early version of which was featured in the “Asymptotic logical uncertainty and the Benford test” report. Inductive coherence is defined in “Uniform Coherence 2,” and an example of an inductively coherent predictor is identified in “The Modified Demski Prior is Uniformly Coherent.” 
  3. This type of uncertainty is called “logical uncertainty” mainly for historical reasons. I think of it like this: We care about agents’ ability to reason about software systems, e.g., “this program will halt.” Those claims can be expressed in sentences of logic. The question “what probability does the agent assign to this machine halting?” then becomes “what probability does this agent assign to this particular logical sentence?” The truth of these statements could be determined in principle, but the agent may not have the resources to compute the answers in practice. 
  4. For more background on logical uncertainty, see Gaifman’s “Concerning measures in first-order calculi,” Garber’s “Old evidence and logical omniscience in Bayesian confirmation theory,” Hutter, Lloyd, Ng, and Uther’s “Probabilities on sentences in an expressive logic,” and Aaronson’s “Why philosophers should care about computational complexity.”