New papers dividing logical uncertainty into two subproblems


I’m happy to announce two new technical results related to the problem of logical uncertainty, perhaps our most significant results from the past year. In brief, these results split the problem of logical uncertainty into two distinct subproblems, each of which we can now solve in isolation. The remaining problem, in light of these results, is to find a unified set of methods that solve both at once.

The solution to each subproblem is presented in one of two new papers, based on work spearheaded by Scott Garrabrant: “Inductive coherence”1 and “Asymptotic convergence in online learning with unbounded delays.”2

To give some background on the problem: Modern probability theory models reasoners’ empirical uncertainty, their uncertainty about the state of a physical environment, e.g., “What’s behind this door?” However, it can’t represent reasoners’ logical uncertainty, their uncertainty about statements like “this Turing machine halts” or “the twin prime conjecture has a proof that is less than a gigabyte long.”3

Roughly speaking, if you give a classical probability distribution variables for statements that could be deduced in principle, then the axioms of probability theory force you to put probability either 0 or 1 on those statements, because you’re not allowed to assign positive probability to contradictions. In other words, modern probability theory assumes that all reasoners know all the consequences of all the things they know, even if deducing those consequences is intractable.
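To spell the step out, here is the usual argument in miniature; the only assumptions are the standard coherence constraints over a theory T such as Peano Arithmetic:

```latex
% Sketch of the standard argument: coherence over a theory T forces
% probability 0 or 1 on every sentence that T decides.
\begin{align*}
  &\text{Coherence: } P(\varphi) + P(\neg\varphi) = 1,
   \qquad P(\psi) = 0 \ \text{whenever } T \vdash \neg\psi, \\
  &T \vdash \varphi \;\Longrightarrow\; P(\neg\varphi) = 0
    \;\Longrightarrow\; P(\varphi) = 1, \\
  &T \vdash \neg\varphi \;\Longrightarrow\; P(\varphi) = 0.
\end{align*}
% Since a theory like Peano Arithmetic proves what every terminating
% computation outputs, a coherent P leaves no room for uncertainty there.
```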

We want a generalization of probability theory that allows us to model reasoners that have uncertainty about statements that they have not yet evaluated. Furthermore, we want to understand how to assign “reasonable” probabilities to claims that are too expensive to evaluate.

Imagine an agent considering whether to use quicksort or mergesort to sort a particular dataset. They might know that quicksort typically runs faster than mergesort, but that doesn’t necessarily apply to the current dataset. They could in principle figure out which one uses fewer resources on this dataset, by running both of them and comparing, but that would defeat the purpose. Intuitively, they have a fair bit of knowledge that bears on the claim “quicksort runs faster than mergesort on this dataset,” but modern probability theory can’t tell us which information they should use and how.4
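As a purely illustrative sketch (nothing from the papers; the features, thresholds, and the 0.7 prior below are invented), here is the kind of cheap evidence such an agent might lean on instead of running both algorithms:

```python
# Purely illustrative sketch (not from the papers): guess which sort is likely
# to be faster on this dataset from cheap summary statistics, without running
# either algorithm. The features, thresholds, and prior are invented.

def predict_quicksort_faster(data, sample_size=100):
    """Return a rough probability that quicksort beats mergesort on `data`."""
    p = 0.7                                   # prior: quicksort usually wins
    step = max(1, len(data) // sample_size)
    sample = data[::step][:sample_size]       # evenly spaced, order preserved
    # Nearly-sorted input is a classic bad case for naive quicksort pivots.
    ascending = sum(a <= b for a, b in zip(sample, sample[1:]))
    if ascending > 0.9 * max(1, len(sample) - 1):
        p -= 0.4
    # Many duplicate keys can also hurt simple quicksort implementations.
    if len(set(sample)) < 0.5 * len(sample):
        p -= 0.2
    return max(0.05, min(0.95, p))

nearly_sorted = list(range(10_000))
nearly_sorted[0], nearly_sorted[-1] = nearly_sorted[-1], nearly_sorted[0]
print(predict_quicksort_faster(nearly_sorted))   # a low-ish probability
```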


What does it mean for a reasoner to assign “reasonable probabilities” to claims that they haven’t computed, but could compute in principle? Without probability theory to guide us, we’re reduced to using intuition to identify properties that seem desirable, and then investigating which ones are possible. Intuitively, there are at least two properties we would want logically non-omniscient reasoners to exhibit:

1. They should be able to notice patterns in what is provable about claims, even before they can prove or disprove the claims themselves. For example, consider the claims “this Turing machine outputs an odd number” and “this Turing machine outputs an even number.” A good reasoner thinking about those claims should eventually recognize that they are mutually exclusive, and assign them probabilities that sum to at most 1, even before they can run the relevant Turing machine.

2. They should be able to notice patterns in classes of sentences that tend to be true with a certain frequency. For example, they should assign roughly 10% probability to “the 10^100th digit of pi is a 7” in the absence of any other information about that digit, after observing (but not proving) that the digits of pi appear to be uniformly distributed. (A quick empirical check along these lines is sketched just after this list.)
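
For instance, here is a quick empirical version of that observation, using the mpmath library (the library choice and digit counts are just for illustration). Of course this only measures the base rate among digits we can compute, which is exactly the kind of evidence property 2 says a reasoner should lean on:

```python
# Quick empirical check of the base rate (illustration only): we can't compute
# the 10^100th digit of pi, but we can look at how often '7' shows up among
# digits we *can* compute, and use that frequency as a base rate.

import mpmath

mpmath.mp.dps = 10_000                 # compute pi to ~10,000 decimal digits
digits = str(+mpmath.pi).replace("3.", "", 1)[:9_000]
print(f"frequency of '7' among early digits: {digits.count('7') / len(digits):.3f}")
# Prints something close to 0.100.
```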

MIRI’s work on logical uncertainty this past year can be very briefly summed up as “we figured out how to get these two properties individually, but found that it is difficult to get both at once.”

Inductive Coherence

“Inductive coherence,” which I co-authored with Garrabrant, Benya Fallenstein, and Abram Demski, shows how to get the first property. The abstract reads:

While probability theory is normally applied to external environments, there has been some recent interest in probabilistic modeling of the outputs of computations that are too expensive to run. Since mathematical logic is a powerful tool for reasoning about computer programs, we consider this problem from the perspective of integrating probability and logic.

Recent work on assigning probabilities to mathematical statements has used the concept of coherent distributions, which satisfy logical constraints such as the probability of a sentence and its negation summing to one. Although there are algorithms which converge to a coherent probability distribution in the limit, this yields only weak guarantees about finite approximations of these distributions. In our setting, this is a significant limitation: Coherent distributions assign probability one to all statements provable in a specific logical theory, such as Peano Arithmetic, which can prove what the output of any terminating computation is; thus, a coherent distribution must assign probability one to the output of any terminating computation.

To model uncertainty about computations, we propose to work with approximations to coherent distributions. We introduce inductive coherence, a strengthening of coherence that provides appropriate constraints on finite approximations, and propose an algorithm which satisfies this criterion.

Given a series of provably mutually exclusive sentences, or a series of sentences where each provably implies the next, an inductively coherent predictor’s probabilities eventually start respecting this pattern. This is true even if the predictor hasn’t been able to prove that the pattern holds yet; if it would be possible in principle to eventually prove each instance of the pattern, then the inductively coherent predictor will start recognizing it “before too long,” in a specific technical sense, even if the proofs themselves are very long.
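As a toy illustration of what the property is asking for (this is not the paper's algorithm, and the numbers in `history` are made up), here is a check that a predictor's probability assignments to a chain of sentences, each provably implying the next, eventually respect the implication ordering:

```python
# Illustrative check (not the paper's algorithm): given a predictor's
# probability assignments over time to a chain of sentences in which each
# provably implies the next, find the step after which the assignments
# respect the implication ordering P(phi_1) <= P(phi_2) <= ... for good.

def eventually_respects_chain(history, tolerance=1e-9):
    """history[t] lists the probabilities of [phi_1, phi_2, ...] at step t.
    Returns the first step from which the ordering holds permanently, or None."""
    first_good = None
    for t, probs in enumerate(history):
        ordered = all(a <= b + tolerance for a, b in zip(probs, probs[1:]))
        if ordered and first_good is None:
            first_good = t
        elif not ordered:
            first_good = None
    return first_good

# A made-up predictor that violates the ordering early on and later corrects it.
history = [[0.9, 0.4, 0.5]] * 5 + [[0.3, 0.45, 0.6]] * 5
print(eventually_respects_chain(history))   # -> 5
```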

Asymptotic Convergence

“Asymptotic convergence in online learning with unbounded delays,” which I co-authored with Garrabrant and Jessica Taylor, describes an algorithm with the second property. The abstract reads:

We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayes-optimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting.

The first property is about recognizing patterns about logical relationships between claims — saying “claim A implies claim B, so my probability on B must be at least my probability on A.” By contrast, the second property is about recognizing frequency patterns between similar claims — saying “I lack the resources to tell whether this claim is true, but 90% of similar claims have been true, so the base rate is 90%” (where part of the problem is figuring out what counts as a “similar claim”).

In this technical report, we model the latter task as an online learning problem, where a predictor observes the behavior of many small computations and has to predict the behavior of large computations. We give an algorithm that eventually assigns the “right” probabilities to every predictable subsequence of observations, in a specific technical sense.
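Here is a toy version of that setting (not the algorithm from the paper; the delay schedule and rates are invented): outcomes come back only after an ever-growing delay, yet a forecaster that simply tracks the frequency of the feedback it has actually received still drifts toward the true rate.

```python
# Toy version of the setting (not the paper's algorithm; delay schedule and
# rates are invented). Outcomes of "computations" come back only after an
# unbounded delay; a forecaster that tracks the frequency of the feedback it
# has actually received still converges toward the true rate.

import random

random.seed(0)
T = 2_000
true_rate = 0.9
outcomes = [random.random() < true_rate for _ in range(T)]
arrival = {t: t + t // 2 + 1 for t in range(T)}   # feedback delay grows without bound

seen_true, seen_total = 0, 0
pending = {}                                      # arrival time -> outcomes revealed then
predictions = []
for t in range(T):
    predictions.append(seen_true / seen_total if seen_total else 0.5)
    pending.setdefault(arrival[t], []).append(outcomes[t])
    for outcome in pending.pop(t, []):            # process feedback arriving now
        seen_true += outcome
        seen_total += 1

print(f"average of late predictions: {sum(predictions[-100:]) / 100:.3f} "
      f"(true rate {true_rate})")
```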


Each paper is interesting in its own right, but for us, the exciting result is that we have teased apart and formalized two separate notions of what counts as “good reasoning” under logical uncertainty, both of which are compelling.

Furthermore, our approaches to formalizing these two notions are very different. “Inductive coherence” frames the problem in the traditional “unify logic with probability” setting, whereas “Asymptotic convergence in online learning with unbounded delays” fits more naturally into the online machine learning framework. The methods we found for solving the first problem don’t appear to help with the second problem, and vice versa. In fact, the two isolated solutions appear quite difficult to reconcile. The problem that these two papers leave open is: Can we get one algorithm that satisfies both properties at once?


  1. This work was originally titled “Uniform coherence”. This post has been updated to reflect the new terminology. 
  2. Garrabrant’s IAFF forum posts provide a record of how these results were originally developed, as a response to Ray Solomonoff’s theory of algorithmic probability. Concrete Failure of the Solomonoff Approach and The Entangled Benford Test lay the groundwork for the “Asymptotic convergence…” problem, a limited early version of which was featured in the “Asymptotic logical uncertainty and the Benford test” report. Inductive coherence is defined in Uniform Coherence 2, and an example of an inductively coherent predictor is identified in The Modified Demski Prior is Uniformly Coherent. 
  3. This type of uncertainty is called “logical uncertainty” mainly for historical reasons. I think of it like this: We care about agents’ ability to reason about software systems, e.g., “this program will halt.” Those claims can be expressed in sentences of logic. The question “what probability does the agent assign to this machine halting?” then becomes “what probability does this agent assign to this particular logical sentence?” The truth of these statements could be determined in principle, but the agent may not have the resources to compute the answers in practice. 
  4. For more background on logical uncertainty, see Gaifman’s “Concerning measures in first-order calculi,” Garber’s “Old evidence and logical omniscience in Bayesian confirmation theory,” Hutter, Lloyd, Ng, and Uther’s “Probabilities on sentences in an expressive logic,” and Aaronson’s “Why philosophers should care about computational complexity.” 
  • ESRogs

    > Roughly speaking, if you give a classical probability distribution variables for statements that could be deduced in principle, then the axioms of probability theory force you to put probability either 0 or 1 on those statements, because you’re not allowed to assign positive probability to contradictions.

    I think I don’t quite understand this part. Suppose I say, “screw it” and go ahead and assign probabilities to computable statements. What breaks? Where do I get in trouble?

    And what’s the difference in the case where there’s something behind a door? There is some fact of the matter about what’s behind the door. In some sense isn’t my opening the door to check also just executing a computation?

    I think there must be some core concept I’m missing.

    • Chris Warburton

      > In some sense isn’t my opening the door to check also just executing a computation?

      Opening the door provides new information, independent of what you already knew (unless you already had exact knowledge of what was behind the door). Executing a “pure” computation (e.g. a Turing Machine) doesn’t provide such independent information: there is no interaction with the outside world (i.e. no input/output of bits). Rather, the definition of the computation is an exact description of the result; it’s just in an “inconvenient” form, which may require an arbitrary amount of effort to assign an exact number to.

      I’m not sure of the technicalities of how things “break down”, but intuitively I can see how assigning a probability to such results can cause contradictions: in a sense, we already “know” the result, so assigning a probability is asserting a contradiction.

      For example, assuming Andrew Wiles wasn’t mistaken, the statement of Fermat’s Last Theorem is, in a sense, equivalent to the statement “true”. If, prior to the proof’s discovery, we had assigned a probability of, say, 0.7 to Fermat’s Last Theorem (or, equivalently, that a Turing Machine searching for its proof would halt), we would in effect be assigning that probability to the statement “true”, which already has the probability 1. Of course, without the proof, we wouldn’t know whether we’re assigning probabilities 1 and 0.7 to “true”, or 0 and 0.7 to “false”, but both lead to contradictions. For example, by reflexivity, P(“true”) = P(“true”); substituting in our assignments, 0.7 = 1; we can use arithmetic to prove not(0.7 = 1), and hence have a contradiction.

  • amacfied

    Thanks for this summary.

  • Ted Sanders

    I don’t understand the distinction. To illustrate my confusion, consider the chaotic flipping of a coin.

    Scenario 1: You flip a coin and cover the result without looking. You assign odds of 50/50.
    Scenario 2: You flip, exactly measure the initial conditions of the flip, and cover the result without looking. Because the system is chaotic, it’s hard to compute the trajectory. You still assign odds of 50/50.
    Scenario 3: You flip, exactly measure the initial conditions of the flip, and cover the result without looking. But this time you power up a supercomputer and calculate the trajectory. Before looking at the supercomputer’s results, you assign odds of 50/50.

    To me, these all feel the same. But it sounds like you categorize #1 under empirical uncertainty (because you have to gain information to lower your uncertainty) and #3 under logical uncertainty (because there is an answer that you could find with no new information despite it being hard to compute). Do you also classify #2 under logical uncertainty?

    Also, from one perspective, running an experiment or performing a measurement is just a way of doing a physical computation. And from this perspective, there seems to be no difference between empirical uncertainty and logical uncertainty.

    Clearly, though, you’ve thought about this more than I have. What do you think about my thoughts?

    • Rob Bensinger (http://www.nothingismere.com/)

      Yes, logical uncertainty also applies to, e.g., knowing the boundary conditions and dynamical laws of a deterministic universe, but not having time to compute the state the universe evolves into. Rather than drawing a sharp line between empirical and logical uncertainty and saying that probability theory only applies to empirical uncertainty, one could say that logically uncertain reasoning is about answering arbitrary questions with finite computational resources, and it approximates probability-theoretic reasoning increasingly closely as one’s confidence in all the theorems of mathematics approaches 1 and one’s confidence in all the contradictions approaches 0.