Yudkowsky on Logical Uncertainty
A paraphrased transcript of a conversation with Eliezer Yudkowsky.
Interviewer: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein tackled had to do with having uncertainty over logical truths that an agent doesn’t have enough computational power to deduce. But I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem”: that if you’re a Bayesian, you shouldn’t be 100% certain that 2 + 2 = 4, because neutrinos might be screwing with your neurons at the wrong moment and corrupting your beliefs.
Eliezer: See also “How to Convince Me That 2 + 2 = 3.”
Interviewer: Exactly. Even within a probabilistic system like a Bayes net, there are components that are deductive: certain parts must sum to a probability of one, and other logical assumptions are built into the structure of the network itself. An AI might want to have uncertainty over those too. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, or how related it is to the thing you usually mean when you talk about “the problem of logical uncertainty.”
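(Illustrative aside: here is a minimal Python sketch, not part of the exchange, of the kind of deductive constraint the interviewer is pointing at. The node names and numbers are invented; the point is only that the normalization of a conditional probability table is a logical fact the model treats as certain rather than something it assigns a probability to.)

```python
# Illustrative sketch: a conditional probability table for a binary node
# "Rain" given a binary node "Cloudy". The names and numbers are invented.
cpt_rain_given_cloudy = {
    True:  {"rain": 0.8, "no_rain": 0.2},
    False: {"rain": 0.1, "no_rain": 0.9},
}

# Each row summing to one is a deductive constraint of the formalism itself,
# not a quantity the network assigns a probability to.
for cloudy, row in cpt_rain_given_cloudy.items():
    assert abs(sum(row.values()) - 1.0) < 1e-12, f"unnormalized row: Cloudy={cloudy}"
```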
Eliezer: I think there are two issues. One comes up when you’re running programs on noisy processors. It seems like it should be fairly straightforward for human programmers to run with sufficient redundancy and do sufficient checks to drive the error probability down to almost zero, but that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility.
Then there’s the large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. That’s probably not a good long-term solution, because in the long run you’d want some criterion of action that lets the AI copy itself onto hardware that isn’t absolutely perfect, or that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to something like 2^-64, really close to zero.
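(Illustrative aside: a back-of-the-envelope Python sketch, not from the conversation, of how majority voting over redundant runs drives the error probability of a computation toward figures like 2^-64. The per-run error rate is an arbitrary assumption, and real hardware errors are not perfectly independent.)

```python
from math import comb

def voted_error_probability(p_single, runs):
    """Probability that a majority vote over `runs` independent executions
    returns the wrong answer, if each run errs with probability p_single."""
    wins_needed = runs // 2 + 1  # votes needed for the wrong answer to win
    return sum(comb(runs, k) * p_single**k * (1 - p_single)**(runs - k)
               for k in range(wins_needed, runs + 1))

# With an assumed 1-in-a-million per-run error rate, seven redundant runs
# already push the overall error probability below 2**-64 (about 5.4e-20).
for runs in (1, 3, 5, 7):
    print(runs, voted_error_probability(1e-6, runs))
```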
Interviewer: This seems like it might be different from the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?
Eliezer: When I say “logical uncertainty,” what I’m usually talking about is more like: you believe Peano Arithmetic; now assign a probability to the Gödel statement for Peano Arithmetic. Or: you haven’t checked it yet, so what’s the probability that 239,427 is a prime number?
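(Illustrative aside: a small Python sketch, not from the conversation, of how a resource-limited reasoner might assign and revise a probability for “239,427 is prime” before running a full primality test. The prior uses the standard 1/ln(n) density heuristic; everything else is an illustrative assumption.)

```python
import math

def prime_probability(n, checked_primes=()):
    """Heuristic probability that n is prime, short of a full check.

    Prior: the density of primes near n is roughly 1/ln(n). Each small prime
    we trial-divide by either exposes a factor (probability drops to zero)
    or rules out one way of being composite, so we renormalize by p/(p-1).
    """
    prob = 1.0 / math.log(n)
    for p in checked_primes:
        if n % p == 0:
            return 0.0              # found a factor: definitely composite
        prob *= p / (p - 1)         # survived the test: raise the estimate
    return min(prob, 1.0)

n = 239_427
print(prime_probability(n))           # no computation spent yet: ~0.08
print(prime_probability(n, (2,)))     # n is odd: ~0.16
print(prime_probability(n, (2, 3)))   # digits sum to 27, so 3 divides n: 0.0
```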
Interviewer: Do you see much of a relation between the two problems?
Eliezer: Not yet. The problem I usually mean by “logical uncertainty” is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running, and you’re calculating the expected utility of a self-modification relative to those complicated algorithms.
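(Illustrative aside: a toy Python sketch of that last point, not from the conversation. An agent that can only assign a subjective probability to a logical claim about its rewritten code can still compute an expected utility for the self-modification; all the numbers below are invented for illustration.)

```python
# Subjective probability that the rewritten algorithm still satisfies the
# property we care about -- a logical fact the agent can't afford to prove.
p_property_holds = 0.999

utility_if_holds = 10.0     # value of running the faster successor correctly
utility_if_fails = -1000.0  # value if the unproven property actually fails
utility_status_quo = 0.0    # value of not self-modifying at all

expected_utility_modify = (p_property_holds * utility_if_holds
                           + (1 - p_property_holds) * utility_if_fails)

# Self-modify only if the expectation beats leaving the code alone.
print(expected_utility_modify, expected_utility_modify > utility_status_quo)
```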
What you called the neutrino problem would arise even if we were dealing purely with physical uncertainty. It comes from errors in the computer chip, and it arises even in the presence of logical omniscience, when you’re building a copy of yourself on a physical computer chip that can make errors. So the neutrino problem seems a lot less ineffable. It might be that the two end up being the same problem, but that’s not obvious from what I can see.