The Lean Nonprofit

 |   |  MIRI Strategy

Can Lean Startup methods work for nonprofits?

The Lean Startup’s author, Eric Ries, seems to think so:

A startup is a human institution designed to create a new product or service under conditions of extreme uncertainty… Anyone who is creating a new product or business under conditions of extreme uncertainty is an entrepreneur whether he or she knows it or not, and whether working in a government agency, a venture-backed company, a nonprofit, or a decidedly for-profit company with financial investors.

In the past year, I helped launch one new nonprofit (the Center for Applied Rationality), massively overhauled an older nonprofit (MIRI), and consulted with many nonprofit CEOs and directors. Now I’d like to share some initial thoughts on the idea of a “Lean Nonprofit.”


Early draft of naturalistic reflection paper

 |   |  Papers

Update: See Reflection in Probabilistic Logic for more details on how this result relates to MIRI’s research mission.

In a recent blog post we described one of the results of our 1st MIRI Workshop on Logic, Probability, and Reflection:

The participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning.

In short, the result described is a “loophole” in Tarski’s undefinability theorem (1936).

An early draft of the paper describing this result is now available: download it here. Its authors are Paul Christiano (UC Berkeley), Eliezer Yudkowsky (MIRI), Marcello Herreshoff (Google), and Mihály Bárász (Google). An excerpt from the paper is included below:

Unfortunately, it is impossible for any expressive language to contain its own truth predicate True…

There are a few standard responses to this challenge.

The first and most popular is to work with meta-languages…

A second approach is to accept that some sentences, such as the liar sentence G, are neither true nor false…

Although this construction successfully dodges the “undefinability of truth” it is somewhat unsatisfying. There is no predicate in these languages to test if a sentence… is undefined, and there is no bound on the number of sentences which remain undefined. In fact, if we are specifically concerned with self-reference, then a great number of properties of interest (and not just pathological counterexamples) become undefined.

In this paper we show that it is possible to perform a similar construction over probabilistic logic. Though a language cannot contain its own truth predicate True, it can nevertheless contain its own “subjective probability” function P. The assigned probabilities can be reflectively consistent in the sense of an appropriate analog of the reflection property (1). In practice, most meaningful assertions must already be treated probabilistically, and very little is lost by allowing some sentences to have probabilities intermediate between 0 and 1.
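To give a rough sense of what “reflectively consistent” means here (this is our paraphrase, not the paper’s exact statement), the reflection schema says that whenever the outer probability assignment puts a sentence strictly inside an interval, the system assigns probability 1 to its own probability lying in that interval:

```latex
% Paraphrased reflection schema: for every sentence \varphi and all
% rationals a < b,
\[
  a < \mathbb{P}(\varphi) < b
  \;\Longrightarrow\;
  \mathbb{P}\bigl( a < P(\ulcorner \varphi \urcorner) < b \bigr) = 1
\]
% Here \mathbb{P} is the "outer" probability assignment over sentences and
% P is the probability symbol inside the language, applied to the Goedel
% number of \varphi. See the draft for the precise statement and the
% consistency proof.
```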

Another paper showing an application of this result to set theory is forthcoming.

March Newsletter

 |   |  Newsletters

 


Greetings From The Executive Director

Friends,

As previously announced on our blog, the Singularity Institute has been renamed as the Machine Intelligence Research Institute (MIRI). Naturally, both our staff and our supporters have positive associations with our original name, the “Singularity Institute.” As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past several weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general. University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

See our new website at Intelligence.org. A guide to the site is here.

Our emails have changed, too. Be sure to update your email Contacts list with our new email addresses, e.g. luke@intelligence.org. Our previous email addresses at singinst.org and singularity.org no longer work. You can see all our new email addresses on the Team page.

Cheers,

Luke Muehlhauser

Executive Director


Upcoming MIRI Research Workshops

 |   |  News

From November 11-18, 2012, we held (what we now call) the 1st MIRI Workshop on Logic, Probability, and Reflection. This workshop had four participants: Paul Christiano (UC Berkeley), Eliezer Yudkowsky (MIRI), Marcello Herreshoff (Google), and Mihály Bárász (Google).

The participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on unrestricted comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction).
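For readers who have not met the term (our gloss, not part of the workshop write-up), unrestricted comprehension is the naive schema that, in ordinary two-valued logic, immediately produces Russell’s paradox:

```latex
% Unrestricted (naive) comprehension: for every formula \varphi(x) there is
% a set containing exactly the x satisfying \varphi,
\[
  \exists y \, \forall x \, \bigl( x \in y \leftrightarrow \varphi(x) \bigr)
\]
% Taking \varphi(x) := x \notin x yields Russell's paradox classically;
% the probabilistic framework is intended to keep the schema while
% avoiding that collapse.
```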

These results suggest a similar approach may be used to work around Löb’s theorem, but this has not yet been explored. This work will be written up over the coming months.

In the meantime, MIRI is preparing for the 2nd MIRI Workshop on Logic, Probability, and Reflection, to take place from April 3-24, 2013. This workshop will be broken into two sections. The first section (Apr 3-11) will bring together the 1st workshop’s participants and 8 additional participants:

The second section (Apr 12-24) will consist solely of the 4 participants from the 1st workshop.

Participants in this 2nd workshop will continue to work on the foundations of reflective reasoning: for example, Gödelian obstacles to reflection and decision algorithms for reflective agents (e.g., TDT).

Additional MIRI research workshops are also tentatively planned for the summer and fall of 2013.

Update: An early draft of the paper describing the first result from the 1st workshop is now available here.

Welcome to Intelligence.org

 |   |  News

Welcome to the new home for the Machine Intelligence Research Institute (MIRI), formerly called “The Singularity Institute.”

The new design (from Katie Hartman, who also designed the new site for CFAR) reflects our recent shift in focus from “movement-building” to technical research. Our research and our research advisors are featured prominently on the home page, and our network of research associates is included on the Team page.

Getting involved is also clearer, with easy-to-find pages for applying to be a volunteer, an intern, a visiting fellow, or a research fellow.

Our About page hosts things like our transparency page, our top contributors list, our new press kit, and our archive of all Singularity Summit talk videos, audio, and transcripts from 2006-2012. (The Summit was recently acquired by Singularity University.)

Follow our blog to keep up with the latest news and analyses. Recent analyses include Yudkowsky on Logical Uncertainty and Yudkowsky on “What Can We Do Now?”

We’ll be adding additional content in the next few months, so stay tuned!

We are now the “Machine Intelligence Research Institute” (MIRI)

 |   |  News

When Singularity University (SU) acquired the Singularity Summit from us in December, we also agreed to change the name of our institute to avoid brand confusion between the Singularity Institute and Singularity University. After much discussion and market research, we’ve chosen our new name. We are now the Machine Intelligence Research Institute (MIRI).

Naturally, both our staff members and supporters have positive associations with our original name, the “Singularity Institute for Artificial Intelligence,” or “Singularity Institute” for short. As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past few weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general.

University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location). For example:

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

We’ll be operating from Singularity.org for a little while longer, but sometime before March 5th we’ll launch a new website, under the new name, at a new domain name: Intelligence.org. (We have thus far been unable to acquire some other fitting domain names, including miri.org.)

We’ll let you know when we’ve moved our email accounts to the new domain. All existing newsletter subscribers will continue to receive our newsletter after the name change.

Many thanks again to all our supporters who are sticking with us through this transition in branding (from SIAI to MIRI) and our transition in activities (a singular focus on research after passing our rationality work to CFAR and the Summit to SU). We hope you’ll come to like our new name as much as we do!

Machine Intelligence Research Institute

Yudkowsky on Logical Uncertainty

 |   |  Conversations

A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein tackled had to do with uncertainty over logical truths that an agent doesn’t have enough computing power to deduce. But I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem”: that if you’re a Bayesian, you shouldn’t be 100% certain that 2 + 2 = 4, because neutrinos might be screwing with your neurons at the wrong moment and corrupting your beliefs.

Eliezer: See also How to convince me that 2 + 2 = 3.

Interviewer: Exactly. Even within a probabilistic system like a Bayes net, there are components that are deductive (e.g., certain parts must sum to a probability of one), and there are other logical assumptions built into the structure of the net, and an AI might want to have uncertainty over those. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, or how related it is to the thing you usually talk about when you talk about “the problem of logical uncertainty.”
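(An illustrative aside, not part of the conversation.) Even a toy Bayes net makes the point concrete: the probabilistic content lives in the numbers, but the requirement that each distribution sums to one, and the factorization used for inference, are logical facts the model simply takes for granted. A minimal sketch:

```python
# Minimal sketch of a two-node Bayes net (Rain -> WetGrass), represented as
# plain conditional probability tables. The network's numbers are
# probabilistic; the constraints below are deductive.

p_rain = {True: 0.2, False: 0.8}

p_wet_given_rain = {
    True:  {True: 0.9, False: 0.1},   # P(WetGrass | Rain = True)
    False: {True: 0.1, False: 0.9},   # P(WetGrass | Rain = False)
}

# Deductive constraint: every distribution must sum to 1.
assert abs(sum(p_rain.values()) - 1.0) < 1e-9
for row in p_wet_given_rain.values():
    assert abs(sum(row.values()) - 1.0) < 1e-9

# Inference also leans on logical identities, e.g. marginalization:
p_wet = sum(p_rain[r] * p_wet_given_rain[r][True] for r in (True, False))
print(f"P(WetGrass = True) = {p_wet:.2f}")  # 0.2*0.9 + 0.8*0.1 = 0.26
```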

Eliezer: I think there are two issues. One comes up when you’re running programs on noisy processors, and it seems like it should be fairly straightforward for human programmers to run with sufficient redundancy, and do sufficient checks, to drive the error probability down to almost zero. But that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility.

Then there’s this large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. But that’s probably not a good long-term solution, because in the long run you’d want some criterion of action, to let the AI copy itself onto not-absolutely-perfect hardware, or hardware that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to 2⁻⁶⁴ or something — really close to 0.
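(An illustrative aside on the redundancy point above, not Yudkowsky’s own code.) Repeating an unreliable computation and taking a majority vote drives the error probability down roughly exponentially in the number of repetitions, while the cost grows only linearly:

```python
import random
from collections import Counter

def flaky_add(a, b, error_rate=0.01):
    """An unreliable adder: returns a corrupted answer with probability error_rate."""
    result = a + b
    if random.random() < error_rate:
        result += random.choice([-1, 1])  # simulate a hardware fault
    return result

def redundant_add(a, b, repetitions=5, error_rate=0.01):
    """Run the flaky computation several times and return the majority answer."""
    votes = Counter(flaky_add(a, b, error_rate) for _ in range(repetitions))
    return votes.most_common(1)[0][0]

# With error_rate = 0.01 and 5 repetitions, losing the vote requires at
# least 3 independent faults, so the failure probability is at most about
# C(5,3) * 0.01**3 = 1e-5 -- bought at 5x the computational cost.
print(redundant_add(2, 2))  # almost always 4
```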

Interviewer: This seems like it might be different from the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?

Eliezer: When I say “logical uncertainty” what I’m usually talking about is more like: you believe Peano Arithmetic; now assign a probability to the Gödel sentence for Peano Arithmetic. Or: you haven’t checked yet, so what’s the probability that 239,427 is a prime number?
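(An illustrative aside, not part of the conversation.) The second example is uncertainty you can pay computation to remove: before checking, a bounded agent assigns the claim some intermediate probability; after a one-line check, the question is settled. A trial-division sketch:

```python
def is_prime(n):
    """Trial division; plenty fast for numbers this small."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The digits of 239,427 sum to 27, which is divisible by 3, so the number
# is composite (239427 = 3 * 79809) -- but until some such check is run,
# a bounded reasoner has to treat its primality as uncertain.
print(is_prime(239427))  # False
```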

Interviewer: Do you see much of a relation between the two problems?

Eliezer: Not yet. The second problem is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running and you’re calculating expected utility of a self-modification relative to these complicated algorithms.

What you called the neutrino problem would arise even if we were dealing with physical uncertainty. It comes from errors in the computer chip. It arises even in the presence of logical omniscience, when you’re building a copy of yourself in a physical computer chip that can make errors. So the neutrino problem seems a lot less ineffable. It might be that they end up being the same problem, but that’s not obvious from what I can see.

Yudkowsky on “What can we do now?”

 |   |  Conversations

A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: Suppose you’re talking to a smart mathematician who looks like the kind of person who might have the skills needed to work on a Friendly AI team. But, he says, “I understand the general problem of AI risk, but I just don’t believe that you can know so far in advance what in particular is useful to do. Any of the problems that you’re naming now are not particularly likely to be the ones that are relevant 30 or 80 years from now when AI is developed. Any technical research we do now depends on a highly conjunctive set of beliefs about the world, and we shouldn’t have so much confidence that we can see that far into the future.” What is your reply to the mathematician?

Eliezer: I’d start by having them read a description of a particular technical problem we’re working on, for example the “Löb Problem.” I’m writing up a description of that now. So I’d show the mathematician that description and say “No, this issue of trying to have an AI write a similar AI seems like a fairly fundamental one, and the Löb Problem blocks it. The fact that we can’t figure out how to do these things — even given infinite computing power — is alarming.”
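For context (standard material, not a quote from the conversation), Löb’s theorem is the formal obstacle being referred to: a theory that proves “if I prove φ, then φ” must already prove φ outright.

```latex
% Loeb's theorem: for a theory T extending Peano Arithmetic, with a standard
% provability predicate \Box_T, and any sentence \varphi,
\[
  T \vdash \bigl( \Box_T \ulcorner \varphi \urcorner \rightarrow \varphi \bigr)
  \quad \text{implies} \quad
  T \vdash \varphi .
\]
% So T cannot endorse "whatever I prove is true" except for sentences it
% already proves, which is exactly what bites an agent trying to trust a
% successor built on the same logic.
```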

A more abstract argument would be something along the lines of, “Are you sure the same way of thinking wouldn’t prevent you from working on any important problem? Are you sure you wouldn’t be going back in time and telling Alan Turing to not invent Turing machines because who knows whether computers will really work like that? They didn’t work like that. Real computers don’t work very much like the formalism, but Turing’s work was useful anyway.”

Interviewer: You and I both know people who are very well informed about AI risk, but retain more uncertainty than you do about what the best thing to do about it today is. Maybe there are lots of other promising interventions out there, like pursuing cognitive enhancement, or doing FHI-style research looking for crucial considerations that we haven’t located yet — like Drexler discovering molecular nanotechnology, or Shulman discovering iterated embryo selection for radical intelligence amplification. Or, perhaps we should focus on putting the safety memes out into the AGI community because it’s too early to tell, again, exactly which problems are going to matter, especially if you have a longer AI time horizon. What’s your response to that line of reasoning?

Eliezer: Work on whatever your current priority is, after an hour of meta reasoning but not a year of meta reasoning.  If you’re still like, “No, no, we must think more meta” after a year, then I don’t believe you’re the sort of person who will ever act.

For example, Paul Christiano isn’t making this mistake, since Paul is working on actual FAI problems while looking for other promising interventions. I don’t have much objection to that. If he then came up with some particular intervention which he thought was higher priority, I’d ask about the specific case.

Nick Bostrom isn’t making this mistake, either. He’s doing lots of meta-strategy work, but he also does object-level work on anthropic probabilities, the parliamentary model for normative uncertainty, and other things, and he hosts people like Anders Sandberg who write papers about uploading timelines that are actually relevant to our policy decisions.

When people constantly say “maybe we should do some other thing,” I would say, “Come to an interim decision, start acting on the interim decision, and revisit this decision as necessary.” But if you’re the person who always tries to go meta and only thinks meta because there might be some better thing, you’re not ever going to actually do something about the problem.