March Newsletter


Greetings From The Executive Director

Friends,

As previously announced on our blog, the Singularity Institute has been renamed as the Machine Intelligence Research Institute (MIRI). Naturally, both our staff and our supporters have positive associations with our original name, the “Singularity Institute.” As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past several weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general. University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

See our new website at Intelligence.org. A guide to the site is here.

Our emails have changed, too. Be sure to update your email Contacts list with our new email addresses, e.g. luke@intelligence.org. Our previous email addresses at singinst.org and singularity.org no longer work. You can see all our new email addresses on the Team page.

Cheers,

Luke Muehlhauser

Executive Director


Upcoming MIRI Research Workshops


From November 11 to 18, 2012, we held (what we now call) the 1st MIRI Workshop on Logic, Probability, and Reflection. The workshop had four participants.

The participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on unrestricted comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction).
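
As a rough illustration (this sketch is our paraphrase, not the workshop's official statement; the precise formulation will appear in the forthcoming write-up), the framework assigns every sentence \varphi a probability \mathbb{P}(\varphi) and asks that the system's beliefs about its own probabilities be accurate on open intervals:

    \forall \varphi \; \forall a, b \in \mathbb{Q}: \quad a < \mathbb{P}(\varphi) < b \;\Longrightarrow\; \mathbb{P}\bigl(a < \mathbb{P}(\ulcorner \varphi \urcorner) < b\bigr) = 1

Because the inner probability is constrained only on open intervals, a liar-style sentence such as "\mathbb{P} assigns me probability less than p" can consistently receive probability exactly p, which is roughly how the construction sidesteps the classical paradoxes of self-reference.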

These results suggest a similar approach may be used to work around Löb’s theorem, but this has not yet been explored. This work will be written up over the coming months.

In the meantime, MIRI is preparing for the 2nd MIRI Workshop on Logic, Probability, and Reflection, to take place from April 3 to 24, 2013. This workshop will be broken into two sections. The first section (Apr 3-11) will bring together the 1st workshop's participants and 8 additional participants.

The second section (Apr 12-24) will consist solely of the 4 participants from the 1st workshop.

Participants in this 2nd workshop will continue to work on the foundations of reflective reasoning, including Gödelian obstacles to reflection and decision algorithms for reflective agents (e.g. TDT).

Additional MIRI research workshops are also tentatively planned for the summer and fall of 2013.

Update: An early draft of the paper describing the first result from the 1st workshop is now available here.

Welcome to Intelligence.org


Welcome to the new home for the Machine Intelligence Research Institute (MIRI), formerly called “The Singularity Institute.”

The new design (from Katie Hartman, who also designed the new site for CFAR) reflects our recent shift in focus from “movement-building” to technical research. Our research and our research advisors are featured prominently on the home page, and our network of research associates is included on the Team page.

Getting involved is also clearer, with easy-to-find pages for applying to be a volunteer, an intern, a visiting fellow, or a research fellow.

Our About page hosts things like our transparency page, our top contributors list, our new press kit, and our archive of all Singularity Summit talk videos, audio, and transcripts from 2006-2012. (The Summit was recently acquired by Singularity University.)

Follow our blog to keep up with the latest news and analyses. Recent analyses include Yudkowsky on Logical Uncertainty and Yudkowsky on “What Can We Do Now?”

We’ll be adding additional content in the next few months, so stay tuned!

We are now the “Machine Intelligence Research Institute” (MIRI)


When Singularity University (SU) acquired the Singularity Summit from us in December, we also agreed to change the name of our institute to avoid brand confusion between the Singularity Institute and Singularity University. After much discussion and market research, we’ve chosen our new name. We are now the Machine Intelligence Research Institute (MIRI).

Naturally, both our staff members and supporters have positive associations with our original name, the “Singularity Institute for Artificial Intelligence,” or “Singularity Institute” for short. As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past few weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general.

University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

We’ll be operating from Singularity.org for a little while longer, but sometime before March 5th we’ll launch a new website, under the new name, at a new domain name: Intelligence.org. (We have thus far been unable to acquire some other fitting domain names, including miri.org.)

We’ll let you know when we’ve moved our email accounts to the new domain. All existing newsletter subscribers will continue to receive our newsletter after the name change.

Many thanks again to all our supporters who are sticking with us through this transition in branding (from SIAI to MIRI) and our transition in activities (a singular focus on research after passing our rationality work to CFAR and the Summit to SU). We hope you’ll come to like our new name as much as we do!


Yudkowsky on Logical Uncertainty


A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein tackled had to do with having uncertainty over logical truths that an agent didn’t have enough computation power to deduce. But: I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem,” that if you’re a Bayesian you shouldn’t be 100% certain that 2 + 2 = 4. Because neutrinos might be screwing with your neurons at the wrong moment, and screw up your beliefs.

Eliezer: See also How to convince me that 2 + 2 = 3.

Interviewer: Exactly. Even within a probabilistic system like a Bayes net, there are components of it that are deductive, e.g., certain parts must sum to a probability of one, and there are other logical assumptions built into the structure of a Bayes net, and an AI might want to have uncertainty over those. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, and how related it is to the thing that you usually talk about when you talk about “the problem of logical uncertainty.”
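
To make the deductive constraint concrete, here is a minimal sketch (illustrative only, with made-up numbers; it is not from the conversation): every row of a conditional probability table in a Bayes net must sum to exactly 1, and a stray hardware error can silently violate that invariant.

    # Minimal illustration: the "deductive" part of a Bayes net is an invariant
    # such as "each conditional distribution sums to 1". A flipped bit in memory
    # would break the invariant without any warning from the probabilistic machinery.

    cpt = {  # hypothetical P(Rain | Season)
        "winter": {"rain": 0.6, "no_rain": 0.4},
        "summer": {"rain": 0.1, "no_rain": 0.9},
    }

    def check_normalized(table, tol=1e-9):
        """Check the logical constraint that every conditional distribution sums to 1."""
        for condition, dist in table.items():
            total = sum(dist.values())
            if abs(total - 1.0) > tol:
                raise ValueError(f"P(. | {condition}) sums to {total}, not 1")

    check_normalized(cpt)  # passes here; a corrupted entry would raise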

Eliezer: I think there are two issues. One issue comes up when you’re running programs on noisy processors, and it seems like it should be fairly straightforward for human programmers to run with sufficient redundancy and do sufficient checks to drive the error probability down to almost zero. But that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility.

Then there’s this large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. But that’s probably not a good long-term solution, because in the long run you’d want some criterion of action, to let the AI copy itself onto not-absolutely-perfect hardware, or hardware that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to 2^-64 or something — really close to 0.
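
A minimal sketch of the redundancy idea (our illustration; noisy_add is an invented stand-in for an unreliable computation): repeat the computation and take a majority vote, so that an independent per-run error probability p falls to roughly 3*p^2 with three copies, and much further with more copies or stronger checks.

    import random
    from collections import Counter

    def noisy_add(x, y, p_err=1e-3):
        """Invented stand-in for a computation on unreliable hardware:
        returns a corrupted answer with probability p_err."""
        result = x + y
        if random.random() < p_err:
            result ^= 1 << random.randrange(16)  # simulate a flipped bit
        return result

    def redundant_add(x, y, copies=3):
        """Run the computation several times and return the majority answer.
        With independent per-run error p, the voted answer is wrong with
        probability at most about 3*p**2 for copies=3."""
        votes = Counter(noisy_add(x, y) for _ in range(copies))
        answer, _ = votes.most_common(1)[0]
        return answer

    print(redundant_add(2, 2))  # 4, with very high probability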

Interviewer: This seems like it might be different than the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?

Eliezer: When I say “logical uncertainty” what I’m usually talking about is more like: you believe Peano Arithmetic; now assign a probability to Gödel’s statement for Peano Arithmetic. Or: you haven’t checked it yet, so what’s the probability that 239,427 is a prime number?
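
To make that example concrete (a quick illustrative sketch, not part of the original conversation): before running any check, a reasoner might assign the claim a prior of roughly 1/ln(239427), about 8%, based on the density of primes of that size; a moment of trial division then settles it, since 239,427 is divisible by 3.

    def is_prime(n):
        """Deterministic trial division; plenty fast for a six-digit number."""
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    # Before checking: a crude prior from the prime number theorem is ~1/ln(239427), about 8%.
    # After checking: the uncertainty collapses to 0.
    print(is_prime(239427))  # False -- 239427 = 3 * 79809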

Interviewer: Do you see much of a relation between the two problems?

Eliezer: Not yet. The second problem is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running and you’re calculating expected utility of a self-modification relative to these complicated algorithms.

What you called the neutrino problem would arise even if we were dealing only with physical uncertainty. It comes from errors in the computer chip. It arises even in the presence of logical omniscience, when you’re building a copy of yourself in a physical computer chip that can make errors. So the neutrino problem seems a lot less ineffable. It might be that they end up being the same problem, but that’s not obvious from what I can see.

Yudkowsky on “What can we do now?”


A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: Suppose you’re talking to a smart mathematician who looks like the kind of person who might have the skills needed to work on a Friendly AI team. But, he says, “I understand the general problem of AI risk, but I just don’t believe that you can know so far in advance what in particular is useful to do. Any of the problems that you’re naming now are not particularly likely to be the ones that are relevant 30 or 80 years from now when AI is developed. Any technical research we do now depends on a highly conjunctive set of beliefs about the world, and we shouldn’t have so much confidence that we can see that far into the future.” What is your reply to the mathematician?

Eliezer: I’d start by having them read a description of a particular technical problem we’re working on, for example the “Löb Problem.” I’m writing up a description of that now. So I’d show the mathematician that description and say “No, this issue of trying to have an AI write a similar AI seems like a fairly fundamental one, and the Löb Problem blocks it. The fact that we can’t figure out how to do these things — even given infinite computing power — is alarming.”
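
For context (stated here for readers who haven’t seen it; the forthcoming write-up gives the precise connection to self-modifying agents), Löb’s theorem says that for Peano Arithmetic and theories extending it, with standard provability predicate \Box:

    T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P

Informally, T can prove “if P is provable then P” only for statements P it can already prove outright, which is what makes it hard for an agent reasoning in T to trust the conclusions of a successor that uses the same or a stronger system.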

A more abstract argument would be something along the lines of, “Are you sure the same way of thinking wouldn’t prevent you from working on any important problem? Are you sure you wouldn’t be going back in time and telling Alan Turing to not invent Turing machines because who knows whether computers will really work like that? They didn’t work like that. Real computers don’t work very much like the formalism, but Turing’s work was useful anyway.”

Interviewer: You and I both know people who are very well informed about AI risk, but retain more uncertainty than you do about what the best thing to do about it today is. Maybe there are lots of other promising interventions out there, like pursuing cognitive enhancement, or doing FHI-style research looking for crucial considerations that we haven’t located yet — like Drexler discovering molecular nanotechnology, or Shulman discovering iterated embryo selection for radical intelligence amplification. Or, perhaps we should focus on putting the safety memes out into the AGI community because it’s too early to tell, again, exactly which problems are going to matter, especially if you have a longer AI time horizon. What’s your response to that line of reasoning?

Eliezer: Work on whatever your current priority is, after an hour of meta reasoning but not a year of meta reasoning.  If you’re still like, “No, no, we must think more meta” after a year, then I don’t believe you’re the sort of person who will ever act.

For example, Paul Christiano isn’t making this mistake, since Paul is working on actual FAI problems while looking for other promising interventions. I don’t have much objection to that. If he then came up with some particular intervention which he thought was higher priority, I’d ask about the specific case.

Nick Bostrom isn’t making this mistake, either. He’s doing lots of meta-strategy work, but he also does work on anthropic probabilities and the parliamentary model for normative uncertainty and other things that are object-level, and he hosts people like Anders Sandberg who write papers about uploading timelines that are actually relevant to our policy decisions.

When people constantly say “maybe we should do some other thing,” I would say, “Come to an interim decision, start acting on the interim decision, and revisit this decision as necessary.” But if you’re the person who always tries to go meta and only thinks meta because there might be some better thing, you’re not ever going to actually do something about the problem.

2012 Winter Matching Challenge a Success!


Thanks to our dedicated supporters, we met our goal for our 2012 Winter Fundraiser. Thank you!

The fundraiser ran for 45 days, from December 6, 2012 to January 20, 2013.

We met our $115,000 goal, raising a total of $230,000 for our operations in 2013.

Every donation that the Machine Intelligence Research Institute receives is powerful support for our mission — ensuring that the creation of smarter-than-human intelligence benefits human society.

New Transcript: Eliezer Yudkowsky and Massimo Pigliucci on the Intelligence Explosion


In this 2010 conversation hosted by bloggingheads.tv, Eliezer Yudkowsky and Massimo Pigliucci attempt to unpack the fundamental assumptions involved in determining the plausibility of a technological singularity.

A transcript of the conversation is now available here, thanks to Ethan Dickinson and Patrick Stevens of MIRIvolunteers.org. A video of the conversation can be found at the bloggingheads website.