Welcome to Intelligence.org

News

Welcome to the new home for the Machine Intelligence Research Institute (MIRI), formerly called “The Singularity Institute.”

The new design (from Katie Hartman, who also designed the new site for CFAR) reflects our recent shift in focus from “movement-building” to technical research. Our research and our research advisors are featured prominently on the home page, and our network of research associates are included on the Team page.

Getting involved is also clearer, with easy-to-find pages for applying to be a volunteer, an intern, a visiting fellow, or a research fellow.

Our About page hosts things like our transparency page, our top contributors list, our new press kit, and our archive of all Singularity Summit talk videos, audio, and transcripts from 2006–2012. (The Summit was recently acquired by Singularity University.)

Follow our blog to keep up with the latest news and analyses. Recent analyses include Yudkowsky on Logical Uncertainty and Yudkowsky on “What Can We Do Now?”

We’ll be adding additional content in the next few months, so stay tuned!

We are now the “Machine Intelligence Research Institute” (MIRI)

News

When Singularity University (SU) acquired the Singularity Summit from us in December, we also agreed to change the name of our institute to avoid brand confusion between the Singularity Institute and Singularity University. After much discussion and market research, we’ve chosen our new name. We are now the Machine Intelligence Research Institute (MIRI).

Naturally, both our staff members and supporters have positive associations with our original name, the “Singularity Institute for Artificial Intelligence,” or “Singularity Institute” for short. As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past few weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general.

University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

We’ll be operating from Singularity.org for a little while longer, but sometime before March 5th we’ll launch a new website, under the new name, at a new domain name: Intelligence.org. (We have thus far been unable to acquire some other fitting domain names, including miri.org.)

We’ll let you know when we’ve moved our email accounts to the new domain. All existing newsletter subscribers will continue to receive our newsletter after the name change.

Many thanks again to all our supporters who are sticking with us through this transition in branding (from SIAI to MIRI) and our transition in activities (a singular focus on research after passing our rationality work to CFAR and the Summit to SU). We hope you’ll come to like our new name as much as we do!

Machine Intelligence Research Institute

Yudkowsky on Logical Uncertainty

Conversations

A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein tackled had to do with having uncertainty over logical truths that an agent didn’t have enough computing power to deduce. But I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem”: if you’re a Bayesian, you shouldn’t be 100% certain that 2 + 2 = 4, because neutrinos might be screwing with your neurons at the wrong moment and corrupting your beliefs.

Eliezer: See also How to convince me that 2 + 2 = 3.

Interviewer: Exactly. Even within a probabilistic system like a Bayes net, there are components of it that are deductive, e.g., certain parts must sum to a probability of one, and there are other logical assumptions built into the structure of a Bayes net, and an AI might want to have uncertainty over those. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, and how related it is to the thing that you usually talk about when you talk about “the problem of logical uncertainty.”

Eliezer: I think there are two issues. One issue comes up when you’re running programs on noisy processors, and it seems like it should be fairly straightforward for human programmers to run with sufficient redundancy, and do sufficient checks, to drive the error probability down to almost zero. But that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility.
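The redundancy-and-checking idea can be illustrated with a toy sketch. This is only an illustration of majority voting over unreliable computations, not anything from the conversation or MIRI's work; `unreliable_add` and its bit-flip error model are invented for the example:

```python
import random
from collections import Counter

def unreliable_add(a, b, flip_prob=0.01):
    """Add two integers on 'noisy hardware': with probability flip_prob,
    one random low-order bit of the result gets flipped."""
    result = a + b
    if random.random() < flip_prob:
        result ^= 1 << random.randrange(8)
    return result

def redundant_add(a, b, copies=15, flip_prob=0.01):
    """Run the unreliable computation several times and majority-vote.
    The chance that a majority of independent runs err (and err in the
    same way) falls off roughly exponentially in the number of copies."""
    votes = Counter(unreliable_add(a, b, flip_prob) for _ in range(copies))
    return votes.most_common(1)[0][0]
```

With `flip_prob = 0.01` and 15 copies, a wrong majority requires at least 8 coordinated failures, which has probability below 10⁻¹² — the "drive error probability down to almost zero" regime, paid for with a 15× efficiency loss.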

Then there’s this large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. But that’s probably not a good long-term solution, because in the long run you’d want some criterion of action to let the AI copy itself onto not-absolutely-perfect hardware, or hardware that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to 2^-64 or something — really close to 0.

Interviewer: This seems like it might be different than the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?

Eliezer: When I say “logical uncertainty,” what I’m usually talking about is more like: you believe Peano Arithmetic; now assign a probability to Gödel’s statement for Peano Arithmetic. Or: you haven’t checked yet, so what’s the probability that 239,427 is a prime number?
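One toy way to make this concrete — a hypothetical sketch of my own, not a method from the conversation — is to give a number a prior probability of primality from the prime number theorem, then update on cheap trial divisions:

```python
import math

def prime_probability(n, trial_divisors=(2, 3, 5, 7, 11, 13)):
    """Assign a probability that n is prime using only cheap checks.

    Prior: by the prime number theorem, a number near n is prime with
    probability roughly 1/ln(n).  Each trial division is a logically
    certain test: a hit drives the probability to 0 (unless n is the
    divisor itself), while surviving divisor d rescales the estimate
    by d/(d-1), since a fraction 1/d of candidates was ruled out.
    """
    p = 1.0 / math.log(n)                   # density-of-primes prior
    for d in trial_divisors:
        if n % d == 0:
            return 1.0 if n == d else 0.0   # settled with certainty
        p *= d / (d - 1)                    # renormalize after the test
    return min(p, 1.0)
```

Here `prime_probability(239427)` returns 0.0 outright, since 239,427 is divisible by 3; a number that survives all the trial divisions keeps an intermediate probability, which further computation (more divisors, or a Miller–Rabin test) would sharpen.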

Interviewer: Do you see much of a relation between the two problems?

Eliezer: Not yet. The logical uncertainty problem is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running, and you’re calculating the expected utility of a self-modification relative to these complicated algorithms.

What you called the neutrino problem would arise even if we were dealing only with physical uncertainty. It comes from errors in the computer chip, and it arises even in the presence of logical omniscience, when you’re building a copy of yourself in a physical computer chip that can make errors. So the neutrino problem seems a lot less ineffable. It might be that the two end up being the same problem, but that’s not obvious from what I can see.

Yudkowsky on “What can we do now?”

Conversations

A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: Suppose you’re talking to a smart mathematician who looks like the kind of person who might have the skills needed to work on a Friendly AI team. But, he says, “I understand the general problem of AI risk, but I just don’t believe that you can know so far in advance what in particular is useful to do. Any of the problems that you’re naming now are not particularly likely to be the ones that are relevant 30 or 80 years from now when AI is developed. Any technical research we do now depends on a highly conjunctive set of beliefs about the world, and we shouldn’t have so much confidence that we can see that far into the future.” What is your reply to the mathematician?

Eliezer: I’d start by having them read a description of a particular technical problem we’re working on, for example the “Löb Problem.” I’m writing up a description of that now. So I’d show the mathematician that description and say “No, this issue of trying to have an AI write a similar AI seems like a fairly fundamental one, and the Löb Problem blocks it. The fact that we can’t figure out how to do these things — even given infinite computing power — is alarming.”
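For reference, the theorem the “Löb Problem” is named after can be stated in standard provability-logic notation, with □P read as “P is provable in the theory T”:

```latex
% Löb's theorem: if T proves that provability of P implies P,
% then T already proves P outright.
T \vdash (\Box P \to P) \;\Longrightarrow\; T \vdash P
% Internalized (provability-logic) form:
\Box(\Box P \to P) \to \Box P
```

Roughly, this is why an agent reasoning in T cannot simply trust a successor whose proofs are carried out in T: the blanket assumption “whatever the successor proves is true” collides with the theorem.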

A more abstract argument would be something along the lines of, “Are you sure the same way of thinking wouldn’t prevent you from working on any important problem? Are you sure you wouldn’t be going back in time and telling Alan Turing to not invent Turing machines because who knows whether computers will really work like that? They didn’t work like that. Real computers don’t work very much like the formalism, but Turing’s work was useful anyway.”

Interviewer: You and I both know people who are very well informed about AI risk, but retain more uncertainty than you do about what the best thing to do about it today is. Maybe there are lots of other promising interventions out there, like pursuing cognitive enhancement, or doing FHI-style research looking for crucial considerations that we haven’t located yet — like Drexler discovering molecular nanotechnology, or Shulman discovering iterated embryo selection for radical intelligence amplification. Or, perhaps we should focus on putting the safety memes out into the AGI community because it’s too early to tell, again, exactly which problems are going to matter, especially if you have a longer AI time horizon. What’s your response to that line of reasoning?

Eliezer: Work on whatever your current priority is, after an hour of meta reasoning but not a year of meta reasoning. If you’re still like, “No, no, we must think more meta” after a year, then I don’t believe you’re the sort of person who will ever act.

For example, Paul Christiano isn’t making this mistake, since Paul is working on actual FAI problems while looking for other promising interventions. I don’t have much objection to that. If he then came up with some particular intervention which he thought was higher priority, I’d ask about the specific case.

Nick Bostrom isn’t making this mistake, either. He’s doing lots of meta-strategy work, but he also does work on anthropic probabilities and the parliamentary model for normative uncertainty and other things that are object-level, and he hosts people like Anders Sandberg who write papers about uploading timelines that are actually relevant to our policy decisions.

When people constantly say “maybe we should do some other thing,” I would say, “Come to an interim decision, start acting on the interim decision, and revisit this decision as necessary.” But if you’re the person who always tries to go meta and only thinks meta because there might be some better thing, you’re not ever going to actually do something about the problem.

2012 Winter Matching Challenge a Success!

News

Thanks to our dedicated supporters, we met our goal for our 2012 Winter Fundraiser. Thank you!

The fundraiser ran for 45 days, from December 6, 2012 to January 20, 2013.

We met our $115,000 goal, raising a total of $230,000 for our operations in 2013.

Every donation that the Machine Intelligence Research Institute receives is powerful support for our mission — ensuring that the creation of smarter-than-human intelligence benefits human society.

New Transcript: Eliezer Yudkowsky and Massimo Pigliucci on the Intelligence Explosion

News

In this 2010 conversation hosted by bloggingheads.tv, Eliezer Yudkowsky and Massimo Pigliucci attempt to unpack the fundamental assumptions involved in determining the plausibility of a technological singularity.

A transcript of the conversation is now available here, thanks to Ethan Dickinson and Patrick Stevens of MIRIvolunteers.org. A video of the conversation can be found at the bloggingheads website.

January 2013 Newsletter

Newsletters

Greetings from the Executive Director

Dear friends of the Machine Intelligence Research Institute,

It’s been just over one year since I took the reins at the Machine Intelligence Research Institute. Looking back, I must say I’m proud of what we accomplished in the last year.

Consider the “top priorities for 2011-2012” from our August 2011 strategic plan. The first priority was “public-facing research on creating a positive singularity.” On this front, we did so well that MIRI had more peer-reviewed publications in 2012 than in all past years combined (well, except for the fact that some publications scheduled for 2012 have been delayed until 2013, but you can still download preprints of those publications from our research page).

Our second priority was “outreach / education / fundraising.” Outreach and education was mostly achieved through the Singularity Summit and through the new Center for Applied Rationality, which was spun out of the Machine Intelligence Research Institute but is now its own 501c3 organization running entirely from its own funding. As for fundraising: 2012 was our most successful year yet.

Our third priority was “improved organizational effectiveness.” Here, we grew by leaps and bounds throughout 2012. Throughout the year, we built our first comprehensive donor database (to improve donor relations), launched a regular newsletter (to improve public communication), instituted best practices in management and accounting throughout the organization, began tracking costs and predicted benefits for all major projects, started renting a new office in Berkeley that now bustles with activity every day, updated the design and content on our website, gained $40,000/mo of free Google AdWords directing traffic to MIRI web properties, and more.

Our fourth priority was to run our annual Singularity Summit. We were pleased not only to run our most professional Summit yet, but also to subsequently sell the Summit to Singularity University (SU). We are confident that the Summit is in good hands, and we are also pleased that SU’s acquisition of the Singularity Summit provides us with some much-needed funding to expand our research program.

That said, most of the money from the Summit acquisition is being dedicated to a special fund for Friendly AI researchers, and does not support our daily operations. For that, we need your help! Please contribute to our ongoing matching challenge, which ends January 20th!

Onward and upward,



December 2012 Newsletter

Newsletters

Greetings from the Executive Director

Dear friends of the Singularity Institute,

This month marks the biggest shift in our operations since the Singularity Summit was founded in 2006. Now that Singularity University has acquired the Singularity Summit (details below), and SI’s interests in rationality training are being developed by the now-separate Center for Applied Rationality, the Singularity Institute is making a major transition. For 12 years we’ve largely focused on movement-building — through the Singularity Summit, Less Wrong, and other programs. This work was needed to build up a community of support for our mission and a pool of potential researchers for our unique interdisciplinary work.

Now, the time has come to say “Mission Accomplished Well Enough to Pivot to Research.” Our community of supporters is now large enough that qualified researchers are available for us to hire, if we can afford to hire them. Having published 30+ research papers and dozens more original research articles on Less Wrong, we certainly haven’t neglected research. But in 2013 we plan to pivot so that a much larger share of the funds we raise is spent on research. If you’d like to help with that, please contribute to our ongoing fundraising drive.

Onward and upward,

