Facing the Intelligence Explosion ebook

News

Facing the Intelligence Explosion is now available as an ebook!

You can get it here. It is available as a “pay-what-you-want” package that includes the ebook in three formats: MOBI, EPUB, and PDF.

It is also available on Amazon Kindle (US, Canada, UK, and most others) and the Apple iBookstore (US, Canada, UK, and most others).

All sources are DRM-free. Grab a copy, share it with your friends, and review it on Amazon or the iBookstore.

All proceeds go directly to funding the technical and strategic research of the Machine Intelligence Research Institute.

The Lean Nonprofit

MIRI Strategy

Can Lean Startup methods work for nonprofits?

The Lean Startup’s author, Eric Ries, seems to think so:

A startup is a human institution designed to create a new product or service under conditions of extreme uncertainty… Anyone who is creating a new product or business under conditions of extreme uncertainty is an entrepreneur whether he or she knows it or not, and whether working in a government agency, a venture-backed company, a nonprofit, or a decidedly for-profit company with financial investors.

In the past year, I helped launch one new nonprofit (Center for Applied Rationality), I massively overhauled one older nonprofit (MIRI), and I consulted with many nonprofit CEOs and directors. Now I’d like to share some initial thoughts on the idea of a “Lean Nonprofit.”

Read more »

Early draft of naturalistic reflection paper

Papers

Update: See Reflection in Probabilistic Logic for more details on how this result relates to MIRI’s research mission.

In a recent blog post we described one of the results of our 1st MIRI Workshop on Logic, Probability, and Reflection:

The participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning.

In short, the result described is a “loophole” in Tarski’s undefinability theorem (1936).

An early draft of the paper describing this result is now available: download it here. Its authors are Paul Christiano (UC Berkeley), Eliezer Yudkowsky (MIRI), Marcello Herreshoff (Google), and Mihály Bárász (Google). An excerpt from the paper is included below:

Unfortunately, it is impossible for any expressive language to contain its own truth predicate True…

There are a few standard responses to this challenge.

The first and most popular is to work with meta-languages…

A second approach is to accept that some sentences, such as the liar sentence G, are neither true nor false…

Although this construction successfully dodges the “undefinability of truth” it is somewhat unsatisfying. There is no predicate in these languages to test if a sentence… is undefined, and there is no bound on the number of sentences which remain undefined. In fact, if we are specifically concerned with self-reference, then a great number of properties of interest (and not just pathological counterexamples) become undefined.

In this paper we show that it is possible to perform a similar construction over probabilistic logic. Though a language cannot contain its own truth predicate True, it can nevertheless contain its own “subjective probability” function P. The assigned probabilities can be reflectively consistent in the sense of an appropriate analog of the reflection property (1). In practice, most meaningful assertions must already be treated probabilistically, and very little is lost by allowing some sentences to have probabilities intermediate between 0 and 1.
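
The “reflection property (1)” mentioned in the excerpt refers to an equation in the draft. Roughly, and with notation simplified here rather than quoted from the paper, the schema says that the internal probability symbol P agrees with the actual probability assignment ℙ whenever the latter is bounded strictly away from the endpoints:

```latex
% Sketch of the reflection schema (property (1) in the draft), notation simplified.
% P is the "subjective probability" symbol inside the language L;
% \mathbb{P} is the actual probability assignment over sentences of L.
\forall \varphi \in L,\ \forall a, b \in \mathbb{Q}:\qquad
  a < \mathbb{P}(\varphi) < b
  \;\Longrightarrow\;
  \mathbb{P}\!\left( a < P(\ulcorner \varphi \urcorner) < b \right) = 1
```

The strict inequalities are what leave enough “wiggle room” for liar-style diagonal sentences to receive consistent intermediate probabilities rather than producing a contradiction.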

Another paper showing an application of this result to set theory is forthcoming.

March Newsletter

Newsletters

 


Greetings From The Executive Director

Friends,

As previously announced on our blog, the Singularity Institute has been renamed as the Machine Intelligence Research Institute (MIRI). Naturally, both our staff and our supporters have positive associations with our original name, the “Singularity Institute.” As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past several weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general. University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

See our new website at Intelligence.org. The site guide is here.

Our emails have changed, too. Be sure to update your email Contacts list with our new email addresses, e.g. luke@intelligence.org. Our previous email addresses at singinst.org and singularity.org no longer work. You can see all our new email addresses on the Team page.

Cheers,

Luke Muehlhauser

Executive Director

Read more »

Upcoming MIRI Research Workshops

News

From November 11-18, 2012, we held (what we now call) the 1st MIRI Workshop on Logic, Probability, and Reflection. This workshop had four participants: Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff, and Mihály Bárász.

The participants worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on unrestricted comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction).
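
To give a flavor of why probabilistic truth values help with unrestricted comprehension (an illustrative aside, not drawn from the forthcoming paper): classically, letting every formula define a set yields Russell’s paradox, but a coherent probability assignment can settle the paradoxical sentence at 1/2.

```latex
% Unrestricted comprehension: every formula \varphi(x) defines a set y.
\exists y \,\forall x \,\bigl( x \in y \leftrightarrow \varphi(x) \bigr)

% Taking \varphi(x) to be x \notin x gives Russell's set R with
%   R \in R \leftrightarrow R \notin R,
% a contradiction under two-valued truth. Under a coherent probability
% assignment \mathbb{P}, the equivalence only forces
%   \mathbb{P}(R \in R) = \mathbb{P}(R \notin R) = 1 - \mathbb{P}(R \in R),
% i.e. \mathbb{P}(R \in R) = 1/2, which is consistent.
```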

These results suggest a similar approach may be used to work around Löb’s theorem, but this has not yet been explored. This work will be written up over the coming months.
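
For reference, Löb’s theorem (the obstacle alluded to above) states that a theory T with a standard provability predicate □ can prove the soundness schema “□P → P” only for statements P it already proves:

```latex
% Löb's theorem, for a theory T with a standard provability predicate \Box:
\text{if } T \vdash \Box P \rightarrow P \text{, then } T \vdash P.

% Formalized version, provable within T itself:
T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P
```

This is the sense in which a formal agent cannot simply trust its own proof machinery wholesale; as noted above, whether the probabilistic approach can work around it remains unexplored.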

In the meantime, MIRI is preparing for the 2nd MIRI Workshop on Logic, Probability, and Reflection, to take place from April 3-24, 2013. This workshop will be broken into two sections. The first section (Apr 3-11) will bring together the 1st workshop’s participants and 8 additional participants.

The second section (Apr 12-24) will consist solely of the 4 participants from the 1st workshop.

Participants in this 2nd workshop will continue to work on the foundations of reflective reasoning: for example, Gödelian obstacles to reflection and decision algorithms for reflective agents (e.g., TDT).

Additional MIRI research workshops are also tentatively planned for the summer and fall of 2013.

Update: An early draft of the paper describing the first result from the 1st workshop is now available here.

Welcome to Intelligence.org

News

Welcome to the new home for the Machine Intelligence Research Institute (MIRI), formerly called “The Singularity Institute.”

The new design (from Katie Hartman, who also designed the new site for CFAR) reflects our recent shift in focus from “movement-building” to technical research. Our research and our research advisors are featured prominently on the home page, and our network of research associates is included on the Team page.

Getting involved is also clearer, with easy-to-find pages for applying to be a volunteer, an intern, a visiting fellow, or a research fellow.

Our About page hosts things like our transparency page, our top contributors list, our new press kit, and our archive of all Singularity Summit talk videos, audio, and transcripts from 2006-2012. (The Summit was recently acquired by Singularity University.)

Follow our blog to keep up with the latest news and analyses. Recent analyses include Yudkowsky on Logical Uncertainty and Yudkowsky on “What Can We Do Now?”

We’ll be adding additional content in the next few months, so stay tuned!

We are now the “Machine Intelligence Research Institute” (MIRI)

News

When Singularity University (SU) acquired the Singularity Summit from us in December, we also agreed to change the name of our institute to avoid brand confusion between the Singularity Institute and Singularity University. After much discussion and market research, we’ve chosen our new name. We are now the Machine Intelligence Research Institute (MIRI).

Naturally, both our staff members and supporters have positive associations with our original name, the “Singularity Institute for Artificial Intelligence,” or “Singularity Institute” for short. As such, any new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past few weeks, and we think it will grow on you, too.

Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general.

University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).

“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”

We’ll be operating from Singularity.org for a little while longer, but sometime before March 5th we’ll launch a new website, under the new name, at a new domain name: Intelligence.org. (We have thus far been unable to acquire some other fitting domain names, including miri.org.)

We’ll let you know when we’ve moved our email accounts to the new domain. All existing newsletter subscribers will continue to receive our newsletter after the name change.

Many thanks again to all our supporters who are sticking with us through this transition in branding (from SIAI to MIRI) and our transition in activities (a singular focus on research after passing our rationality work to CFAR and the Summit to SU). We hope you’ll come to like our new name as much as we do!


Yudkowsky on Logical Uncertainty

Conversations

A paraphrased transcript of a conversation with Eliezer Yudkowsky.

Interviewer: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein tackled had to do with having uncertainty over logical truths that an agent didn’t have enough computation power to deduce. But: I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem,” that if you’re a Bayesian you shouldn’t be 100% certain that 2 + 2 = 4. Because neutrinos might be screwing with your neurons at the wrong moment, and screw up your beliefs.

Eliezer: See also How to convince me that 2 + 2 = 3.

Interviewer: Exactly. Even within a probabilistic system like a Bayes net, there are components of it that are deductive, e.g., certain parts must sum to a probability of one, and there are other logical assumptions built into the structure of a Bayes net, and an AI might want to have uncertainty over those. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, and how related it is to the thing that you usually talk about when you talk about “the problem of logical uncertainty.”

Eliezer: I think there’s two issues. One issue comes up when you’re running programs on noisy processors, and it seems like it should be fairly straightforward for human programmers to run with sufficient redundancy, and do sufficient checks to drive an error probability down to almost zero. But that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility.

Then there’s this large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. But that’s probably not a good long-term solution, because in the long run you’d want some criterion of action, to let the AI copy itself onto not-absolutely-perfect hardware, or hardware that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to 2⁻⁶⁴ or something — really close to 0.
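
As a back-of-the-envelope illustration of the redundancy arithmetic mentioned here (this sketch is not part of the conversation, and the function name is hypothetical): with independent runs and majority voting, modest redundancy already pushes error probabilities below 2⁻⁶⁴.

```python
from math import comb

def majority_vote_error(p_single: float, k: int) -> float:
    """Probability that a k-way majority vote is wrong, assuming each of the
    k runs fails independently with probability p_single (k odd, so no ties).
    Independence is an idealization; correlated hardware faults would do worse."""
    return sum(comb(k, j) * p_single**j * (1 - p_single)**(k - j)
               for j in range(k // 2 + 1, k + 1))

# With a pessimistic 1% per-run error rate:
print(majority_vote_error(0.01, 21))   # ~3e-17
print(majority_vote_error(0.01, 31))   # ~3e-24, below 2**-64 (~5.4e-20)
```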

Interviewer: This seems like it might be different than the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?

Eliezer: When I say “logical uncertainty” what I’m usually talking about is more like, you believe Peano Arithmetic, now assign a probability to Gödel’s statement for Peano Arithmetic. Or you haven’t yet checked it, what’s the probability that 239,427 is a prime number?
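
A toy illustration of this second sense of logical uncertainty (my sketch, not part of the conversation; the helper names are hypothetical): before doing any real computation, one can give “239,427 is prime” a prior from the density of primes, update on cheap divisibility checks, and then resolve the question by actually testing.

```python
import math

def prior_prime_probability(n: int) -> float:
    """Crude prior that n is prime, from the prime number theorem density ~1/ln(n)."""
    return 1.0 / math.log(n)

def update_on_small_divisors(n: int, p_prior: float,
                             small_primes=(2, 3, 5, 7, 11)) -> float:
    """Cheap partial computation: a small divisor settles the question at 0;
    surviving the checks raises the probability (renormalizing by the fraction
    of integers that pass the same checks)."""
    for p in small_primes:
        if n % p == 0:
            return 0.0
    survive_fraction = 1.0
    for p in small_primes:
        survive_fraction *= 1 - 1 / p
    return min(1.0, p_prior / survive_fraction)

def is_prime(n: int) -> bool:
    """Full computation by trial division: resolves the uncertainty completely."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

n = 239_427
p = prior_prime_probability(n)        # ~0.08 before any checking
p = update_on_small_divisors(n, p)    # drops to 0.0, since 3 divides 239,427
print(p, is_prime(n))                 # 0.0 False
```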

Interviewer: Do you see much of a relation between the two problems?

Eliezer: Not yet. The second problem is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running and you’re calculating expected utility of a self-modification relative to these complicated algorithms.

What you called the neutrino problem would arise even if we were dealing with physical uncertainty. It comes from errors in the computer chip. It arises even in the presence of logical omniscience when you’re building a copy of yourself in a physical computer chip that can make errors. So, the second problem seems a lot less ineffable. It might be that they end up being the same problem, but that’s not obvious from what I can see.