Once again, a reporter thinks our positions are the opposite of what they are


Perhaps the most accurate media coverage the Machine Intelligence Research Institute (MIRI) has yet received was a piece by legendary science author Carl Zimmer in Playboy. To give you a sense of how inaccurate most of our media coverage is, here’s a (translated) quote from some coverage of MIRI in a Franco-German documentary:

In San Francisco however, a society of young voluntary scientists believes in the good in robots. How naive! Here at [MIRI]…

Such a quote is amusing because, of course, the Machine Intelligence Research Institute has been saying for a decade that AI will by default be harmful to humanity, that it is extremely difficult to design a “moral machine,” that neither we nor anyone else knows how to do it yet, and that dozens of approaches proposed by others are naive and insufficient for one reason or another.

Now, in a new piece for The Sunday Times, Bryan Appleyard writes:

Yudkowsky [from MIRI] seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.

“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.

Again: MIRI has been saying for a decade that it is extremely difficult to program a machine to not kill a baby. Indeed, our position is that directly programming moral norms won’t work because our values are more complex than we realize. The direct programming of moral norms is something that others have proposed, and a position we criticize. For example, here is a quote from the concise summary of our research program:

Since we have no introspective access to the details of human values, the solution to this problem probably involves designing an AI to learn human values by looking at humans, asking questions, scanning human brains, etc., rather than an AI preprogrammed with a fixed set of imperatives that sounded like good ideas at the time.

But this doesn’t mean we can simply let some advanced machine learning algorithms observe human behavior to learn our moral values, because:

The explicit moral values of human civilization have changed over time, and we regard this change as progress, and extrapolate that progress may continue in the future. An AI programmed with the explicit values of 1800 might now be fighting to reestablish slavery… Possible bootstrapping algorithms include “do what we would have told you to do if we knew everything you knew,” “do what we would’ve told you to do if we thought as fast as you did and could consider many more possible lines of moral argument,” and “do what we would tell you to do if we had your ability to reflect on and modify ourselves.” In moral philosophy, this notion of moral progress is known as reflective equilibrium.

Moving on… Appleyard’s point that “Machines would be much more radically adjusted away from human social norms, however we programmed them” is another point MIRI has been making from the very beginning. See the warnings against anthropomorphism in “Creating Friendly AI” (2001) and “Artificial Intelligence as a Positive and Negative Factor in Global Risk” (written in 2005, published in 2008). AI mind designs will be far more “alien” to us than the minds of aliens appearing in movies.

Appleyard goes on to say:

[Compared to MIRI,] the Cambridge group has a much more sophisticated grasp of these issues. Price, in particular, is aware that machines will not be subject to social pressure to behave well.

“When you think of the forms intelligence might take,” Price says, “it seems reasonable to think we occupy some tiny corner of that space and there are many ways in which something might be intelligent in ways that are nothing like our minds at all.”

What Appleyard doesn’t seem to realize, here, is that Price is basically quoting a point long stressed by MIRI researcher Eliezer Yudkowsky — that there is a huge space of possible minds, and humans only occupy a tiny corner of that space. In fact, that point is associated with MIRI and Eliezer Yudkowsky more than with anyone else! Here’s another quote from the paper Yudkowsky wrote in 2005:

The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds in general. The entire map floats in a still vaster space, the space of optimization processes… It is this enormous space of possibilities which outlaws anthropomorphism as legitimate reasoning.

So once again, a reporter thinks MIRI’s positions are the opposite of what they are.

Beware what you read in the popular media!

Update: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.

November 2012 Newsletter


Greetings from the Executive Director

Dear friends of the Machine Intelligence Research Institute,

My thanks to the dozens of staff members, contractors, and volunteers who helped make this year’s Singularity Summit our most professional and exciting Summit yet! Videos of the talks are now online, but I pity those who missed out on the live event and the killer lobby scene. We made more room in the schedule this year for mingling and networking, and everyone seemed to love it. After all, the future won’t be created merely by information and information technologies, but by the communities of people who decide to create the future together.

The Summit is a tremendous amount of work each year, and so it felt great to have so many people approach me to say, unprompted, “Wow, this is the best Summit yet!” and “You guys really took it to the next level this year; this is great!” I replayed those moments in my head on Sunday night as I drifted into the blissful coma that would repay several weeks of sleep debt.

Luke Muehlhauser


September 2012 Newsletter


Greetings from the Executive Director

August was a busy month for the Machine Intelligence Research Institute. Thanks to our successful summer fundraiser, we are running full steam ahead on all fronts: Singularity Summit 2012, the launch of CFAR, increased research output (see below), and improving organizational efficiency in literally dozens of ways.

Thank you for your continued support as we work toward a positive Singularity.

Luke Muehlhauser


August 2012 Newsletter


This newsletter was sent to newsletter subscribers in early August 2012.

Greetings from the Executive Director

The big news this month is that we surpassed our fundraising goal of raising $300,000 in the month of July. My thanks to everyone who donated! Your contributions will help us finish launching CFAR and begin to build a larger and more productive research team working on some of the most important research problems in the world.

Luke Muehlhauser


July 2012 Newsletter


This newsletter was sent out to Machine Intelligence Research Institute newsletter subscribers in July 2012.

Greetings from the Executive Director

Luke Muehlhauser

Friends of the Machine Intelligence Research Institute,

Greetings! Our new monthly newsletter will bring you the latest updates from the Machine Intelligence Research Institute (MIRI). (You can read earlier monthly progress updates here.)

These are exciting times at MIRI. We just launched our new website, and also the website for the Center for Applied Rationality. We have several research papers under development, and after a long hiatus from AI research, researcher Eliezer Yudkowsky is planning a new sequence of articles on “Open Problems in Friendly AI.”

We have also secured $150,000 in matching funds for a new fundraising drive. To help support us in our work toward a positive Singularity, please donate today and have your gift doubled!

Luke Muehlhauser
Machine Intelligence Research Institute Executive Director


2012 Summer Singularity Challenge Success!


Thanks to the effort of our donors, the 2012 Summer Singularity Challenge has been met! All $150,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $300,000 to fund the Machine Intelligence Research Institute’s operations. We reached our goal at around 6pm on July 29th.

On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Your dollars make the difference.

Here’s to a better future for the human species.

2012 Summer Singularity Challenge


Thanks to the generosity of several major donors, every donation to the Machine Intelligence Research Institute made now until July 31, 2012 will be matched dollar-for-dollar, up to a total of $150,000!

Donate Now!

Now is your chance to double your impact while helping us raise up to $300,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!

Note: If you prefer to support rationality training, you are welcome to earmark your donations for “CFAR” (Center for Applied Rationality). Donations earmarked for CFAR will only be used for CFAR, and donations not earmarked for CFAR will only be used for Singularity research and outreach.

Since we published our strategic plan in August 2011, we have achieved most of the near-term goals outlined therein.

In the coming year, the Machine Intelligence Research Institute plans to do the following:

  • Hold our annual Singularity Summit, this year in San Francisco!
  • Spin off the Center for Applied Rationality as a separate organization focused on rationality training, so that the Machine Intelligence Research Institute can focus more exclusively on Singularity research and outreach.
  • Publish additional research on AI risk and Friendly AI.
  • Eliezer will write an “Open Problems in Friendly AI” sequence for Less Wrong. (For news on his rationality books, see here.)
  • Finish Facing the Singularity and publish ebook versions of Facing the Singularity and The Sequences, 2006-2009.
  • And much more! For details on what we might do with additional funding, see How to Purchase AI Risk Reduction.

If you’re planning to earmark your donation to CFAR (Center for Applied Rationality), here’s a preview of what CFAR plans to do in the next year:

  • Develop additional lessons teaching the most important and useful parts of rationality. CFAR has already developed and tested over 18 hours of lessons, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more.
  • Run immersive rationality retreats to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you came? (1-10)” was a 9.4. (We don’t have the June data yet, but people were similarly enthusiastic about that one.)
  • Run SPARC, a camp on the advanced math of rationality for mathematically gifted high school students. CFAR has a stellar first-year class for SPARC 2012; most students admitted to the program placed in the top 50 on the USA Math Olympiad (or performed equivalently in a similar contest).
  • Collect longitudinal data on the effects of rationality training, to improve our curriculum and to generate promising hypotheses to test and publish, in collaboration with other researchers. CFAR has already launched a one-year randomized controlled study tracking reasoning ability and various metrics of life success, using participants in our June minicamp and a control group.
  • Develop apps and games about rationality, with the dual goals of (a) helping aspiring rationalists practice essential skills, and (b) making rationality fun and intriguing to a much wider audience. CFAR has two apps in beta testing: one training players to update their own beliefs the right amount after hearing other people’s beliefs, and another training players to calibrate their level of confidence in their own beliefs. CFAR is working with a developer on several more games training people to avoid cognitive biases.
  • And more!
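To make the calibration idea above concrete: a standard way to score how well stated confidence matches reality is a proper scoring rule such as the Brier score. The sketch below is purely illustrative (it is not CFAR’s actual app code, whose internals are not described here); it just shows why overconfidence is penalized relative to honest uncertainty.

```python
# Illustrative only: scoring confidence calibration with the Brier score.
def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs, where
    outcome is 1 if the belief turned out true, else 0.
    Lower scores are better; 0.0 is a perfect score."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A forecaster who is fully confident and always right scores 0.0:
print(brier_score([(1.0, 1), (1.0, 1)]))  # 0.0

# Overconfidence is penalized: claiming 90% confidence on beliefs that
# turn out right only half the time scores worse than saying 50%.
overconfident = [(0.9, 1), (0.9, 0)]
calibrated = [(0.5, 1), (0.5, 0)]
print(brier_score(overconfident) > brier_score(calibrated))  # True
```

A calibration-training game can use a rule like this to reward players whose stated confidence levels track how often they are actually right.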

We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

† $150,000 of total matching funds has been provided by Jaan Tallinn, Tomer Kagan, Alexei Andreev, and Brandon Reinhart.

Machine Intelligence Research Institute Progress Report, May 2012


Past progress reports: April 2012, March 2012, February 2012, January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in May 2012:

  • How to Purchase AI Risk Reduction: Luke wrote a series of posts on how to purchase AI risk reduction, with cost estimates for many specific projects. Some projects are currently in place at SI; others can be launched if we are able to raise sufficient funding.
  • Research articles: Luke continued to work with about a dozen collaborators on several developing research articles, including “Responses to Catastrophic AGI Risk,” mentioned here.
  • Other writings: Kaj Sotala, with help from Luke and many others, published How to Run a Successful Less Wrong Meetup Group. Carl published several articles: (1) Utilitarianism, contractualism, and self-sacrifice, (2) Philosophers vs. economists on discounting, (3) Economic growth: more costly disasters, better prevention, and (4) What to eat during impact winter? Eliezer wrote Avoid Motivated Cognition. Luke posted part 2 of his dialogue with Ben Goertzel about AGI.
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new annual report, and new newsletter design. Louie and SI’s new executive assistant Ioven Fables are hard at work on organizational development and transparency (some of which will be apparent when the new website launches).
  • Center for Applied Rationality (CFAR): The CFAR team continued to make progress toward spinning off this rationality-centric organization, in keeping with SI’s strategic plan. We also held the first summer minicamp, which surpassed our expectations and was very positively received. (More details on this will be compiled later.)
  • Meetings with advisors, supporters, and potential researchers: As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in May 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, David Althaus, Tim Oertel, and Casey Pfluger. Thanks everyone! (And, our apologies if we forgot to name you!)