August 2012 Newsletter


This newsletter was sent to subscribers in early August 2012.

Greetings from the Executive Director

The big news this month is that we surpassed our goal of raising $300,000 during the month of July. My thanks to everyone who donated! Your contributions will help us finish launching CFAR and begin to build a larger and more productive research team working on some of the most important research problems in the world.

Luke Muehlhauser

Singularity Summit Prices Will Increase on August 15th!

Two-day ticket prices for the Singularity Summit 2012 are still only $635, but they will increase again on August 15th! If you're interested in hearing some of the foremost speakers on science, technology, and the future of humanity, buy your ticket today for our international conference at the Nob Hill Masonic Center in San Francisco on October 13-14!

2012 Summer Singularity Challenge Success!

Thanks to the efforts of our donors, the 2012 Summer Singularity Challenge has been met! All $150,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $300,000 to fund the Machine Intelligence Research Institute’s operations. We reached our goal around 6pm on July 29th. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Your dollars make the difference. Here’s to a better future for the human species.

Facing the Singularity Finished

Luke Muehlhauser has now published the final chapters of his introductory blog series on the coming of AI, Facing the Singularity. The penultimate chapter explains what can be done to improve our odds of a positive singularity, and the final chapter outlines the benefits we can expect from one.

Comparison of the August 2011 strategic plan to today

Progress updates are nice, but without a previously defined metric for success it’s hard to know whether an organization’s achievements are noteworthy or not. Is the Machine Intelligence Research Institute making good progress, or underwhelming progress? Luckily, in August 2011 we published a strategic plan that outlined lots of specific goals. It’s now August 2012, so we can check our progress against the standard set nearly one year ago. The full comparison is available here; the final section is excerpted below:

Now let’s check in on what we said our top priorities for 2011-2012 were:

  1. Public-facing research on creating a positive singularity. Check. SI has more peer-reviewed publications in 2012 than in all past years combined.
  2. Outreach / education / fundraising. Check. Especially through CFAR.
  3. Improved organizational effectiveness. Check. Lots of good progress on this.
  4. Singularity Summit. Check.

In summary, I think SI is a bit behind where I hoped we’d be by now, though this is largely because we’ve poured so much into launching CFAR; as a result, CFAR has turned out to be significantly cooler at launch than I had anticipated.

SI Publishes Solomonoff Induction Tutorial

Visiting Fellow Alex Altair worked with Luke Muehlhauser to publish An Intuitive Explanation of Solomonoff Induction, a sequel to Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem. Whereas Bayes’ Theorem is a key idea in probability theory, Solomonoff Induction is a key idea in the study of universal, automated inference. It begins:

People disagree about things. Some say that television makes you dumber; others say it makes you smarter. Some scientists believe life must exist elsewhere in the universe; others believe it must not. Some say that complicated financial derivatives are essential to a modern competitive economy; others think a nation’s economy will do better without them. It’s hard to know what is true.

And it’s hard to know how to figure out what is true.  Some argue that you should assume the things you are most certain about and then deduce all other beliefs from your original beliefs. Others think you should accept at face value the most intuitive explanations of personal experience. Still others think you should generally agree with the scientific consensus until it is disproved.

Wouldn’t it be nice if determining what is true was like baking a cake? What if there was a recipe for finding out what is true? All you’d have to do is follow the written directions exactly, and after the last instruction you’d inevitably find yourself with some sweet, tasty truth!

In this tutorial, we’ll explain the closest thing we’ve found so far to a recipe for finding truth: Solomonoff induction.
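For readers who would like a taste of the formalism before diving in, here is a rough sketch in our own notation (not quoted from the tutorial): Bayes’ Theorem tells you how to update your confidence in a hypothesis h after seeing data d, while Solomonoff induction supplies the prior by weighting a string x according to the lengths ℓ(p) of the programs p that produce it on a universal prefix Turing machine U:

$$P(h \mid d) = \frac{P(d \mid h)\,P(h)}{P(d)}, \qquad m(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}$$

Shorter programs receive exponentially more weight, which formalizes the intuition that simpler hypotheses consistent with the data should be preferred.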

Dialogue with Bill Hibbard about AGI

Luke Muehlhauser has published a dialogue about AI safety between himself and computer scientist Bill Hibbard, author of Super-Intelligent Machines. The dialogue is part of Luke’s series of interviews on the subject.

Featured Donor: Robin Powell

Below is an interview with this month’s featured donor, Robin Powell.

Luke Muehlhauser: Robin, you’ve been donating $200 a month since August 2004. That adds up to more than $20,000, making you our 8th largest publicly listed donor! Why do you support the Machine Intelligence Research Institute like this?

Robin Powell: I honestly believe that a beneficial Singularity is the best hope that humanity has for long-term survival. Having spent hundreds of hours researching the various people and groups that are actively working on Singularity-related issues, the Machine Intelligence Research Institute is the only one that I really feel has their eyes on the right ball, which is the Friendly AI problem. I feel confident that my donations are the most effective way I can possibly aid in the best possible future for humanity.

Luke: What do you give up each month in order to donate $200/month to the Machine Intelligence Research Institute?

Robin: Mostly, when things got complicated, I’ve been able to get by through re-budgeting, but I’ve had to do that rather a lot more often than I would have otherwise.

Luke: What challenges have you faced since August 2004, while continuing to donate $200 a month?

Robin: The hardest time by far was when I took a couple of months off, without pay, to help my aging father; the extra money would really have helped then. But for me it’s about expected return: when the future of the human race is in the balance, having to borrow briefly from friends, or similar hardships, seems pretty inconsequential.

Luke: What one thought would you most like to share with the community of people who care about reducing existential risks?

Robin: AI is coming, relatively soon. There is no more important task for humanity than to prevent our extinction and preserve a better version of our values. Now is the time to spend time and money protecting the future of humanity. Please help us.

Luke: Thanks for your time, Robin, and thanks for your continued support!

Featured Summit Video

This month we are featuring a video from the 2006 Singularity Summit: Eliezer Yudkowsky’s “The Human Importance of the Intelligence Explosion.” Eliezer’s talk discusses I.J. Good’s concept of an “intelligence explosion” and its central importance for the human species.

Use GoodSearch, support the Machine Intelligence Research Institute

GoodSearch, which allows you to donate to a cause merely by using their search engine, now has a donation option for the Machine Intelligence Research Institute. Use GoodSearch to donate every day without opening your wallet!