New Paper: “Why We Need Friendly AI”

Papers

A new paper by Luke Muehlhauser and Nick Bostrom, “Why We Need Friendly AI,” has been published in the latest issue of the Cambridge journal Think: Philosophy for Everyone.

Abstract:

Humans will not always be the most intelligent agents on Earth, the ones steering the future. What will happen to us when we no longer play that role, and how can we prepare for this transition?

This paper is copyrighted by Cambridge University Press, and is reproduced on MIRI’s website with permission. The paper appears in Volume 13, Issue 36.

MIRI’s December 2013 Newsletter

Newsletters

Machine Intelligence Research Institute

Dear friends,

We’re still experimenting with our new, ultra-brief newsletter style. Please tell us what you think of it by replying to this email. Thanks!

MIRI’s winter 2013 matching fundraiser is on! All donations made by January 15th will be matched by Peter Thiel, with 3x matching for “new large donors”: if you’ve given less than $5,000 total in the past, and now give or pledge $5,000 or more, your impact will be quadrupled! Reply to this email if you’d like to ask whether you qualify for 3x matching.

Here is a recent snapshot of our fundraising progress:


Other News Updates

Research Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director

Scott Aaronson on Philosophical Progress

Conversations

Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. Before that, he did a PhD in computer science at UC Berkeley and postdocs at the Institute for Advanced Study in Princeton and at the University of Waterloo. His research focuses on the capabilities and limits of quantum computers, and more generally on the connections between computational complexity and physics. Aaronson is known for his blog and for founding the Complexity Zoo (an online encyclopedia of complexity classes); he’s also written about quantum computing for Scientific American and the New York Times. His first book, Quantum Computing Since Democritus, was published this year by Cambridge University Press. He’s received the Alan T. Waterman Award, the PECASE Award, and MIT’s Junior Bose Award for Excellence in Teaching.

Luke Muehlhauser: Though you’re best known for your work in theoretical computer science, you’ve also produced some pretty interesting philosophical work, e.g. in Quantum Computing Since Democritus, “Why Philosophers Should Care About Computational Complexity,” and “The Ghost in the Quantum Turing Machine.” You also taught a fall 2011 MIT class on Philosophy and Theoretical Computer Science.

Why are you so interested in philosophy? And what is the social value of philosophy, from your perspective?


Scott Aaronson: I’ve always been reflexively drawn to the biggest, most general questions that it seemed possible to ask. You know: Are we living in a computer simulation? If not, could we upload our consciousnesses into one? Are there discrete “pixels” of spacetime? Why does it seem impossible to change the past? Could there be different laws of physics where 2+2 equaled 5? Are there objective facts about morality? What does it mean to be rational? Is there an explanation for why I’m alive right now, rather than some other time? What are explanations, anyway? In fact, what really perplexes me is when I meet a smart, inquisitive person—let’s say a mathematician or scientist—who claims NOT to be obsessed with these huge issues! I suspect many MIRI readers might feel drawn to such questions the same way I am, in which case there’s no need to belabor the point.

From my perspective, then, the best way to frame the question is not: “why be interested in philosophy?” Rather it’s: “why be interested in anything else?”

But I think the latter question has an excellent answer. A crucial thing humans learned, starting around Galileo’s time, is that even if you’re interested in the biggest questions, usually the only way to make progress on them is to pick off smaller subquestions: ideally, subquestions that you can attack using math, empirical observation, or both. For again and again, you find that the subquestions aren’t nearly as small as they originally looked! Much like with zooming in to the Mandelbrot set, each subquestion has its own twists and tendrils that could occupy you for a lifetime, and each one gives you a new perspective on the big questions. And best of all, you can actually answer a few of the subquestions, and be the first person to do so: you can permanently move the needle of human knowledge, even if only by a minuscule amount. As I once put it, progress in math and science — think of natural selection, Gödel’s and Turing’s theorems, relativity and quantum mechanics — has repeatedly altered the terms of philosophical discussion, as philosophical discussion itself has rarely altered them! (Of course, this is completely leaving aside math and science’s “fringe benefit” of enabling our technological civilization, which is not chickenfeed either.)


2013 Winter Matching Challenge

News

Thanks to Peter Thiel, every donation made to MIRI between now and January 15th, 2014 will be matched dollar-for-dollar!

Also, gifts from “new large donors” will be matched 3x! That is, if you’ve given less than $5k total to SIAI/MIRI in the past, and you now give or pledge $5k or more, Thiel will donate $3 for every dollar you give or pledge.

We don’t know whether we’ll be able to offer the 3:1 matching ever again, so if you’re capable of giving $5k or more, we encourage you to take advantage of the opportunity while you can. Remember that:

  • If you prefer to give monthly, no problem! If you pledge 6 months of monthly donations, your full 6-month pledge will be the donation amount to be matched. So if you give monthly, you can get 3:1 matching for only $834/mo (or $417/mo if you get matching from your employer).
  • We accept Bitcoin (BTC) and Ripple (XRP), both of which have recently jumped in value. If the market value of your Bitcoin or Ripple is $5k or more on the day you make the donation, this will count for matching.
  • If your employer matches your donations at 1:1 (check here), then you can take advantage of Thiel’s 3:1 matching by giving as little as $2,500 (because it’s $5k after corporate matching).
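
For concreteness, here is a minimal sketch of how these rules combine. The function is our own illustration (not an official calculator), and it assumes that Thiel’s match applies to the amount after any 1:1 corporate match:

```python
# A minimal sketch of the matching arithmetic above. The function name
# is ours, and we assume (for illustration) that Thiel's match applies
# to the amount after any 1:1 corporate match.

THRESHOLD = 5_000  # "new large donor" cutoff, in dollars

def matched_total(donation, employer_match=False, past_giving=0):
    """Total received by MIRI: your gift, any employer match, and Thiel's match."""
    gift = donation * (2 if employer_match else 1)  # corporate 1:1 match
    # 3:1 for new large donors at or above the cutoff, otherwise dollar-for-dollar.
    ratio = 3 if past_giving < THRESHOLD and gift >= THRESHOLD else 1
    return gift + ratio * gift

print(matched_total(5_000))                       # new large donor -> 20000
print(matched_total(2_500, employer_match=True))  # $5k after employer match -> 20000
print(matched_total(834 * 6))                     # 6-month pledge of $5,004 -> 20016
```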

Please email malo@intelligence.org if you intend to leverage corporate matching or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions toward the fundraiser.

Thiel’s total match is capped at $250,000. The total amount raised will depend on how many people take advantage of 3:1 matching. We don’t anticipate being able to hit the $250k cap without substantial use of 3:1 matching — so if you haven’t given $5k thus far, please consider giving/pledging $5k or more during this drive. (If you’d like to know the total amount of your past donations to MIRI, just ask malo@intelligence.org.)

[Fundraiser progress bar: $0 to $250k]

We have reached our matching total of $250,000!

Now is your chance to double or quadruple your impact in funding our research program.

 

Accomplishments Since Our July 2013 Fundraiser Launched:

How Will Marginal Funds Be Used?

  • Hiring Friendly AI researchers, identified through our workshops, as they become available for full-time work at MIRI.
  • Running more workshops (next one begins Dec. 14th), to make concrete Friendly AI research progress, to introduce new researchers to open problems in Friendly AI, and to identify candidates for MIRI to hire.
  • Describing more open problems in Friendly AI. Our current strategy is for Yudkowsky to explain them as quickly as possible via Facebook discussion, followed by more structured explanations written by others in collaboration with Yudkowsky.
  • Improving humanity’s strategic understanding of what to do about superintelligence. In the coming months this will include (1) additional interviews and analyses on our blog, (2) a reader’s guide for Nick Bostrom’s forthcoming Superintelligence book, and (3) an introductory ebook currently titled Smarter Than Us.

Other projects are still being surveyed for likely cost and impact.

We appreciate your support for our work! Donate now, and seize a better-than-usual chance to move our work forward. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

New Paper: “Predicting AGI: What can we say when we know so little?”

Papers

MIRI research associate Benja Fallenstein and UC Berkeley student Alex Mennen have released a new working paper titled “Predicting AGI: What can we say when we know so little?”

From the introduction:

This analysis does not attempt to predict when AGI will actually be achieved, but instead, to predict when this epistemic state with respect to AGI will change, such that we will have a clear idea of how much further progress is needed before we reach AGI. Metaphorically speaking, instead of predicting when AI takes off, we predict when it will start taxiing to the runway.

The paper argues for a Pareto distribution for “time to taxi,” and concludes:

in general, a Pareto distribution suggests that we should put a much greater emphasis on short-term strategies than a less skewed distribution (e.g. a normal distribution) with the same median would.
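
To see what that means in practice, here is a minimal sketch with purely illustrative parameters of our own choosing (a 20-year median, a 1-year minimum “taxi” time, and a 10-year standard deviation for the normal), comparing the near-term probability mass of a Pareto distribution with that of a normal distribution sharing the same median:

```python
import math

# Illustrative comparison, not the paper's actual parameters: a Pareto
# and a normal distribution with the same 20-year median for "years
# until AGI starts taxiing to the runway".

MEDIAN = 20.0   # assumed median (years)
X_MIN = 1.0     # assumed minimum taxi time for the Pareto (years)

# Pareto(alpha, x_min) has median x_min * 2**(1/alpha); solve for alpha.
alpha = math.log(2) / math.log(MEDIAN / X_MIN)

def pareto_cdf(t):
    """P(taxi time <= t) under the Pareto distribution."""
    return 0.0 if t < X_MIN else 1.0 - (X_MIN / t) ** alpha

SIGMA = 10.0    # assumed standard deviation for the normal (years)

def normal_cdf(t):
    """P(taxi time <= t) under a normal with the same median."""
    return 0.5 * (1.0 + math.erf((t - MEDIAN) / (SIGMA * math.sqrt(2))))

for years in (5, 10, 15):
    print(f"P(taxi within {years:2d} years): "
          f"Pareto {pareto_cdf(years):.2f} vs. normal {normal_cdf(years):.2f}")
```

With identical 20-year medians, the Pareto in this sketch puts roughly 31% of its mass on the next five years where the normal puts about 7%; that is the sense in which a heavy-tailed distribution favors short-term strategies.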

New Paper: “Racing to the Precipice”

Papers

During his time as a MIRI research fellow, Carl Shulman contributed to a paper now available as an FHI technical report: “Racing to the Precipice: a Model of Artificial Intelligence Development.”

Abstract:

This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first — by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each other’s capabilities (and about their own), the more the danger increases.
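
As a rough intuition pump, one can simulate races of this general shape. The following is a toy Monte Carlo sketch of our own devising, not the paper’s actual model or its Nash-equilibrium analysis: each team draws a skill level and a safety level at random, its effective capability is skill plus a weighted bonus for skimping on safety, the most capable team finishes first, and the chance of disaster is taken to be 1 minus the winner’s safety level.

```python
import random

# Toy illustration only: NOT the model or equilibrium analysis from
# "Racing to the Precipice". Skill and safety are drawn uniformly from
# [0, 1); risk_weight controls how much skimping on safety helps a
# team finish first.

def average_disaster_chance(n_teams, risk_weight, trials=20_000):
    """Mean of (1 - winner's safety) over many simulated races."""
    total = 0.0
    for _ in range(trials):
        teams = [(random.random(), random.random()) for _ in range(n_teams)]
        # capability = skill + risk_weight * (1 - safety)
        _, winner_safety = max(
            teams, key=lambda ts: ts[0] + risk_weight * (1.0 - ts[1])
        )
        total += 1.0 - winner_safety
    return total / trials

for n in (2, 5, 10):
    for r in (0.2, 1.0, 5.0):
        print(f"{n:2d} teams, risk weight {r:3.1f}: "
              f"disaster chance ~{average_disaster_chance(n, r):.2f}")
```

Even this crude toy echoes the abstract qualitatively: the winning team’s expected safety level falls as more teams enter, and it falls fastest when the risk weight is large, i.e., when risk-taking matters more than skill.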

Update: As of July 2015, this paper has been published in the journal AI & Society.

MIRI’s November 2013 Newsletter

Newsletters

 

Machine Intelligence Research Institute

Dear friends,
We’re experimenting with a new, ultra-brief newsletter style. To let us know what you think of it, simply reply to this email. Thanks!

News Updates

  • You can now support MIRI for free by shopping at smile.amazon.com instead of amazon.com. Update your bookmarks!
  • Louie Helm will be onsite for the Nov. 24th Marin County screening of Doug Wolens’ new documentary The Singularity. Details on this and other screenings are here.
Research Updates

Other Updates

  • Many of our friends have said Louie Helm’s Rockstar Research is among their favorite new sources of news; check it out!
  • Know an exceptionally bright, ambitious person younger than 20? Tell them to apply for the Thiel Fellowship: $100,000 to skip college and develop their skills and ideas, with an incredible network of mentors in the Bay Area.
  • CFAR has upcoming rationality workshops in February (Melbourne), March (Bay Area), and April (NYC). Tell your friends!
Cheers,
Luke Muehlhauser
Executive Director


Support MIRI by Shopping at AmazonSmile

News

If you shop at the new AmazonSmile, Amazon donates 0.5% of the price of your eligible purchases to a charitable organization of your choosing.

MIRI is an eligible charitable organization, so the next time you consider purchasing something through Amazon, support MIRI by shopping at AmazonSmile!

If you get to Amazon.com via an affiliate link, remember to change “amazon.com” to “smile.amazon.com” in the address bar before making your purchase. Or, even easier, use the SmileAlways Chrome extension.
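
That address-bar edit just swaps the URL’s host, which is also all the extension does for you. As a minimal sketch (the helper name and the example product URL are ours, for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

# Minimal sketch of the address-bar rewrite described above: point any
# amazon.com / www.amazon.com URL at smile.amazon.com before checkout.

def to_smile(url: str) -> str:
    parts = urlsplit(url)
    if parts.netloc in ("amazon.com", "www.amazon.com"):
        parts = parts._replace(netloc="smile.amazon.com")
    return urlunsplit(parts)

# Hypothetical product URL with a hypothetical affiliate tag:
print(to_smile("https://www.amazon.com/dp/B00EXAMPLE?tag=affiliate-20"))
# -> https://smile.amazon.com/dp/B00EXAMPLE?tag=affiliate-20
```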