Double Your Donations via Corporate Matching

News

MIRI has now partnered with Double the Donation, a company that makes it easier for donors to take advantage of the donation matching programs offered by their employers.

More than 65% of Fortune 500 companies match employee donations, and 40% offer grants for volunteering, but many of these opportunities go unnoticed. Most employees don’t know these programs exist!

Visit MIRI’s Double the Donation page to find out whether your employer will match your donations to MIRI.


How well will policy-makers handle AGI? (initial findings)

Analysis

MIRI’s mission is “to ensure that the creation of smarter-than-human intelligence has a positive impact.”

One policy-relevant question is: How well should we expect policy-makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation relative to other concerns?

To investigate these questions, we asked Jonah Sinick to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as with our project on how well we can plan for future decades. The post below is a summary of findings from our full email exchange (.pdf) so far.

As with our investigation of how well we can plan for future decades, we decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.

The most significant results from this project so far are:

  1. We came up with a preliminary list of 6 seemingly-important ways in which a historical case could be analogous to the future invention of AGI, and evaluated several historical cases on these criteria.
  2. Climate change risk seems sufficiently disanalogous to AI risk that studying climate change mitigation efforts probably gives limited insight into how well policy-makers will deal with AGI risk: the expected damage from climate change appears to be very small relative to the expected damage from AI risk, especially when one considers the expected damage to policy-makers themselves.
  3. The 2008 financial crisis appears, after a shallow investigation, to be sufficiently analogous to AGI risk that it should give us some small reason to be concerned that policy-makers will not manage the invention of AGI wisely.
  4. The risks to critical infrastructure from geomagnetic storms are far too small to place them in the same reference class as risks from AGI.
  5. The eradication of smallpox is only somewhat analogous to the invention of AGI.
  6. Jonah performed very shallow investigations of how policy-makers have handled risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis, but these cases need more study before even “initial thoughts” can be given.
  7. We identified additional historical cases that could be investigated in the future.

Further details are given below. For sources and more, please see our full email exchange (.pdf).

Read more »

MIRI’s September Newsletter

Newsletters


Greetings from the Executive Director

Dear friends,

With your help, we finished our largest fundraiser ever, raising $400,000 for our research program. My thanks to everyone who contributed!

We continue to publish non-math research to our blog, including an ebook copy of The Hanson-Yudkowsky AI-Foom Debate (see below). In the meantime, earlier math results are currently being written up, and new results are being produced at our ongoing decision theory workshop.

This October, Eliezer Yudkowsky and Paul Christiano are giving talks about MIRI’s research at MIT and Harvard. Exact details are still being confirmed, so if you live near Boston then you may want to subscribe to our blog so that you can see the details as soon as they are announced (which will be long before the next newsletter).

This November, Yudkowsky and I are visiting Oxford to “sync up” with our frequent collaborators at the Future of Humanity Institute at Oxford University, and to run our November research workshop there.

And finally, let me share a bit of fun with you. Philosopher Robby Bensinger rewrote Yudkowsky’s Five Theses using the xkcd-inspired Up-Goer Five Text Editor, which only allows use of the 1000 most common words in English. Enjoy.

Cheers,

Luke Muehlhauser

Executive Director

Read more »

Laurent Orseau on Artificial General Intelligence

Conversations

Laurent Orseau has been an associate professor (maître de conférences) at AgroParisTech, Paris, France since 2007. In 2003, he earned a professional master’s degree in computer science from the National Institute of Applied Sciences in Rennes and a research master’s degree in artificial intelligence from the University of Rennes 1. He obtained his PhD in 2007. His goal is to build a practical theory of artificial general intelligence. He and his co-author Mark Ring were awarded the Solomonoff AGI Theory Prize at AGI’2011 and the Kurzweil Award for Best Idea at AGI’2012.

Luke Muehlhauser: In the past few years you’ve written some interesting papers, often in collaboration with Mark Ring, that use AIXI-like models to analyze some interesting features of different kinds of advanced theoretical agents. For example in Ring & Orseau (2011), you showed that some kinds of advanced agents will maximize their rewards by taking direct control of their input stimuli — kind of like the rats who “wirehead” when scientists give them direct control of the input stimuli to their reward circuitry (Olds & Milner 1954). At the same time, you showed that at least one kind of agent, the “knowledge-based” agent, does not wirehead. Could you try to give us an intuitive sense of why some agents would wirehead, while the knowledge-based agent would not?


Laurent Orseau: You’re starting with a very interesting question!

Read more »

Five Theses, Using Only Simple Words

News

A recent xkcd comic described the Saturn V rocket using only the 1000 most frequently used words (in English). The rocket was called “up-goer five,” and the liquid hydrogen feed line was the “thing that lets in cold wet air to burn.” This inspired a geneticist to make the Up-Goer Five Text Editor, which forces you to use only the 1000 most frequent words. Mental Floss recently collected 18 scientific ideas explained using this restriction.
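The editor’s core constraint — rejecting any word outside a fixed allowed list — can be sketched in a few lines of Python. This is an illustrative sketch, not the actual editor’s code; the tiny `ALLOWED` set below is a stand-in for the real list of the 1000 most common English words:

```python
import re

# Stand-in for the editor's real list of the 1000 most common words.
ALLOWED = {"the", "thing", "that", "lets", "in", "cold", "wet", "air", "to", "burn"}

def disallowed_words(text):
    """Return the words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w not in ALLOWED]

print(disallowed_words("thing that lets in cold wet air to burn"))
# -> [] (every word is allowed)
print(disallowed_words("liquid hydrogen feed line"))
# -> ['liquid', 'hydrogen', 'feed', 'line'] (none are on the tiny list)
```

The real editor works interactively, flagging disallowed words as you type, but the check itself is just this set-membership test.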

What does this have to do with MIRI? Well, young philosopher Robby Bensinger has now re-written MIRI’s Five Theses using the Up-Goer Five text editor, with amusing results:

  • Intelligence explosion: If we make a computer that is good at doing hard things in lots of different situations without using much stuff up, it may be able to help us build better computers. Since computers are faster than humans, pretty soon the computer would probably be doing most of the work of making new and better computers. We would have a hard time controlling or understanding what was happening as the new computers got faster and grew more and more parts. By the time these computers ran out of ways to quickly and easily make better computers, the best computers would have already become much much better than humans at controlling what happens.
  • Orthogonality: Different computers, and different minds as a whole, can want very different things. They can want things that are very good for humans, or very bad, or anything in between. We can be pretty sure that strong computers won’t think like humans, and most possible computers won’t try to change the world in the way a human would.
  • Convergent instrumental goals: Although most possible minds want different things, they need a lot of the same things to get what they want. A computer and a human might want things that in the long run have nothing to do with each other, but have to fight for the same share of stuff first to get those different things.
  • Complexity of value: It would take a huge number of parts, all put together in just the right way, to build a computer that does all the things humans want it to (and none of the things humans don’t want it to).
  • Fragility of value: If we get a few of those parts a little bit wrong, the computer will probably make only bad things happen from then on. We need almost everything we want to happen, or we won’t have any fun.

That is all. You’re welcome.

How effectively can we plan for future decades? (initial findings)

Analysis

MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?

Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?

To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.pdf) so far.

We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.

The most significant results from this project so far are:

  1. Jonah’s initial impressions about The Limits to Growth (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.
  2. Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.
  3. In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.
  4. Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts to win decades later, and several cases of “ethically concerned scientists.”
  5. Upon initial investigation, two historical cases seemed like they might shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.
  6. We listed many other historical cases that may be worth investigating.

The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s The Signal and the Noise, available here.

Further details are given below. For sources and more, please see our full email exchange (.pdf).

Read more »

The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!

News

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.

The debate is now available as an eBook in various popular formats (PDF, EPUB, and MOBI). It includes:

  • the original series of blog posts,
  • a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject,
  • a summary of the debate written by Kaj Sotala, and
  • a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.

Comments from the authors are included at the end of each chapter, along with a link to the original post.

Head over to intelligence.org/ai-foom-debate/ to download a free copy.

Stephen Hsu on Cognitive Genomics

Conversations

Stephen Hsu is Vice-President for Research and Graduate Studies and Professor of Theoretical Physics at Michigan State University. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon. He was also founder of SafeWeb, an information security startup acquired by Symantec. Hsu is a scientific advisor to BGI and a member of its Cognitive Genomics Lab.

Luke Muehlhauser: I’d like to start by familiarizing our readers with some of the basic facts relevant to the genetic architecture of cognitive ability, which I’ve drawn from the first half of a presentation you gave in February 2013:

Read more »