MIRI’s September Newsletter


Greetings from the Executive Director

Dear friends,

With your help, we finished our largest fundraiser ever, raising $400,000 for our research program. My thanks to everyone who contributed!

We continue to publish non-math research to our blog, including an eBook edition of The Hanson-Yudkowsky AI-Foom Debate (see below). Meanwhile, earlier math results are being written up, and new results are being produced at our ongoing decision theory workshop.

This October, Eliezer Yudkowsky and Paul Christiano are giving talks about MIRI’s research at MIT and Harvard. Exact details are still being confirmed, so if you live near Boston, you may want to subscribe to our blog to see the details as soon as they are announced (which will be long before the next newsletter).

This November, Yudkowsky and I are visiting Oxford to “sync up” with our frequent collaborators at the Future of Humanity Institute, and to run our November research workshop there.

And finally, let me share a bit of fun with you. Philosopher Robby Bensinger rewrote Yudkowsky’s Five Theses using the xkcd-inspired Up-Goer Five Text Editor, which allows only the 1,000 most common words in English. Enjoy.

Cheers,

Luke Muehlhauser

Executive Director

The Hanson-Yudkowsky AI-Foom Debate is now available as an eBook!

In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”). James Miller and Carl Shulman also contributed guest posts to the debate.

The original debate took place in a long series of blog posts, which are collected here. This book also includes a transcript of a 2011 in-person debate between Hanson and Yudkowsky on this subject, a summary of the debate written by Kaj Sotala, and a 2013 technical report on AI takeoff dynamics (“intelligence explosion microeconomics”) written by Yudkowsky.

Comments from the authors are included at the end of each chapter, along with a link to the original post. The curious reader is encouraged to use these links to view the original posts and all comments. This book contains minor updates, corrections, and additional citations.

The book is free to download in various eBook formats. See here.

New Analyses And Interviews

As with some other independent research organizations (e.g. GiveWell), much of MIRI’s research is published directly to our blog.

Since our last newsletter, we’ve published the following expert interviews:

Holden Karnofsky on Transparent Research Analyses:

We’re certainly developing new methods of analysis and evaluation [for GiveWell Labs]. Our working framework for shallow investigations replaces “proven, cost-effective, scalable charities” with “important, tractable, non-crowded causes” in terms of what we’re looking for. Much of our work so far has been more qualitative in nature, aiming to clarify and understand the basic landscape of causes rather than assess the extent to which approaches are “proven.”

Stephen Hsu on Cognitive Genomics:

Recently the results of a massive GWAS for genes associated with educational attainment were published in Science. Some of the researchers in this large collaboration are reluctant to openly state that the hits are associated with cognitive ability (as opposed to, say, Conscientiousness, which would also positively impact educational success). But if you read the paper carefully you can see that there is good evidence that the alleles are actually associated with cognitive ability (g or IQ).

Laurent Orseau on Artificial General Intelligence:

The traditional [agent AI] framework is dualist in the sense that it considers that the “mind” of the agent (the process with which the agent chooses its actions) lies outside of the environment. But we all know that if we ever program an intelligent agent on a computer, this program and process will not be outside of the world, they will be a part of it and, even more importantly, computed by it. This led us to define our space-time embedded intelligence framework and equation.
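
To make the contrast concrete, here is a toy sketch of the traditional dualist loop, in which the agent’s decision procedure runs outside the environment it acts on. This is our illustration rather than Orseau’s formalism, and all names in it are hypothetical:

```python
# A toy version of the "dualist" agent framework: the agent's decision
# procedure runs outside the environment loop, whereas a real agent's
# computation would itself be part of (and computed by) the world.
# All names here are illustrative, not Orseau's formalism.

def agent_policy(observation, memory):
    """The agent's "mind": maps observations to actions.
    In the dualist framing, this function is not part of the world."""
    memory.append(observation)
    return "explore" if len(memory) % 2 else "exploit"

def environment_step(action, state):
    """The world: maps the agent's action to a new state and observation."""
    state += 1 if action == "explore" else 0
    return state, f"obs-{state}"

state, observation, memory = 0, "obs-0", []
for _ in range(5):
    action = agent_policy(observation, memory)  # computed "outside" the world
    state, observation = environment_step(action, state)
    print(action, observation)
```

In Orseau’s space-time embedded framing, by contrast, agent_policy would itself be a process running inside the environment, subject to the same laws it reasons about.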

We also published two new analyses:

Transparency in Safety-Critical Systems:

Black box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI’s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.
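
As a toy illustration of that transparency gap (our own sketch, not an example from the post), compare a rule-based filter whose behavior can be audited by reading its code with a small statistical model whose behavior lives in learned weights:

```python
# Illustrative contrast (hypothetical example): a logic-based classifier
# can be audited by reading its rules, while a statistical classifier's
# behavior is encoded in learned weights that resist direct inspection.
import math

def rule_based_spam_filter(message):
    """Transparent: every decision traces to an explicit, readable rule."""
    suspicious = ["free money", "act now", "winner"]
    return any(phrase in message.lower() for phrase in suspicious)

# A minimal statistical stand-in: a logistic model over word presence.
# Even at this tiny scale, *why* it flags a message is visible only
# through the weights; real models have thousands or millions of them.
weights = {"free": 1.3, "money": 0.9, "meeting": -1.1}  # learned, not written
bias = -0.5

def statistical_spam_filter(message):
    score = bias + sum(weights.get(w, 0.0) for w in message.lower().split())
    return 1 / (1 + math.exp(-score)) > 0.5

print(rule_based_spam_filter("You are a WINNER, act now!"))  # True, and we can point to the rule
print(statistical_spam_filter("free money meeting"))         # True, but only the weights say why
```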

How effectively can we plan for future decades? (initial findings)

How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades? To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing)… [and] we decided to publish our initial findings after investigating only a few historical cases.

An Appreciation Of Carl Shulman

This month, Carl Shulman is leaving MIRI to study computer science full-time.

During his time with MIRI, Carl authored or co-authored several articles on x-risk strategy and intelligence amplification, some of which are still forthcoming. He also contributed extensively to dozens of publications on which he is not a co-author, and to the research program at the Center for Effective Altruism. He continues to blog at Reflective Disequilibrium, and remains a Research Associate of the Future of Humanity Institute at Oxford University.

I’ve enjoyed working with Carl over the past couple of years, and he remains a valuable advisor with a remarkably broad and deep grasp of the complex, interlocking issues facing humanity as we navigate the 21st century.

Carl: Thanks so much for your work with MIRI! I wish you the best of luck on your future adventures.

Featured Volunteer: Francisco Garcia

Francisco Garcia helps out by translating MIRI’s articles into Spanish. His research interests are robotics and autonomous agents, and he hopes to intern with MIRI in the future. Francisco is an MS/PhD student at the University of Massachusetts Amherst, where he recently joined the Resource-Bounded Reasoning Lab (RBR) under the direction of Dr. Shlomo Zilberstein.

Francisco learned about MIRI two years ago through a friend. He started volunteering to become more familiar with modern ideas in AI and with the researchers leading the field, believing that MIRI’s work will play a crucial role in the not-so-distant future.

Francisco wants to build a career as a researcher. He is fascinated by planetary exploration and robots that learn, and dreams of working somewhere he can research AI methods and apply them to the real world, like Boston Dynamics or NASA. He believes technology will become even more ubiquitous, with greatly improved AI techniques leading to a new understanding of the world and making humanity as a whole more efficient.