MIRI’s July Newsletter: Fundraiser and New Papers




Greetings from the Executive Director

Dear friends,

Another busy month! Since our last newsletter, we’ve published three new papers and two new “analysis” blog posts, we’ve significantly improved our website (especially the Research page), we’ve relocated to downtown Berkeley, and we’ve launched our summer 2013 matching fundraiser!

MIRI also recently presented at the Effective Altruism Summit, a gathering of 60+ effective altruists in Oakland, CA. As philosopher Peter Singer explained in his TED talk, effective altruism “combines both the heart and the head.” The heart motivates us to be empathic and altruistic toward others, while the head can “make sure that what [we] do is effective and well-directed,” so that altruists can do not just some good but as much good as possible.

As I explain in Friendly AI Research as Effective Altruism, MIRI was founded in 2000 on the premise that creating Friendly AI might be a particularly efficient way to do as much good as possible. Effective altruists focus on a variety of other causes, too, such as poverty reduction. As I say in Four Focus Areas of Effective Altruism, I think it’s important for effective altruists to cooperate and collaborate, despite their differences of opinion about which focus areas are optimal. The world needs more effective altruists, of all kinds.

MIRI engages in direct efforts — e.g. Friendly AI research — to improve the odds that machine superintelligence has a positive rather than a negative impact. But indirect efforts — such as spreading rationality and effective altruism — are also likely to play a role, for they will influence the context in which powerful AIs are built. That’s part of why we created CFAR.

If you think this work is important, I hope you’ll donate now to support our work. MIRI is entirely supported by private funders like you. And if you donate before August 15th, your contribution will be matched by one of the generous backers of our current fundraising drive.

Thank you,

Luke Muehlhauser

Executive Director

Our Summer 2013 Matching Fundraiser

Thanks to the generosity of several major donors, every donation to MIRI made from now until August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!

Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.

Early this year we made a transition from movement-building to research, and we’ve hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future’s most important problem.

Accomplishments in 2013 so far

Future Plans You Can Help Support

  • We will host many more research workshops, including one in September, and one in December (with John Baez attending, among others).
  • Eliezer will continue to publish about open problems in Friendly AI. (Here are #1 and #2.)
  • We will continue to publish strategic analyses, mostly via our blog.
  • We will publish nicely edited ebooks (Kindle, iBooks, and PDF) of more of our materials, to make them more accessible: The Sequences, 2006-2009 and The Hanson-Yudkowsky AI Foom Debate.
  • We will continue to set up the infrastructure (e.g. new offices, researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.

(Other projects are still being surveyed for likely cost and strategic impact.)

We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

New Research Page, Three New Publications

Our new Research page has launched!

Our previous research page was a simple list of articles, but the new page describes the purpose of our research, explains four categories of research to which we contribute, and highlights the papers we think are most important to read.

We’ve also released three new research articles.

Tiling Agents for Self-Modifying AI, and the Löbian Obstacle (discuss it here), by Yudkowsky and Herreshoff, explains one of the key open problems in MIRI’s research agenda:

We model self-modification in AI by introducing “tiling” agents whose decision systems will approve the construction of highly similar agents, creating a repeating pattern (including similarity of the offspring’s goals). Constructing a formalism in the most straightforward way produces a Gödelian difficulty, the “Löbian obstacle.” By technical methods we demonstrate the possibility of avoiding this obstacle, but the underlying puzzles of rational coherence are thus only partially addressed. We extend the formalism to partially unknown deterministic environments, and show a very crude extension to probabilistic environments and expected utility; but the problem of finding a fundamental decision criterion for self-modifying probabilistic agents remains open.
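
A quick gloss from us (not part of the paper’s abstract), for readers unfamiliar with the obstacle’s namesake: Löb’s theorem says that in any sufficiently strong formal system,

\[ \Box(\Box P \rightarrow P) \rightarrow \Box P \]

i.e., if the system proves “whenever P is provable, P holds,” then it already proves P outright. This is what blocks the naive design in which an agent approves a successor merely because the successor’s actions are licensed by proofs in the agent’s own proof system: that design requires exactly the kind of “provability implies truth” principle which the theorem shows is available only for statements the system already proves.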

Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic (discuss it here), by LaVictoire et al., describes progress on program equilibrium made by MIRI research associate Patrick LaVictoire and several others during MIRI’s April 2013 workshop:

Rational agents defect in the one-shot prisoner’s dilemma even though mutual cooperation would yield higher utility for both agents. Moshe Tennenholtz showed that if each program is allowed to pass its playing strategy to all other players, some programs can then cooperate in the one-shot prisoner’s dilemma. Program equilibrium is Tennenholtz’s term for a Nash equilibrium in a context where programs can pass their playing strategies to the other players. One weakness of this approach so far has been that any two programs which are written differently cannot “recognize” each other for mutual cooperation, even if they are functionally identical. In this paper, provability logic is used to enable a more flexible and secure form of mutual cooperation.
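
To make the rigidity problem concrete, here is a small illustrative sketch from us (not code from the paper; the function names are hypothetical). In a Tennenholtz-style setup each program reads the other’s source code, and a bot that cooperates only with an exact syntactic copy of itself will fail to cooperate with a functionally identical bot that happens to be written differently:

    import inspect

    def clique_bot(my_source, opponent_source):
        # Cooperate only with an exact syntactic copy of myself.
        return "C" if opponent_source == my_source else "D"

    def clique_bot_variant(my_source, opponent_source):
        # Functionally identical to clique_bot, but written differently.
        if opponent_source == my_source:
            return "C"
        return "D"

    def play(bot_a, bot_b):
        # Each bot sees its own source code and its opponent's.
        src_a, src_b = inspect.getsource(bot_a), inspect.getsource(bot_b)
        return bot_a(src_a, src_b), bot_b(src_b, src_a)

    print(play(clique_bot, clique_bot))          # ('C', 'C'): exact copies cooperate
    print(play(clique_bot, clique_bot_variant))  # ('D', 'D'): the rigidity failure

The paper’s provability-logic agents instead cooperate when they can prove that the opponent will cooperate with them, which lets differently written but provably cooperative programs reach mutual cooperation.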

Responses to Catastrophic AGI Risk: A Survey (discuss it here), by Sotala and Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

Two New Analyses

MIRI publishes some of its most substantive research to its blog, under the Analysis category. For example, When Will AI Be Created? is the product of 20+ hours of work, and has 14 footnotes and 40+ scholarly references (all of them linked to PDFs).

Last month, we published two new analyses.

Friendly AI Research as Effective Altruism presents a bare-bones version of an argument that Friendly AI research is a particularly efficient way to purchase expected value, so that the argument can be elaborated and critiqued by MIRI and others.

What is Intelligence? argues that imprecise working definitions can be useful, and explains the particular imprecise working definition for intelligence that we tend to use at MIRI: efficient cross-domain optimization. A future post will discuss some potentially useful working definitions for “artificial general intelligence.”

Grant Writer Needed

MIRI would like to hire someone to write grant applications, both for our research efforts and for STEM education. If you have experience with either, please apply here.

The pay will depend on skill and experience, and is negotiable.

Featured Volunteer

Oliver Habryka helps out by proofreading MIRI’s papers, and may be able to contribute to our research at some point, perhaps on the subject of “lessons for ethics from machine ethics.” Independent of his direct contributions to MIRI’s work, Oliver has lectured on topics related to MIRI’s work at his high school, and has taught a class on rationality, where he inspired participation by using a “leveling up” reward system. Oliver is currently studying the foundations of mathematics and hopes to direct his career so that his contributions to our mission grow over time.