2014 Summer Matching Challenge!

News

Nate & Nisan

Thanks to the generosity of several major donors, every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar, up to a total of $200,000!









Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.

Corporate matching and monthly giving pledges will count toward the total! Please email malo@intelligence.org if you intend to leverage corporate matching (check here to see if your employer will match your donation) or would like to pledge 6 months of monthly donations, so that we can properly count your contributions toward the fundraiser.

(If you’re unfamiliar with our mission, see: Why MIRI?)


Accomplishments Since Our Winter 2013 Fundraiser Launched:

Ongoing Activities You Can Help Support

  • We’re writing an overview of the Friendly AI technical agenda (as we see it) so far.
  • We’re currently developing and testing several tutorials on different pieces of the Friendly AI technical agenda (tiling agents, modal agents, etc.).
  • We’re writing several more papers and reports.
  • We’re growing the MIRIx program, largely to grow the pool of people we can plausibly hire as full-time FAI researchers in the next couple of years.
  • We’re planning, or helping to plan, multiple research workshops, including the May 2015 decision theory workshop at Cambridge University.
  • We’re finishing the editing for a book version of Eliezer’s Sequences.
  • We’re helping to fund further SPARC activity, which provides education and skill-building to elite young math talent, and introduces them to ideas like effective altruism and global catastrophic risks.
  • We’re continuing to discuss formal collaboration opportunities with UC Berkeley faculty and development staff.
  • We’re helping Nick Bostrom promote his Superintelligence book in the U.S.
  • We’re investigating opportunities for supporting Friendly AI research via federal funding sources such as the NSF.

Other projects are still being surveyed for likely cost and impact. See also our mid-2014 strategic plan.

We appreciate your support for our work! Donate now, and seize a better-than-usual opportunity to move our work forward. If you have questions about donating, please contact Malo Bourgon at (510) 292-8776 or malo@intelligence.org.

The $200,000 in total matching funds was provided by Jaan Tallinn, Edwin Evans, and Rick Schwall.

An appreciation of Louie Helm

News

Louie Helm has left MIRI to pursue another opportunity. Louie remains a valued MIRI advisor, and we wish him the best in his new venture.

Louie played a pivotal role in MIRI’s recent transformation. Indeed, I naturally think of the past 2.5 years as the “Luke & Louie era” in MIRI’s history. So I’d like to share with MIRI’s supporters some of what Louie contributed to that transformation and growth.

Louie was a visiting fellow with SIAI (before it was called MIRI) in 2010, and then he returned to Asia but continued to serve as SIAI’s unpaid volunteer coordinator. Louie noticed my articles on Less Wrong and asked me in January 2011 to help him finish his Optimal Employment post. He then persuaded me to quit my job in Los Angeles and meet him in Berkeley to improve SIAI’s operations (as an intern).

Upon returning to Berkeley, Louie set up SIAI’s donor database, helped me write SIAI’s first strategic plan, led the effort for that summer’s fundraising drive, and worked with me on a long list of improvements to organizational efficiency. By the end of the year we had both been given executive roles at SIAI.

Later, Louie took the lead in SIAI’s branding transition to MIRI (e.g. domain names, website design, organization name market testing), and in finding and securing for MIRI a new office in downtown Berkeley. He has also networked for MIRI at dozens of events, helped organize and sell tickets for three Singularity Summits, won and managed MIRI’s AdWords grant (with Kevin Fisher), wrote our Recommended Courses page, created several new streams of revenue (affinity card, affiliate links, etc.), secured several professional services and needed insurance contracts for MIRI, and much more.

Louie’s accomplishments at MIRI are too numerous to list here. So, I’d like to conclude by thanking Louie for something perhaps less tangible but still very important: his business experience and advice. I did not have prior management experience when I was offered a leadership role at MIRI, and much of the credit for the last 2.5 years of organizational improvement and growth at MIRI must go to Louie’s business intuitions, and his willingness to help hone my own business intuitions. Fortunately, Louie’s advice will continue to inform MIRI’s trajectory even as he pursues other opportunities.

May 2015 decision theory workshop at Cambridge University

News

MIRI, CSER, and the philosophy department at Cambridge University are co-organizing a decision theory workshop titled Self-Prediction in Decision Theory and AI, to be held in the Faculty of Philosophy at Cambridge University. The tentative dates are May 13–19, 2015.

Huw Price and Arif Ahmed at Cambridge University are the lead organizers.

Speakers confirmed so far include:

MIRI’s July 2014 newsletter

Newsletters

Machine Intelligence Research Institute

Research Updates

News Updates

  • We’ve released our mid-2014 strategic plan update.
  • There are currently six active MIRIx groups around the world. If you’re a mathematician, computer scientist, or formal philosopher, you may want to attend one of these groups, or apply for funding to run your own independently-organized MIRIx workshop!
  • Luke and Eliezer will be giving talks at the Effective Altruism Summit.
  • We are actively hiring for four positions: research fellow, science writer, office manager, and director of development. Salaries and benefits are competitive, and visa assistance is available if needed.

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Luke Muehlhauser
Executive Director

You’re receiving this because you subscribed to the MIRI newsletter.



New report: “Non-omniscience, probabilistic inference, and metamathematics”

News

UC Berkeley student and MIRI research associate Paul Christiano has released a new report: “Non-omniscience, probabilistic inference, and metamathematics.”


We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.

Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.

What is the relation between this new report and Christiano et al.’s earlier “Definability of truth in probabilistic logic” report, discussed by John Baez here? In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.
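To give a flavor of what "assigning probabilities to sentences and updating on observations" means, here is a toy illustration in Python. This is emphatically not the report's algorithm: the report's contribution is making such assignments tractable for bounded reasoners, whereas this sketch does the opposite and simply enumerates every truth assignment consistent with a known logical constraint (the constraint "A implies B" and the atoms A, B, C are invented for this example).

```python
# Toy illustration (not the report's algorithm): assign probabilities to
# logical sentences by averaging over truth assignments that satisfy a
# set of known logical constraints, then update on an observation.
from itertools import product

atoms = ["A", "B", "C"]

# Known logical constraint (an assumption for this example): A implies B.
def consistent(v):
    return (not v["A"]) or v["B"]

# All truth assignments to the atoms that satisfy the constraint,
# weighted uniformly (a crude stand-in for a prior over "worlds").
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=3)]
worlds = [w for w in worlds if consistent(w)]

def prob(sentence, ws):
    """Probability of a sentence = fraction of worlds where it holds."""
    return sum(1 for w in ws if sentence(w)) / len(ws)

print(prob(lambda w: w["B"], worlds))    # P(B) under the prior

# Update on the observation "A is true" by conditioning: keep only the
# worlds where A holds. Since A implies B, P(B | A) comes out to 1.
posterior = [w for w in worlds if w["A"]]
print(prob(lambda w: w["B"], posterior))
```

Full enumeration is exponential in the number of atomic sentences, which is exactly the "logical omniscience" assumption that is unrealistic for bounded reasoners; the report investigates how to relax it while preserving useful inference.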

Roger Schell on long-term computer security research

Conversations

Roger R. Schell is a Professor of Engineering Practice at the University of Southern California Viterbi School of Engineering, and a member of the founding faculty for their Masters of Cyber Security degree program. He is internationally recognized for originating several key security design and evaluation techniques, and he holds patents in cryptography, authentication, and trusted workstations. For more than a decade he has been co-founder and an executive of Aesec Corporation, a start-up company providing verifiably secure platforms. Previously, Prof. Schell was the Corporate Security Architect for Novell, and co-founder and vice president of Gemini Computers, Inc., where he directed development of their highly secure (what the NSA called “Class A1”) commercial product, the Gemini Multiprocessing Secure Operating System (GEMSOS). He was also the founding Deputy Director of NSA’s National Computer Security Center. He has been referred to as the “father” of the Trusted Computer System Evaluation Criteria (the “Orange Book”). Prof. Schell is a retired USAF Colonel. He received a Ph.D. in Computer Science from MIT, an M.S.E.E. from Washington State, and a B.S.E.E. from Montana State. NIST and the NSA have recognized him with the National Computer System Security Award. In 2012 he was inducted into the inaugural class of the National Cyber Security Hall of Fame.

Read more »

New chapter in Cambridge Handbook of Artificial Intelligence

News

The Cambridge Handbook of Artificial Intelligence has been released. It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF here.

The abstract reads:

The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Our mid-2014 strategic plan

MIRI Strategy


Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:

  • Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
  • Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
  • Increase our investment in our Friendly AI (FAI) technical research agenda.

The reasons for continuing along this path remain largely the same, but I have more confidence in it now than I did before. This is because, since April 2013:

  • We made substantial Friendly AI research progress on many different fronts, and we are far from exhausting the progress that could be made with more researchers, demonstrating that the FAI technical agenda is highly tractable.
  • FHI, CSER, and FLI have had substantial public outreach success, in part by leveraging their university affiliations and impressive advisory boards.
  • We’ve heard that as a result of this outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is already developing that technical agenda (see below).

In short, I think we tested and validated MIRI’s new strategic focus, and now it is time to scale. Thus, our top goals for the next 6-12 months are to:

  1. Produce more Friendly AI research.
  2. Recruit more Friendly AI researchers.
  3. Fundraise heavily to support those activities.

Read more »
