Machine Intelligence Research Institute Progress Report, May 2012


Past progress reports: April 2012, March 2012, February 2012, January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in May 2012:

  • How to Purchase AI Risk Reduction: Luke wrote a series of posts on how to purchase AI risk reduction, with cost estimates for many specific projects. Some projects are currently in place at SI; others can be launched if we are able to raise sufficient funding.
  • Research articles: Luke continued to work with about a dozen collaborators on several research articles in development, including “Responses to Catastrophic AGI Risk,” mentioned here.
  • Other writings: Kaj Sotala, with help from Luke and many others, published How to Run a Successful Less Wrong Meetup Group. Carl published several articles: (1) Utilitarianism, contractualism, and self-sacrifice, (2) Philosophers vs. economists on discounting, (3) Economic growth: more costly disasters, better prevention, and (4) What to eat during impact winter? Eliezer wrote Avoid Motivated Cognition. Luke posted part 2 of his dialogue with Ben Goertzel about AGI.
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new annual report, and new newsletter design. Louie and SI’s new executive assistant Ioven Fables are hard at work on organizational development and transparency (some of which will be apparent when the new website launches).
  • Center for Applied Rationality (CFAR): The CFAR team continued to make progress toward spinning off this rationality-centric organization, in keeping with SI’s strategic plan. We also held the first summer minicamp, which surpassed our expectations and was very positively received. (More details on this will be compiled later.)
  • Meetings with advisors, supporters, and potential researchers: As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in May 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, David Althaus, Tim Oertel, and Casey Pfluger. Thanks everyone! (And, our apologies if we forgot to name you!)