Past progress reports: February 2012, January 2012, December 2011.
Fun fact of the day: The Machine Intelligence Research Institute’s research fellows and research associates have more peer-reviewed publications forthcoming in 2012 than they had published in all past years combined.
Here’s what the Machine Intelligence Research Institute did in March 2012:
- Research articles: Luke and Anna released an updated draft of Intelligence Explosion: Evidence and Import, and Luke and Louie released an updated draft of The Singularity and Machine Ethics. Luke submitted an article on Friendly AI, co-authored with Nick Bostrom, to Communications of the ACM. Machine Intelligence Research Institute research associate Joshua Fox released two forthcoming articles co-authored with (past Visiting Fellow) Roman Yampolskiy: Safety Engineering for Artificial General Intelligence and Artificial General Intelligence and the Human Mental Model.
- Other articles: Luke published The AI Problem, with Solutions, How to Fix Science, Muehlhauser-Goertzel Dialogue Part 1, a list of journals that may publish articles on AI risk, and the first three posts in his series AI Risk and Opportunity: A Strategic Analysis. The Machine Intelligence Research Institute paid (past Visiting Fellow) Kaj Sotala to write most of a new instructional booklet for Less Wrong meetup group organizers, which should be published in the next month or two. Eliezer continued work on his new Bayes’ Theorem tutorial and other writing projects. Carl published Using degrees of freedom to change the past for fun and profit and Are pain and pleasure equally energy efficient?
- Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new Summit website, new annual report, and new newsletter design. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several more volunteer-prepared translations of Facing the Singularity. Luke also continued to build the Machine Intelligence Research Institute’s set of remote collaborators, who are hard at work converting our research articles to a new template, hunting down predictions about AI, writing literature summaries on heuristics and biases, and more.
- Rationality Group: Per our strategic plan, we will launch this new “Rationality Group” organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In March, Rationality Group (led by Anna) contracted with Julia Galef and Michael Smith to work toward launching the organization. Eliezer continued to help Rationality Group develop and test its lessons. Rationality Group has begun offering prizes for suggested exercises that develop rationality skills, starting with the skills of “Be Specific” and “Check Consequentialism.” Rationality Group has also announced three Minicamps on Rationality and Awesomeness, for May 11-13, June 22-24, and July 21-28. Apply now.
- Meetings with advisors, supporters, and potential researchers: As usual, various Machine Intelligence Research Institute staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. This included a two-week visit by Nick Beckstead, who worked with us on AI risk reduction strategy.
- And of course much more than is listed here!
Finally, we’d like to recognize our most active volunteers in March 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, and David Althaus. Thanks, everyone! (And our apologies if we forgot to name you!)