Machine Intelligence Research Institute Progress Report, February 2012


Past progress reports: January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in February 2012:

  • Winter fundraiser completed: Thanks to the generous contributions of our supporters, our latest winter fundraiser was a success, raising much more than our target of $100,000!
  • Research articles: Luke and Anna published the Singularity Summit 2011 Workshop Report and released a draft of their article Intelligence Explosion: Evidence and Import, forthcoming in Springer’s The Singularity Hypothesis. Luke also worked on an article forthcoming in Communications of the ACM.
  • Other articles: Luke published a continuously updated list of Forthcoming and desired articles on AI risk. For Less Wrong, Carl published Feed the Spinoff Heuristic, and Luke published My Algorithm for Beating Procrastination, A brief tutorial on preferences in AI, and Get Curious. Carl also published 4 articles on ethical careers for the 80,000 Hours blog (later posts will discuss optimal philanthropy and existential risks): How hard is it to become the Prime Minister of the United Kingdom?; Entrepreneurship: a game of poker, not roulette; Software engineering: Britain vs. Silicon Valley; and 5 ways to be misled by salary rankings.
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new website, and uploaded all past Singularity Summit videos to YouTube. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several volunteer-prepared translations of Facing the Singularity, and also a podcast for this online mini-book.
  • Grant awarded: The Machine Intelligence Research Institute awarded philosopher Rachael Briggs a $20,000 grant to write a paper on Eliezer Yudkowsky’s timeless decision theory. Two of Rachael’s papers — Distorted Reflection and Decision-Theoretic Paradoxes as Voting Paradoxes — have previously been selected as among the 10 best philosophy papers of the year by The Philosopher’s Annual.
  • Rationality Group: Anna and Eliezer continued to lead the development of a new rationality education organization, temporarily called “Rationality Group.” Per our strategic plan, we will launch this new organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In February our Rationality Group team worked on curriculum development with several potential long-term hires, developed several rationality lessons which they tested weekly on small groups and iterated in response to feedback, spoke to advisors about how to build the organization and raise funds, and much more. The team also produced one example rationality lesson on sunk costs, including a presentation and exercise booklets. Note that Rationality Group is currently hiring curriculum developers, a remote executive assistant, and others, so apply here if you’re interested!
  • Meetings with advisors, supporters, and potential researchers: As usual, various staff members met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. We also met with several potential researchers to gauge their interest and abilities. Carl spent two weeks in Oxford visiting the Future of Humanity Institute and working with the researchers there.
  • Outsourcing: On Louie’s (sound) advice, the Machine Intelligence Research Institute is undergoing a labor transition such that most of our work (measured in hours) will eventually be performed not by our core staff but by (mostly remote) hourly contractors and volunteers, for example remote researchers, LaTeX workers, editors, and assistants. This shift provides numerous benefits, including (1) involving the broader community more directly in our work, (2) providing jobs for aspiring rationalists, and (3) freeing up our core staff to do the things that, due to accumulated rare expertise, only they can do.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in February 2012: Brian Rabkin, Cameron Taylor, Mitchell Owen, Gerard McCusker, Alex Richard, Andrew Homan, Vincent Vu, Gabriel Sztorc, Paul Gentemann, John Maxwell, and David Althaus. Thanks everyone! (And, our apologies if we forgot to name you!)
