“I think the Machine Intelligence Research Institute has some very smart people working on the most important mission on Earth, but… what exactly are they doing these days? I’m in the dark.”
There’s a good reason I hear this comment so often. We haven’t done a good job of communicating our progress to our supporters.
Since being appointed Executive Director of the Machine Intelligence Research Institute (MIRI) in November, I’ve been working to change that. I gave two Q&As about MIRI and explained our research program with a list of open problems in AI risk research. Now I’d like to introduce our latest transparency effort: monthly progress reports.
We begin with last month. What did we do in December 2011?
(From this point on I’ll refer to myself as “Luke,” for clarity.)
- Winter fundraiser. We launched our winter fundraiser and have been contacting our supporters. The fundraiser has raised over $40k so far, though we still have $60k to go! (So, please donate!)
- Singularity Summit 2012. Our chief operating officer, Amy Willey, worked all month on preparations for Singularity Summit 2012, with much help from Luke. As a result, we have now chosen a team of professionals with whom we will take the Summit to “the next level,” and we’ve already confirmed several major speakers: Ray Kurzweil, Steven Pinker, Tyler Cowen, Temple Grandin, Peter Norvig, Robin Hanson, Peter Thiel, Melanie Mitchell, Vernor Vinge, and Carl Zimmer. We have also opened negotiations with many other speakers. This is a big improvement over our preparations for Singularity Summit 2011, which effectively began in May 2011, leaving us little time to book certain speakers and develop certain kinds of media coverage. This much progress at such an early stage, along with a larger budget and greater professional assistance, should make Singularity Summit 2012 a major leap forward for the event. Amy has also been developing arrangements for a possible European Singularity Summit in 2012.
- Rationality Org. As explained in our strategic plan, we recognize the branding confusion produced by focusing on both AI risk research and rationality education, so we are preparing to spin off a separate rationality education organization so that the Machine Intelligence Research Institute can focus on AI risk research. Internally, we are calling the rationality education organization “Rationality Org.” Anna and Eliezer, with some help from Luke, did a lot of work developing plans for the future Rationality Org. We spent even more time developing the core rationality lessons, testing versions of them on different groups of people, and iterating the content. We expect the Rationality Org to launch late this year or early next year, and we expect it to not only raise the sanity waterline but also bring significant funding toward existential risk reduction.
- New website design. Our media director, Michael Anissimov, with much help from Luke, developed the strategy and design of MIRI’s new website and worked with a designer to iterate the design several times. The designer is now programming the site.
- New donor database. In December, our Director of Development, Louie Helm, finished setting up our new donor database, including the custom code for automatically importing data from PayPal, Google Checkout, etc. This database gives us a much better view of who our supporters are and allows us to thank them more effectively for their support. Anissimov wrote personal thank-you notes to hundreds of past donors.
- Research articles. Luke and Anna made continued progress on their overview article “Intelligence Explosion: Evidence and Import.” Carl continued work with FHI’s Stuart Armstrong on their article “Arms Races and Intelligence Explosions,” and continued work with Nick Bostrom on their article “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects.”
- Other articles. Luke wrote a few articles for Less Wrong: Hack Away at the Edges, Why study the cognitive science of concepts, and So You Want to Save the World. Eliezer made lots of progress on his new Bayes’ Theorem tutorial, including (outsourced) illustrations and much audience testing.
- Eliezer’s book. Eliezer finished the book proposal for his first book (already mostly written), The Science of Changing Your Mind. We have begun looking for good agents to represent the book.
- Facing the Singularity. Luke continued to develop his online book Facing the Singularity, a layman’s introduction to the Singularity, its consequences, and what we can do about it. The chapters he wrote in December 2011 were: The Crazy Robot’s Rebellion, Not Built to Think About AI, Playing Taboo with “Intelligence”, Superstition in Retreat, Plenty of Room Above Us, and Don’t Flinch Away.
- Additional transparency efforts. Anissimov and Luke began work on the design and content for an annual report. They also shot and produced Luke’s video Q&A #1.
- Optimal philanthropy. The optimal philanthropy movement (e.g. Giving What We Can) is growing rapidly. Carl and Anna collaborated and did research with other members of the movement. Partly due to their work, the optimal philanthropy movement is well aware of the case for existential risk reduction as optimal philanthropy, which should bring significant funding for existential risk reduction work in the coming years.
- Meetings with advisors, supporters, and potential researchers. During December 2011, various MIRI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, along with other topics. We also met with several potential researchers to gauge their interest and abilities.
- Google AdWords upgrade. For months, Louie and others have been tweaking the ads we run with the $10k/month of Google AdWords advertising donated to us by Google. By December 2011, our ads were so successful that we qualified for an upgrade, and we are now receiving $40k/month of free advertising via Google AdWords.
- Better financial management. In December 2011 we began to train our new treasurer, long-time donor and friend of MIRI, Jesse Liptrap. This means that someone outside the organization is keeping a close watch on our finances. We also began work on improving our bookkeeping and accounting practices, which will allow better budgeting, forecasting, and resource management.
- Unpublished research. As at most research institutes, most of our research does not appear in a published paper for 1–3 years, if ever, even though it informs our views on many things. Unpublished research in December 2011 included research on population ethics, brain-computer interfaces, optimal philanthropy, technological forecasting, nuclear extinction risks, AI architectures, anthropics, decision theories, rationality training, Oracle AI, science productivity, and more. MIRI’s research associates contributed to some of this research, including the Less Wrong discussion post A model of UDT with a halting oracle.
- New board member. Quixey co-founder and CEO Tomer Kagan was added to MIRI’s board of directors. Tomer is a good friend and brings a wealth of business and management experience to our team.
- Much more. Of course, we worked on dozens of other, smaller projects. These include: updates to IntelligenceExplosion.com; development of contacts for Rationality Org; the organization of regular MIRI staff dinners to promote coordination and friendship; speaking with donors at Peter Thiel’s “Fast Forward” party; development of a database of helpful volunteers and assistants; implementing Olark on our donate page; meetings with reporters from various media organizations; uploading old videos to Vimeo and YouTube; fixing errors and outdated content on our website; finishing our 2010 Form 990 and sending it to Brandon Reinhart to add to his financial examination of the Machine Intelligence Research Institute; preparing a new template for MIRI research publications (courtesy of research associate Daniel Dewey); and much more.