August 2012 Newsletter


This newsletter was sent to newsletter subscribers in early August 2012.

Greetings from the Executive Director

The big news this month is that we surpassed our fundraising goal of raising $300,000 in the month of July. My thanks to everyone who donated! Your contributions will help us finish launching CFAR and begin to build a larger and more productive research team working on some of the most important research problems in the world.

Luke Muehlhauser

Read more »

July 2012 Newsletter


This newsletter was sent out to Machine Intelligence Research Institute newsletter subscribers in July 2012.

Greetings from the Executive Director



Friends of the Machine Intelligence Research Institute,

Greetings! Our new monthly newsletter will bring you the latest updates from the Machine Intelligence Research Institute (MIRI). (You can read earlier monthly progress updates here.)

These are exciting times at MIRI. We just launched our new website, and also the website for the Center for Applied Rationality. We have several research papers under development, and after a long hiatus from AI research, researcher Eliezer Yudkowsky is planning a new sequence of articles on “Open Problems in Friendly AI.”

We have also secured $150,000 in matching funds for a new fundraising drive. To help support us in our work toward a positive Singularity, please donate today and have your gift doubled!

Luke Muehlhauser
Machine Intelligence Research Institute Executive Director

Read more »

2012 Summer Singularity Challenge Success!


Thanks to the efforts of our donors, the 2012 Summer Singularity Challenge has been met! All $150,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $300,000 to fund the Machine Intelligence Research Institute’s operations. We reached our goal at around 6pm on July 29th.

On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Your dollars make the difference.

Here’s to a better future for the human species.

2012 Summer Singularity Challenge


Thanks to the generosity of several major donors, every donation to the Machine Intelligence Research Institute made from now until July 31, 2012 will be matched dollar-for-dollar, up to a total of $150,000!†

Donate Now!


Now is your chance to double your impact while helping us raise up to $300,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!

Note: If you prefer to support rationality training, you are welcome to earmark your donations for “CFAR” (Center for Applied Rationality). Donations earmarked for CFAR will only be used for CFAR, and donations not earmarked for CFAR will only be used for Singularity research and outreach.

Since we published our strategic plan in August 2011, we have achieved most of the near-term goals outlined therein.

In the coming year, the Machine Intelligence Research Institute plans to do the following:

  • Hold our annual Singularity Summit, this year in San Francisco!
  • Spin off the Center for Applied Rationality as a separate organization focused on rationality training, so that the Machine Intelligence Research Institute can focus more exclusively on Singularity research and outreach.
  • Publish additional research on AI risk and Friendly AI.
  • Eliezer will write an “Open Problems in Friendly AI” sequence for Less Wrong. (For news on his rationality books, see here.)
  • Finish Facing the Singularity and publish ebook versions of Facing the Singularity and The Sequences, 2006-2009.
  • And much more! For details on what we might do with additional funding, see How to Purchase AI Risk Reduction.

If you’re planning to earmark your donation to CFAR (Center for Applied Rationality), here’s a preview of what CFAR plans to do in the next year:

  • Develop additional lessons teaching the most important and useful parts of rationality. CFAR has already developed and tested over 18 hours of lessons, including classes on how to evaluate evidence using Bayesianism, how to make more accurate predictions, how to be more efficient using economics, how to use thought experiments to better understand your own motivations, and much more.
  • Run immersive rationality retreats to teach from our curriculum and to connect aspiring rationalists with each other. CFAR ran pilot retreats in May and June. Participants in the May retreat called it “transformative” and “astonishing,” and the average response on the survey question, “Are you glad you came? (1-10)” was a 9.4. (We don’t have the June data yet, but people were similarly enthusiastic about that one.)
  • Run SPARC, a camp on the advanced math of rationality for mathematically gifted high school students. CFAR has a stellar first-year class for SPARC 2012; most students admitted to the program placed in the top 50 on the USA Math Olympiad (or performed equivalently in a similar contest).
  • Collect longitudinal data on the effects of rationality training, to improve our curriculum and to generate promising hypotheses to test and publish, in collaboration with other researchers. CFAR has already launched a one-year randomized controlled study tracking reasoning ability and various metrics of life success, using participants in our June minicamp and a control group.
  • Develop apps and games about rationality, with the dual goals of (a) helping aspiring rationalists practice essential skills, and (b) making rationality fun and intriguing to a much wider audience. CFAR has two apps in beta testing: one training players to update their own beliefs the right amount after hearing other people’s beliefs, and another training players to calibrate their level of confidence in their own beliefs. CFAR is working with a developer on several more games training people to avoid cognitive biases.
  • And more!
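The belief-updating app mentioned above is, at bottom, drilling Bayes’ theorem. As a rough illustration only (a minimal sketch of the underlying math, not CFAR’s actual app, and all names here are hypothetical), here is what updating a 50% prior on the testimony of a friend who is right 80% of the time looks like:

```python
def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) for a binary hypothesis, via Bayes' theorem."""
    p_evidence = (prior * p_evidence_if_true
                  + (1 - prior) * p_evidence_if_false)
    return prior * p_evidence_if_true / p_evidence

# You hold a claim at 50% confidence; a friend who is right 80% of the
# time (and wrong 20% of the time) tells you the claim is true.
posterior = update_belief(0.5, 0.8, 0.2)
print(round(posterior, 2))  # 0.8
```

The point such an app trains is that the posterior (here 80%) is forced by the prior and the reliability of the evidence; updating more or less than that amount is a calibration error.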

We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward. Credit card transactions are securely processed using either PayPal or Google Checkout. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

† $150,000 of total matching funds has been provided by Jaan Tallinn, Tomer Kagan, Alexei Andreev, and Brandon Reinhart.

Machine Intelligence Research Institute Progress Report, May 2012


Past progress reports: April 2012, March 2012, February 2012, January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in May 2012:

  • How to Purchase AI Risk Reduction: Luke wrote a series of posts on how to purchase AI risk reduction, with cost estimates for many specific projects. Some projects are currently in place at SI; others can be launched if we are able to raise sufficient funding.
  • Research articles: Luke continued to work with about a dozen collaborators on several developing research articles, including “Responses to Catastrophic AGI Risk,” mentioned here.
  • Other writings: Kaj Sotala, with help from Luke and many others, published How to Run a Successful Less Wrong Meetup Group. Carl published several articles: (1) Utilitarianism, contractualism, and self-sacrifice, (2) Philosophers vs. economists on discounting, (3) Economic growth: more costly disasters, better prevention, and (4) What to eat during impact winter? Eliezer wrote Avoid Motivated Cognition. Luke posted part 2 of his dialogue with Ben Goertzel about AGI.
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new annual report, and new newsletter design. Louie and SI’s new executive assistant Ioven Fables are hard at work on organizational development and transparency (some of which will be apparent when the new website launches).
  • Center for Applied Rationality (CFAR): The CFAR team continued to make progress toward spinning off this rationality-centric organization, in keeping with SI’s strategic plan. We also held the first summer minicamp, which surpassed our expectations and was very positively received. (More details on this will be compiled later.)
  • Meetings with advisors, supporters, and potential researchers: As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in May 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, David Althaus, Tim Oertel, and Casey Pfluger. Thanks everyone! (And, our apologies if we forgot to name you!)

Machine Intelligence Research Institute Progress Report, April 2012


Past progress reports: March 2012, February 2012, January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in April 2012:

  • SPARC: Several MIRI staff members are working in collaboration with SI research associate Paul Christiano and a few others to develop a rationality camp for high school students with exceptional mathematical ability (SPARC). This is related to our efforts to spin off a new rationality-focused organization, and it is also a major step forward in our efforts to locate elite young math talent that may be useful in our research efforts.
  • Research articles: Luke published AI Risk Bibliography 2012. He is currently developing nearly a dozen other papers with a variety of co-authors. New SI research associate Kaj Sotala has two papers forthcoming in the International Journal of Machine Consciousness: “Advantages of Artificial Intelligences, Uploads, and Digital Minds” and “Coalescing Minds: Brain Uploading-Related Group Mind Scenarios.”
  • Other articles: Luke published a dialogue with AGI researcher Pei Wang and several more posts in the AI Risk and Opportunity series. Luke also worked with Kaj Sotala to develop an instructional booklet for Less Wrong meetup group organizers, which is nearly complete.
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael launched the new Singularity Summit website, continued to work on the Machine Intelligence Research Institute’s new primary website, new annual report, and new newsletter design. Luke uploaded several more volunteer-prepared translations of Facing the Singularity. Luke also continued to build the Machine Intelligence Research Institute’s set of remote collaborators, who are hard at work converting the Machine Intelligence Research Institute’s research articles to a new template, hunting down predictions of AI, writing literature summaries on heuristics and biases, and more.
  • Center for Applied Rationality (CFAR): “Rationality Group” now has a final name: the Center for Applied Rationality (CFAR). The CFAR team has been hard at work preparing for the upcoming rationality minicamps, as well as continuing to develop the overall strategy for the emerging organization.
  • Meetings with advisors, supporters, and potential researchers: As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. Quixey co-founder and CEO Liron Shapira was added as an advisor.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in April 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, David Althaus, Tim Oertel, Casey Pfluger, Paul Gentemann, and John Maxwell. Thanks everyone! (And, our apologies if we forgot to name you!)

Machine Intelligence Research Institute Progress Report, March 2012


Past progress reports: February 2012, January 2012, December 2011.

Fun fact of the day: The Machine Intelligence Research Institute’s research fellows and research associates have more peer-reviewed publications forthcoming in 2012 than they had published in all past years combined.

Here’s what the Machine Intelligence Research Institute did in March 2012:

  • Research articles: Luke and Anna released an updated draft of Intelligence Explosion: Evidence and Import, and Luke and Louie released an updated draft of The Singularity and Machine Ethics. Luke submitted an article (co-authored with Nick Bostrom) to Communications of the ACM — an article on Friendly AI. Machine Intelligence Research Institute research associate Joshua Fox released two forthcoming articles co-authored with (past Machine Intelligence Research Institute Visiting Fellow) Roman Yampolskiy: Safety Engineering for Artificial General Intelligence and Artificial General Intelligence and the Human Mental Model.
  • Other articles: Luke published The AI Problem, with Solutions, How to Fix Science, Muehlhauser-Goertzel Dialogue Part 1, a list of journals that may publish articles on AI risk, and the first three posts in his series AI Risk and Opportunity: A Strategic Analysis. The Machine Intelligence Research Institute paid (past Visiting Fellow) Kaj Sotala to write most of a new instructional booklet for Less Wrong meetup group organizers, which should be published in the next month or two. Eliezer continued work on his new Bayes’ Theorem tutorial and other writing projects. Carl published Using degrees of freedom to change the past for fun and profit and Are pain and pleasure equally energy efficient?
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new Summit website, new annual report, and new newsletter design. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several more volunteer-prepared translations of Facing the Singularity. Luke also continued to build the Machine Intelligence Research Institute’s set of remote collaborators, who are hard at work converting the Machine Intelligence Research Institute’s research articles to a new template, hunting down predictions of AI, writing literature summaries on heuristics and biases, and more.
  • Rationality Group: Per our strategic plan, we will launch this new “Rationality Group” organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In March, Rationality Group (led by Anna) contracted with Julia Galef and Michael Smith to work toward launching the organization. Eliezer continued to help Rationality Group develop and test its lessons. Rationality Group has begun offering prizes for suggesting exercises for developing rationality skills, starting with the skills of “Be Specific” and “Check Consequentialism.” Rationality Group has also announced three Minicamps on Rationality and Awesomeness, for May 11-13, June 22-24, and July 21-28. Apply now.
  • Meetings with advisors, supporters, and potential researchers: As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. This included a two-week visit by Nick Beckstead, who worked with us on AI risk reduction strategy.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in March 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, and David Althaus. Thanks everyone! (And, our apologies if we forgot to name you!)

Machine Intelligence Research Institute Progress Report, February 2012


Past progress reports: January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in February 2012:

  • Winter fundraiser completed: Thanks to the generous contributions of our supporters, our latest winter fundraiser was a success, raising much more than our target of $100,000!
  • Research articles: Luke and Anna published the Singularity Summit 2011 Workshop Report and released a draft of their article Intelligence Explosion: Evidence and Import, forthcoming in Springer’s The Singularity Hypothesis. Luke also worked on an article forthcoming in Communications of the ACM.
  • Other articles: Luke published a continuously updated list of Forthcoming and desired articles on AI risk. For Less Wrong, Carl published Feed the Spinoff Heuristic, and Luke published My Algorithm for Beating Procrastination, A brief tutorial on preferences in AI, and Get Curious. Carl also published 4 articles on ethical careers for the 80,000 Hours blog (later posts will discuss optimal philanthropy and existential risks): “How hard is it to become the Prime Minister of the United Kingdom?”, “Entrepreneurship: a game of poker, not roulette”, “Software engineering: Britain vs. Silicon Valley”, and “5 ways to be misled by salary rankings.”
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new website, and uploaded all past Singularity Summit videos to YouTube. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several volunteer-prepared translations of Facing the Singularity, and also a podcast for this online mini-book.
  • Grant awarded: The Machine Intelligence Research Institute awarded philosopher Rachael Briggs a $20,000 grant to write a paper on Eliezer Yudkowsky’s timeless decision theory. Two of Rachael’s papers — Distorted Reflection and Decision-Theoretic Paradoxes as Voting Paradoxes — have previously been selected as among the 10 best philosophy papers of the year by The Philosopher’s Annual.
  • Rationality Group: Anna and Eliezer continued to lead the development of a new rationality education organization, temporarily called “Rationality Group.” Per our strategic plan, we will launch this new organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In February our Rationality Group team worked on curriculum development with several potential long-term hires, developed several rationality lessons which they tested (weekly) on small groups and iterated in response to feedback, spoke to advisors about how to build the organization and raise funds, and much more. The team also produced one example rationality lesson on sunk costs, including a presentation and exercise booklets. Note that Rationality Group is currently hiring curriculum developers, a remote executive assistant, and others, so apply here if you’re interested!
  • Meetings with advisors, supporters, and potential researchers: As usual, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. We also met with several potential researchers to gauge their interest and abilities. Carl spent two weeks in Oxford visiting the Future of Humanity Institute and working with the researchers there.
  • Outsourcing: On Louie’s (sound) advice, the Machine Intelligence Research Institute is undergoing a labor transition such that most of the work we do (in hours) will eventually be performed not by our core staff but by (mostly remote) hourly contractors and volunteers, for example remote researchers, remote LaTeX workers, remote editors, and remote assistants. This shift provides numerous benefits, including (1) involving the broader community more directly in our work, (2) providing jobs for aspiring rationalists, and (3) freeing up our core staff to do the things that, due to accumulated rare expertise, only they can do.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in February 2012: Brian Rabkin, Cameron Taylor, Mitchell Owen, Gerard McCusker, Alex Richard, Andrew Homan, Vincent Vu, Gabriel Sztorc, Paul Gentemann, John Maxwell, and David Althaus. Thanks everyone! (And, our apologies if we forgot to name you!)