2014 Winter Matching Challenge!
Update: We have finished the matching challenge! Thanks, everyone! The original post is below.
Thanks to the generosity of Peter Thiel,[1] every donation made to MIRI between now and January 10th will be matched dollar-for-dollar, up to a total of $100,000!
We have reached our matching total of $100,000, with 83 total donors!
Now is your chance to double your impact while helping us raise up to $200,000 (with matching) to fund our research program.
Corporate matching and monthly giving pledges will count towards the total! Check here to see whether your employer will match your donation. Please email malo@intelligence.org if you intend to make use of corporate matching, or if you’d like to pledge 6 months of monthly donations, so that we can properly account for your contributions. If you’re making use of corporate matching, be sure to donate before the end of the year so that you don’t leave your employer’s matching funds on the table!
If you’re unfamiliar with our mission, see: Why MIRI?
Accomplishments Since Our Summer 2014 Fundraiser Launched:
- 2 new papers and 1 new technical report: “Exploratory Engineering in AI,” “Corrigibility,” and “UDT with known search order.” Also, several new reports we’ve been working on should be released this month, including an overview of our technical agenda so far.
- 4 new analyses: “Groundwork for AGI safety engineering,” “AGI outcomes and civilizational competence,” “The Financial Times story on MIRI,” and “Three misconceptions in Edge.org’s conversation on ‘The Myth of AI’.”
- Released a new guide to MIRI’s research.
- Sponsored 13 active MIRIx groups in 5 different countries.
- Hosted 12 weeks of discussion in our ongoing Superintelligence reading group.
- Hosted a Nick Bostrom talk at UC Berkeley on Superintelligence — a packed house!
- Nate Soares gave a talk on decision theory at Purdue University.
- Participated in Effective Altruism Summit 2014, and posted our talks online.
- 5 new expert interviews, including John Fox on AI safety.
- Set up a program to provide Friendly AI research help.
Your Donations Will Support:
- As mentioned above, we’re finishing up several more papers and technical reports, including an overview of our technical agenda so far.
- We’re preparing the launch of an invite-only discussion forum devoted exclusively to technical FAI research. Beta users (who are also FAI researchers) have already posted more than a dozen technical discussions to the beta website. These will be available for all to see once the site launches publicly.
- We continue to grow the MIRIx program, mostly to enlarge the pool of people we can plausibly hire as full-time FAI researchers in the next couple of years.
- We’re planning, or helping to plan, multiple research workshops, including the May 2015 decision theory workshop at Cambridge University.
- We continue to host visiting researchers. For example, in January we’re hosting Patrick LaVictoire and Matt Elder for multiple weeks.
- We’re finishing up several more strategic analyses, on AI safety and on the challenges of preparing wisely for disruptive technological change in general.
- We’re finishing the editing for a book version of Eliezer’s Sequences.
- We’re helping to fund further SPARC programs, which provide education and skill-building to elite young math talent, and introduce them to ideas like effective altruism and global catastrophic risks.
Other projects are being surveyed for likely cost and impact. See also our mid-2014 strategic plan.
We appreciate your support for our work! Donate now, and seize a better-than-usual opportunity to move our work forward.
If you have questions about donating, please contact me (Luke Muehlhauser) at luke@intelligence.org.[2]
1. Peter Thiel has pledged $150,000 to MIRI unconditionally, and an additional $100,000 conditional on us being able to raise matched funds from other donors. Hence this year our winter matching challenge goal is $100,000. Another reason this year’s winter fundraiser is smaller than last year’s winter challenge is that we’ve done substantially more fundraising before December this year than we did before December last year.
2. In particular, we expect that many of our donors holding views aligned with key ideas of effective altruism may want to know not just that donating to MIRI now will do some good but that donating to MIRI now will plausibly do more good than donating elsewhere would do (on the present margin, given the individual donor’s altruistic priorities and their model of the world). Detailed comparisons are beyond the scope of this announcement, but I have set aside time in my schedule to take phone calls with donors who would like to discuss such issues in detail, and I encourage you to email me to schedule such a call if you’d like to. (Also, I don’t have many natural opportunities to chat with most MIRI donors anyway, and I’d like to be doing more of it, so please don’t hesitate to email me and schedule a call!)