We’ve put together a new website focused on the intelligence explosion concept: IntelligenceExplosion.com. The site is a “landing page” that provides an easy introduction to the topic for laymen and researchers alike.
† $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edsträm, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.
The year 2011 has been huge for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Machine Intelligence Research Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.
The past year and a half has been our biggest ever. Since the beginning of 2010, we have:
- Held our annual Singularity Summit in San Francisco. Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others.
- Held the first Singularity Summit Australia and Singularity Summit Salt Lake City.
- Held a wildly successful Rationality Minicamp.
- Published seven research papers, including Yudkowsky's much-awaited 'Timeless Decision Theory'.
- Helped philosopher David Chalmers write his seminal paper 'The Singularity: A Philosophical Analysis', which has sparked broad discussion in academia, including an entire issue of the Journal of Consciousness Studies and a book from Springer devoted to responses to Chalmers' paper.
- Launched the Research Associates program.
- Brought MIT cosmologist Max Tegmark onto our advisory board, published our Singularity FAQ, and much more.
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, this year in New York City.
- Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Machine Intelligence Research Institute planning and policy documents.
- Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As PayPal co-founder and Machine Intelligence Research Institute donor Peter Thiel said:
“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Machine Intelligence Research Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”
Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.
Thanks to the effort of our donors, the Tallinn-Evans Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by Jaan Tallinn and Edwin Evans, raising a total of $250,000 to fund the Machine Intelligence Research Institute’s operations in 2011. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Keep watching this blog throughout the year for updates on our activity, and sign up for our mailing list if you haven’t yet.
Here’s to a better future for the human species.
We are preparing a donor page to provide a place for everyone who donated to share some information about themselves if they wish, including their name, location, and a quote about why they donate to the Machine Intelligence Research Institute. If you would like to be included in our public list, please email us.
Again, thank you. The Machine Intelligence Research Institute depends entirely on contributions from individual donors to exist. Money is indeed the unit of caring, and donating is one of the easiest ways anyone can contribute directly to the Machine Intelligence Research Institute's success. Another important way you can help is by plugging us into your networks, so please email us if you want to help.
If you're interested in connecting with other Machine Intelligence Research Institute supporters, we encourage you to join our group on Facebook. There are also local Less Wrong meetups in cities like San Francisco, Los Angeles, New York, and London.
Thanks to the generosity of two major donors (Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly), every contribution to the Machine Intelligence Research Institute up until January 20, 2011 will be matched dollar for dollar, up to a total of $125,000.
Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Machine Intelligence Research Institute exists to do so through its research, the Singularity Summit, and public education.
We support both direct engagement with these issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers ranging from undergraduates to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, which grew from three researchers to four last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. MIRI researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.
We are pleased to receive donation-matching support this year from Edwin Evans of the United States, a long-time MIRI donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life to an entrepreneurial group in Finland. Here's what Jaan has to say about us:
“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of MIRI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as MIRI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
— Jaan Tallinn, MIRI donor
Make a lasting impact on the long-term future of humanity today — make a donation to the Machine Intelligence Research Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at email@example.com or read our new organizational overview.
Thanks to our donors, yesterday we met our fundraising goal of $100,000 for the 2010 Singularity Research Challenge. MIRI would like to thank the challenge's matching donors and everyone who contributed. Every donation, however small, funds research and advocacy targeted toward maximizing the probability of a positive Singularity.
If you have any questions or comments about MIRI’s activity or would like to discuss targeted donations for future projects, please feel free to contact us anytime at admin at intelligence dot org. We also encourage you to subscribe to this blog, if you haven’t already, to stay up-to-date on MIRI’s activity.
Again, thank you, and here’s to a productive and successful 2010!
Offering unusually good philanthropic returns, meaning greater odds of a positive Singularity and lesser odds of human extinction, the Machine Intelligence Research Institute has launched a new challenge campaign. The sponsors, Edwin Evans, Rolf Nelson, Henrik Jonsson, Jason Joachim, and Robert Lecnik, have generously put up $100,000 of matching funds, so that every donation you make until February 28th will be matched dollar for dollar. If the campaign is successful, it will raise a full $200,000 to fund MIRI's 2010 activities.
For almost a decade, the Machine Intelligence Research Institute has been asking questions about the future of human civilization: How can we benefit from increasingly powerful technology without succumbing to the risks, up to and including human extinction? What is the best way to handle artificial general intelligence (AGI): programs as smart as humans, or smarter?
Among MIRI’s core aims is to continue studying “Friendly AI”: AI that acts benevolently because it holds goals aligned with human values. This involves drawing on and contributing to fields like decision theory, computer science, cognitive and moral psychology, and technology forecasting.
Creating AI, especially the Friendly kind, is a difficult undertaking. We’re in it for as long as it takes, but we’ve been doing more than laying the groundwork for Friendly AI. We’ve been raising the profile of AI risk and Singularity issues in academia and elsewhere, forming communities around enhancing human rationality, and researching other avenues that promise to reduce the most severe risks the most effectively.
If you make a donation to the Machine Intelligence Research Institute, you can choose which grant proposal your donation should help fund. Any time a grant proposal is fully funded, it goes into our "active projects" file: it becomes a project that we have enough money to fund, and that we are publicly committed to funding. (Some projects will go forward even without earmarked donations, using money from the general fund, but many won't; since our work is limited by how much money we have available to support skilled staff and Visiting Fellows, more money allows more total projects to go forward.)
Donate now, and seize a better than usual chance to move our work forward.
Many friends of MIRI will know that I have been a supporter of its mission since its founding, and have rendered my informal assistance, including a major role in arranging matching funds for the Institute’s 2007 Challenge Grant.
I am pleased to take a more direct role in fostering its success as President of MIRI. I have left my previous role as Founder and Chief Strategist at SirGroovy.com, a growing online music licensing firm, and have been assuming responsibility for the management of the Machine Intelligence Research Institute over the last few weeks.
Prospective volunteers, donors, and aspiring researchers should now make contact with me rather than with Tyler Emerson.
To those whom I am greeting for the first time, let me introduce myself. On a professional level, I hold a Master of Business Administration from Drexel University, and I come from a role that combined management, research, analysis, and strategy at a fast-growing music licensing firm from its founding in New York. In that capacity, in a previous role at Aon, and in my academic studies, I have had an enduring interest in finance and economics, particularly the economics of technology and IP. Scientifically, I earned my undergraduate degree in biochemistry and have worked in several labs, including service at the National Institute of Standards and Technology, and I have extensively studied the history of science and technology, as well as the potential for biological cognitive enhancement. I have served with the Peace Corps in Kazakhstan, and I split my time between Manhattan, where I live with my wife Aruna, and Silicon Valley.
My interest in the safety of technological development, driven by its potentially grave ethical consequences, is over a decade old. I have been particularly focused on the potential of advanced nanotechnology and artificial intelligence, participating in forums such as the Transvision and Foresight conferences, the SL4 mailing list, and Overcoming Bias, and in organizations such as MIRI and the Center for Responsible Nanotechnology (CRN). I plan to make my relevant work available at a single site, but in the meantime I will point to a small selection. For instance, I coauthored an analysis of the risks of advanced molecular manufacturing and mitigating strategies with Robert Freitas, and contributed "Corporate Cornucopia" as a member of CRN's Global Task Force. Those who would like to see more can view Michael Anissimov's archive of some of my writings at his website. Two of my most recent talks, an Institute for Ethics and Emerging Technologies presentation on the political implications of different conceptions of willpower and a Convergence08 talk on decision theory for humans, are also available on the web.
As President, I plan to build on MIRI's successes, such as the Singularity Summit, while also working to increase its internal and extramural research capabilities and output. In doing so, I shall pay particular attention to publishing research that improves the quality of our thinking about the potential and safety of advanced artificial intelligence, such as MIRI Research Fellow Eliezer Yudkowsky's two contributions to the Oxford edited volume Global Catastrophic Risks, and to better communicating internal research progress to our supporters.
Some of this work will involve indirect, meta-level contributions. For instance, recent work at MIRI by Rolf Nelson, Anna Salamon, Steven Rayhawk, Thomas McCabe and others has led to the development of a software tool for combining judgments about particular future scenarios and technological developments to reveal inconsistencies and enable the adoption of a more coherent probability assignment for planning. The content and algorithms of this tool have been completed, and work is now underway to finalize the software’s interface and make it publicly available to improve the quality of reasoning about interrelated technology scenarios, including those involving artificial intelligence.
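The post does not describe the tool's internal algorithms, but the general idea of flagging incoherent probability judgments can be sketched in a few lines of Python. This is purely a hypothetical illustration of the concept, not the tool itself; all names and constraints shown are my own:

```python
# Illustrative sketch only: the actual MIRI tool's algorithms are not
# described above, so this is a hypothetical example of the general idea,
# checking a few basic coherence constraints on probability judgments.

def coherence_violations(p_a, p_b, p_a_and_b, tol=1e-9):
    """Return a list of probability axioms violated by three judgments:
    P(A), P(B), and P(A and B)."""
    violations = []
    # Every probability must lie in [0, 1].
    for name, p in (("P(A)", p_a), ("P(B)", p_b), ("P(A&B)", p_a_and_b)):
        if not (-tol <= p <= 1.0 + tol):
            violations.append(f"{name} lies outside [0, 1]")
    # A conjunction can never be more probable than either conjunct.
    if p_a_and_b > min(p_a, p_b) + tol:
        violations.append("P(A&B) exceeds min(P(A), P(B))")
    # By inclusion-exclusion, P(A or B) = P(A) + P(B) - P(A&B) must be <= 1.
    if p_a + p_b - p_a_and_b > 1.0 + tol:
        violations.append("implied P(A or B) exceeds 1")
    return violations

# Consistent judgments produce no violations...
print(coherence_violations(0.3, 0.5, 0.2))   # []
# ...while an overconfident conjunction is flagged.
print(coherence_violations(0.7, 0.6, 0.65))  # ['P(A&B) exceeds min(P(A), P(B))']
```

A real system would extend such checks to many interrelated scenarios at once and suggest a nearby coherent probability assignment, rather than merely reporting the violations.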
Another effort involves conducting expert elicitation research to determine the state of academic and non-academic expert opinion regarding timelines and risks for advanced artificial intelligence.
Future research along these lines may explore particular biases and psychological factors affecting attitudes and reasoning related to artificial intelligence.
Other research will be directly focused on object-level problems. I plan to work vigorously to identify more promising extramural scholars whose work can be fruitfully promoted by MIRI grants, work such as MIRI-Canada Academic Prize Recipient Shane Legg’s “Machine Super-Intelligence.” At the same time I will be working to recruit and make best use of talented Research Fellows for MIRI’s internal efforts.
I look forward to describing further directions over the coming months, and invite the advice and opinions of the friends of MIRI at firstname.lastname@example.org.
I’ve noticed that Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.