2010 Singularity Research Challenge Fulfilled!

News

Thanks to our donors, yesterday we met our fundraising goal of $100,000 for the 2010 Singularity Research Challenge. MIRI would like to thank the grant’s matching donors and everyone who contributed. Every donation, however small, funds research and advocacy targeted towards maximizing the probability of a positive Singularity.

If you have any questions or comments about MIRI’s activity or would like to discuss targeted donations for future projects, please feel free to contact us anytime at admin at intelligence dot org. We also encourage you to subscribe to this blog, if you haven’t already, to stay up-to-date on MIRI’s activity.

Again, thank you, and here’s to a productive and successful 2010!

Announcing the 2010 Singularity Research Challenge

News

Offering unusually good philanthropic returns — meaning greater odds of a positive Singularity and lesser odds of human extinction — the Machine Intelligence Research Institute has launched a new challenge campaign. The sponsors, Edwin Evans, Rolf Nelson, Henrik Jonsson, Jason Joachim, and Robert Lecnik, have generously put up $100,000 of matching funds, so that every donation you make until February 28th will be matched dollar for dollar. If the campaign is successful, it will raise a full $200,000 to fund MIRI’s 2010 activities.

For almost a decade, the Machine Intelligence Research Institute has been asking questions about the future of human civilization: How can we benefit from increasingly powerful technology without succumbing to the risks, up to and including human extinction? What is the best way to handle artificial general intelligence (AGI): programs as smart as humans, or smarter?

Among MIRI’s core aims is to continue studying “Friendly AI”: AI that acts benevolently because it holds goals aligned with human values. This involves drawing on and contributing to fields like decision theory, computer science, cognitive and moral psychology, and technology forecasting.

Creating AI, especially the Friendly kind, is a difficult undertaking. We’re in it for as long as it takes, but we’ve been doing more than laying the groundwork for Friendly AI. We’ve been raising the profile of AI risk and Singularity issues in academia and elsewhere, forming communities around enhancing human rationality, and researching other avenues that promise to reduce the most severe risks the most effectively.

If you make a donation to the Machine Intelligence Research Institute, you can choose which grant proposal your donation should help to fill. Any time a grant proposal is fully funded, it goes into our “active projects” file: it becomes a project that we have money enough to fund, and that we are publicly committed to funding. (Some of the projects will go forward even without earmarked donations, with money from the general fund — but many won’t, and since our work is limited by how much money we have available to support skilled staff and Visiting Fellows, more money allows more total projects to go forward.)

Donate now, and seize a better than usual chance to move our work forward.

Introducing Myself

News

Many friends of MIRI will know that I have been a supporter of its mission since its founding, and have rendered my informal assistance, including a major role in arranging matching funds for the Institute’s 2007 Challenge Grant.

I am pleased to take a more direct role in fostering its success as President of MIRI. I have left my previous role as Founder and Chief Strategist at SirGroovy.com, a growing online music licensing firm, and have been assuming responsibility for the management of the Machine Intelligence Research Institute over the last few weeks.

Prospective volunteers, donors, and aspiring researchers should now make contact with me rather than with Tyler Emerson.

To those whom I am greeting for the first time, let me introduce myself. On a professional level, I hold a Master of Business Administration from Drexel University, and I come from a role that combined management, research, analysis, and strategy at a fast-growing music licensing firm from its founding in New York. In that capacity, in a previous role at Aon, and in my academic studies, I have maintained an enduring interest in finance and economics, particularly the economics of technology and IP. Scientifically, I earned my undergraduate degree in biochemistry and have worked in several labs, including service at the National Institute of Standards and Technology, and I have extensively studied the history of science and technology as well as the potential for biological cognitive enhancement. I have served with the Peace Corps in Kazakhstan, and I split my time between Manhattan, where I live with my wife Aruna, and Silicon Valley.

My interest in the safety of technological development, driven by its potentially grave ethical consequences, is over a decade old. I have been particularly focused on the potential of advanced nanotechnology and artificial intelligence, participating in forums such as Transvision and Foresight conferences, the SL4 mailing list, and Overcoming Bias, and in organizations such as MIRI and the Center for Responsible Nanotechnology (CRN). I plan to make my relevant work available at a single site, but in the meantime I will point to a small selection. For instance, I coauthored an analysis of the risks of advanced molecular manufacturing and mitigating strategies with Robert Freitas, and contributed “Corporate Cornucopia” as a member of CRN’s Global Task Force. Those who would like to see more can view Michael Anissimov’s archive of some of my writings at his website. Two of my most recent talks are also available on the web: an Institute for Ethics and Emerging Technologies presentation on the political implications of different conceptions of willpower, and a Convergence08 talk on decision theory for humans.

As President, I plan to build on MIRI’s successes, such as the Singularity Summit, while also working to increase its internal and extramural research capabilities and output. In doing so, I shall pay particular attention to publishing research that improves the quality of our thinking about the potential and safety of advanced artificial intelligence, such as MIRI Research Fellow Eliezer Yudkowsky’s two contributions to the Oxford edited volume Global Catastrophic Risks, and to communicating internal research progress more clearly to our supporters.

Some of this work will involve indirect, meta-level contributions. For instance, recent work at MIRI by Rolf Nelson, Anna Salamon, Steven Rayhawk, Thomas McCabe and others has led to the development of a software tool for combining judgments about particular future scenarios and technological developments to reveal inconsistencies and enable the adoption of a more coherent probability assignment for planning. The content and algorithms of this tool have been completed, and work is now underway to finalize the software’s interface and make it publicly available to improve the quality of reasoning about interrelated technology scenarios, including those involving artificial intelligence.
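The tool itself is not described in detail here, but its core idea, checking a set of probability judgments about related scenarios for mutual consistency, can be sketched in a few lines of Python. Everything below (the function name, the example judgments, the notion of encoding logical implications as pairs) is a hypothetical illustration of the general technique, not MIRI's actual algorithm or data:

```python
# Hypothetical sketch: flag incoherence in probability judgments over
# logically related scenarios. If scenario A implies scenario B, any
# coherent probability assignment must satisfy P(A) <= P(B).

def find_inconsistencies(probs, implications):
    """probs: {scenario: probability}; implications: (a, b) pairs
    meaning 'a implies b'. Returns a list of coherence violations."""
    problems = []
    # Basic axiom check: every probability must lie in [0, 1].
    for name, p in probs.items():
        if not 0.0 <= p <= 1.0:
            problems.append(f"{name}: probability {p} outside [0, 1]")
    # Monotonicity check: an implied event cannot be less probable.
    for a, b in implications:
        if probs[a] > probs[b]:
            problems.append(
                f"P({a}) = {probs[a]} exceeds P({b}) = {probs[b]}, "
                f"but '{a}' implies '{b}'")
    return problems

# Example: these two judgments are jointly incoherent, because
# "AGI by 2030" implies "AGI by 2040".
judgments = {
    "AGI by 2030": 0.30,
    "AGI by 2040": 0.25,
}
links = [("AGI by 2030", "AGI by 2040")]
print(find_inconsistencies(judgments, links))
```

A real tool along these lines would also handle conjunctions, disjunctions, and conditional probabilities, and would suggest minimal adjustments rather than merely flagging violations.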

Another effort involves conducting expert elicitation research to determine the state of academic and non-academic expert opinion regarding timelines and risks for advanced artificial intelligence.

Future research along these lines may explore particular biases and psychological factors affecting attitudes and reasoning related to artificial intelligence.

Other research will be directly focused on object-level problems. I plan to work vigorously to identify more promising extramural scholars whose work can be fruitfully promoted by MIRI grants, work such as MIRI-Canada Academic Prize Recipient Shane Legg’s “Machine Super-Intelligence.” At the same time I will be working to recruit and make best use of talented Research Fellows for MIRI’s internal efforts.

I look forward to describing further directions over the coming months, and invite the advice and opinions of the friends of MIRI at admin@intelligence.org.

Yours,

Michael Vassar

The Power of Intelligence

Analysis

In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.

Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.

Then came the Day of the Squishy Things.