When AI Accelerates AI

Filed under Analysis.

Last week, Nate Soares outlined his case for prioritizing long-term AI safety work: 1. Humans have a fairly general ability to make scientific and technological progress. The evolved cognitive faculties that make us good at organic chemistry overlap heavily with the evolved cognitive faculties that make us good at economics, which overlap heavily with the… Read more »

August 2015 Newsletter

Filed under Newsletters.

Research updates: We’ve rewritten the first and last sections of the main paper summarizing our research program. This version of the paper will also be published with minor changes in the Springer anthology The Technological Singularity. New analyses: Four Background Claims; MIRI’s Approach. New at AI Impacts: Conversation with Steve Potter; Costs of Human-Level Hardware… Read more »

A new MIRI FAQ, and other announcements

Filed under News.

MIRI is at Effective Altruism Global! A number of the talks can be watched online at the EA Global Livestream. We have a new MIRI Frequently Asked Questions page, which we’ll be expanding as we continue getting new questions over the next few weeks. Questions covered so far include “Why is safety important for smarter-than-human… Read more »

July 2015 Newsletter

Filed under Newsletters.

Hello, all! I’m Rob Bensinger, MIRI’s Outreach Coordinator. I’ll be keeping you updated on MIRI’s activities and on relevant news items. If you have feedback or questions, you can get in touch with me by email. Research updates: a new report, “The Asilomar Conference: A Case Study in Risk Mitigation.” New at AI Impacts: Update… Read more »

New report: “The Asilomar Conference: A Case Study in Risk Mitigation”

Filed under Papers.

Today we release a new report by Katja Grace, “The Asilomar Conference: A Case Study in Risk Mitigation” (PDF, 67pp). The 1975 Asilomar Conference on Recombinant DNA is sometimes cited as an example of successful action by scientists who preemptively identified an emerging technology’s potential dangers and intervened to mitigate the risk. We conducted this investigation to… Read more »

Rationality: From AI to Zombies

Filed under News.

Between 2006 and 2009, senior MIRI researcher Eliezer Yudkowsky wrote several hundred essays for the blogs Overcoming Bias and Less Wrong, collectively called “the Sequences.” With two days remaining until Yudkowsky concludes his other well-known rationality book, Harry Potter and the Methods of Rationality, we are releasing around 340 of his original blog posts as a series of six books,…

Davis on AI capability and motivation

Filed under Analysis.

In a review of Superintelligence, NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily resist and outsmart the united efforts of eight billion people” and achieve “virtual omnipotence,” and… Read more »

Brooks and Searle on AI volition and timelines

Filed under Analysis.

Nick Bostrom’s concerns about the future of AI have sparked a busy public discussion. His arguments were echoed by leading AI researcher Stuart Russell in “Transcending complacency on superintelligent machines” (co-authored with Stephen Hawking, Max Tegmark, and Frank Wilczek), and a number of journalists, scientists, and technologists have subsequently chimed in. Given the topic’s complexity,… Read more »