August 2015 Newsletter

Filed under Newsletters.

Research updates: We’ve rewritten the first and last sections of the main paper summarizing our research program. This version of the paper will also be published, with minor changes, in the Springer anthology The Technological Singularity. New analyses: Four Background Claims; MIRI’s Approach. New at AI Impacts: Conversation with Steve Potter; Costs of Human-Level Hardware… Read more »

A new MIRI FAQ, and other announcements

Filed under News.

MIRI is at Effective Altruism Global! A number of the talks can be watched online at the EA Global Livestream. We have a new MIRI Frequently Asked Questions page, which we’ll be expanding as we continue getting new questions over the next four weeks. Questions covered so far include “Why is safety important for smarter-than-human… Read more »

July 2015 Newsletter

Filed under Newsletters.

Hello, all! I’m Rob Bensinger, MIRI’s Outreach Coordinator. I’ll be keeping you updated on MIRI’s activities and on relevant news items. If you have feedback or questions, you can get in touch with me by email. Research updates: A new paper, "The Asilomar Conference: A Case Study in Risk Mitigation." New at AI Impacts: Update… Read more »

New report: “The Asilomar Conference: A Case Study in Risk Mitigation”

Filed under Papers.

Today we release a new report by Katja Grace, “The Asilomar Conference: A Case Study in Risk Mitigation” (PDF, 67pp). The 1975 Asilomar Conference on Recombinant DNA is sometimes cited as an example of successful action by scientists who preemptively identified an emerging technology’s potential dangers and intervened to mitigate the risk. We conducted this investigation to… Read more »

Rationality: From AI to Zombies

Filed under News.

Between 2006 and 2009, senior MIRI researcher Eliezer Yudkowsky wrote several hundred essays for the blogs Overcoming Bias and Less Wrong, collectively called “the Sequences.” With two days remaining until Yudkowsky concludes his other well-known rationality book, Harry Potter and the Methods of Rationality, we are releasing around 340 of his original blog posts as a series of six books… Read more »

Davis on AI capability and motivation

Filed under Analysis.

In a review of Superintelligence, NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily resist and outsmart the united efforts of eight billion people” and achieve “virtual omnipotence,” and… Read more »

Brooks and Searle on AI volition and timelines

Filed under Analysis.

Nick Bostrom’s concerns about the future of AI have sparked a busy public discussion. His arguments were echoed by leading AI researcher Stuart Russell in “Transcending complacency on superintelligent machines” (co-authored with Stephen Hawking, Max Tegmark, and Frank Wilczek), and a number of journalists, scientists, and technologists have subsequently chimed in. Given the topic’s complexity,… Read more »

Groundwork for AGI safety engineering

Filed under Analysis.

Improvements in AI are resulting in the automation of increasingly complex and creative human behaviors. Given enough time, we should expect artificial reasoners to begin rivaling humans in arbitrary domains, culminating in artificial general intelligence (AGI). A machine would qualify as an ‘AGI’, in the intended sense, if it could adapt to a very… Read more »