Rationality: From AI to Zombies


Between 2006 and 2009, senior MIRI researcher Eliezer Yudkowsky wrote several hundred essays for the blogs Overcoming Bias and Less Wrong, collectively called “the Sequences.” With two days remaining until Yudkowsky concludes his other well-known rationality work, Harry Potter and the Methods of Rationality, we are releasing around 340 of his original blog posts as a series of six books…

Davis on AI capability and motivation


In a review of Superintelligence, NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily resist and outsmart the united efforts of eight billion people” and achieve “virtual omnipotence,” and…

Brooks and Searle on AI volition and timelines


Nick Bostrom’s concerns about the future of AI have sparked a busy public discussion. His arguments were echoed by leading AI researcher Stuart Russell in “Transcending complacency on superintelligent machines” (co-authored with Stephen Hawking, Max Tegmark, and Frank Wilczek), and a number of journalists, scientists, and technologists have subsequently chimed in. Given the topic’s complexity…

Groundwork for AGI safety engineering


Improvements in AI are resulting in the automation of increasingly complex and creative human behaviors. Given enough time, we should expect artificial reasoners to begin to rival humans in arbitrary domains, culminating in artificial general intelligence (AGI). A machine would qualify as an ‘AGI’, in the intended sense, if it could adapt to a very…