Security Mindset and Ordinary Paranoia

Filed under Analysis.

The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (AMBER, a philanthropist interested in a more reliable Internet, and CORAL, a computer security professional, are at a conference hotel together discussing what Coral insists is a difficult and important issue: the difficulty of building “secure”…

AlphaGo Zero and the Foom Debate

Filed under Analysis.

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet. The architecture has been simplified. Previous AlphaGo had a policy net that predicted…

AI Alignment: Why It’s Hard, and Where to Start

Filed under Analysis, Video.

Back in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled “The AI Alignment Problem: Why It’s Hard, And Where To Start.” The video for this talk is now available on YouTube. We have an approximately complete transcript of the talk and Q&A session here, slides…

Three Major Singularity Schools

Filed under Analysis.

I’ve noticed that Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.