Category: Analysis

Security Mindset and Ordinary Paranoia

The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (AMBER, a philanthropist interested in a more reliable Internet, and CORAL, a computer security professional, are at a conference hotel together...

AlphaGo Zero and the Foom Debate

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of...

There’s No Fire Alarm for Artificial General Intelligence

What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit...

Ensuring smarter-than-human intelligence has a positive outcome

I recently gave a talk at Google on the problem of aligning smarter-than-human AI with operators’ goals. The talk was inspired by “AI Alignment: Why It’s Hard, and Where to Start,” and serves as an introduction to the...

Using machine learning to address AI risk

At the EA Global 2016 conference, I gave a talk on “Using Machine Learning to Address AI Risk”: It is plausible that future artificial general intelligence systems will share many qualities in common with present-day machine learning systems. If so...

Response to Cegłowski on superintelligence

Web developer Maciej Cegłowski recently gave a talk on AI safety (video, text) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical of the extreme-sounding claims, attitudes, and policies...