Blog

Author: Eliezer Yudkowsky

Challenges to Christiano’s capability amplification proposal

The following is a basically unedited summary I wrote up on March 16 of my take on Paul Christiano’s AGI alignment approach (described in “ALBA” and “Iterated Distillation and Amplification”). Where Paul had comments and replies, I’ve included them...

A reply to Francois Chollet on intelligence explosion

This is a reply to Francois Chollet, the creator of Keras, a wrapper library for the TensorFlow and Theano deep learning frameworks, on his essay “The impossibility of intelligence explosion.” In response to critics of his essay, Chollet tweeted: If...

Security Mindset and the Logistic Success Curve

Follow-up to: Security Mindset and Ordinary Paranoia. (Two days later, Amber returns with another question.) AMBER: Uh, say, Coral. How important is security mindset when you’re building a whole new kind of system—say, one subject to potentially adverse optimization pressures,...

Security Mindset and Ordinary Paranoia

The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (AMBER, a philanthropist interested in a more reliable Internet, and CORAL, a computer security professional, are at a conference hotel together...

AlphaGo Zero and the Foom Debate

AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn’t pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of...

There’s No Fire Alarm for Artificial General Intelligence

What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit...