Blog

Author: Eliezer Yudkowsky

AI Alignment: Why It’s Hard, and Where to Start

Back in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled “The AI Alignment Problem: Why It’s Hard, and Where to Start.” The video for this talk is now available on YouTube: ...

Five theses, two lemmas, and a couple of strategic implications

MIRI’s primary concern about self-improving AI isn’t so much that it might be created by ‘bad’ actors rather than ‘good’ actors in the global sphere; rather, most of our concern is with remedying the situation in which no one knows...

Three Major Singularity Schools

I’ve noticed that Singularity discussions seem to be splitting into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.

...