Note: This is a preamble to Finite Factored Sets, a sequence I’ll be posting over the next few weeks. This Sunday at noon Pacific time, I’ll be giving a Zoom talk (link) introducing Finite Factored Sets, a framework which I...
This is a joint post by MIRI Research Associate and DeepMind Research Scientist Ramana Kumar and MIRI Research Fellow Scott Garrabrant, cross-posted from the AI Alignment Forum and LessWrong. Human values and preferences are hard to specify, especially in complex...
This is the conclusion of the Embedded Agency series. Previous posts: Embedded Agents — Decision Theory — Embedded World-Models — Robust Delegation — Subsystem Alignment A final word on curiosity, and intellectual puzzles: I described an embedded agent, Emmy,...
You want to figure something out, but you don’t know how to do that yet. You have to somehow break up the task into sub-computations. There is no atomic act of “thinking”; intelligence must be built up of...
Because the world is big, the agent as it is may be inadequate to accomplish its goals, including in its ability to think. Because the agent is made of parts, it can improve itself and become more capable. Improvements...
An agent which is larger than its environment can: Hold an exact model of the environment in its head. Think through the consequences of every potential course of action. If it doesn’t know the environment perfectly, hold...