You want to figure something out, but you don’t know how to do that yet. You have to somehow break up the task into sub-computations. There is no atomic act of “thinking”; intelligence must be built up of...
Because the world is big, the agent as it is may be inadequate to accomplish its goals, including in its ability to think. Because the agent is made of parts, it can improve itself and become more capable. Improvements...
An agent which is larger than its environment can: hold an exact model of the environment in its head; think through the consequences of every potential course of action; and, if it doesn’t know the environment perfectly, hold...
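To make that contrast concrete, here is a minimal sketch (in Python, against an invented toy environment; none of the states, actions, or rewards come from the original post) of what an agent that really is larger than its environment can do: store the exact transition model and think through every course of action before committing to one.

```python
# Illustrative sketch only: a toy "agent larger than its environment".
# The environment, states, and rewards below are invented for this example.
from itertools import product

# An exact model of a tiny environment: state -> action -> (next_state, reward).
MODEL = {
    "start":    {"left": ("dead_end", 0.0), "right": ("hall", 1.0)},
    "dead_end": {"left": ("dead_end", 0.0), "right": ("start", 0.0)},
    "hall":     {"left": ("start", 0.0),    "right": ("goal", 10.0)},
    "goal":     {"left": ("goal", 0.0),     "right": ("goal", 0.0)},
}

def total_reward(plan, state="start"):
    """Simulate a fixed plan against the exact model and return its total reward."""
    total = 0.0
    for action in plan:
        state, reward = MODEL[state][action]
        total += reward
    return total

# "Think through the consequences of every potential course of action":
# enumerate every possible three-step plan and pick the best one.
all_plans = product(["left", "right"], repeat=3)
best_plan = max(all_plans, key=total_reward)
print(best_plan, total_reward(best_plan))
```

An embedded agent cannot do this: the real world is bigger than any model it can hold, and the space of courses of action is too large to enumerate, which is the tension this excerpt is pointing at.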
Decision theory and artificial intelligence typically try to compute something resembling $$\underset{a \ \in \ Actions}{\mathrm{argmax}} \ \ f(a).$$ I.e., maximize some function of the action. This tends to assume that we can detangle things enough to see...
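As a purely illustrative reading of that formula, the sketch below computes $\mathrm{argmax}_{a \in Actions} f(a)$ over a small invented action set; the action names and the payoff function f are assumptions for the example, not part of the source.

```python
# Minimal sketch of argmax over a finite action set; actions and payoffs are invented.

def f(action: str) -> float:
    """Toy evaluation function: how good the agent expects each action to be."""
    payoffs = {"wait": 0.0, "gather_data": 1.2, "act_now": 0.8}
    return payoffs[action]

actions = ["wait", "gather_data", "act_now"]

# argmax_{a in Actions} f(a): score every action, take the best.
best_action = max(actions, key=f)
print(best_action)  # -> "gather_data"
```

The load-bearing assumption is that f can score each action in isolation, i.e. that outcomes really are a clean function of the agent’s action, detangled from the agent itself.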
Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know. ((This is part...
The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start. (Somewhere in a not-very-near neighboring world, where science took a very different course…) ALFONSO: Hello, Beth. I’ve noticed a lot of...