Embedded World-Models
An agent which is larger than its environment can:
- Hold an exact model of the environment in its head.
- Think through the consequences of every potential course of action.
- If it doesn’t know the environment perfectly, hold in its head every possible way the environment could be, as is the case with Bayesian uncertainty.
All three of these are typical of standard notions of rational agency.
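To make the dualistic picture concrete, here is a minimal sketch in Python of the third bullet: an agent that literally enumerates every way a tiny binary environment could be and does a Bayesian update on its observations. (The three-step environment, the uniform prior, and the function names are assumptions of the sketch, not anything from the discussion above.)

```python
from itertools import product

def all_environments(n_steps):
    """Every possible deterministic binary observation sequence of length n_steps."""
    return list(product([0, 1], repeat=n_steps))

def bayes_update(prior, observations):
    """Keep each hypothesis only to the extent it predicts the observations so far."""
    posterior = {}
    for hypothesis, weight in prior.items():
        consistent = all(hypothesis[t] == obs for t, obs in enumerate(observations))
        posterior[hypothesis] = weight if consistent else 0.0
    total = sum(posterior.values())
    if total == 0:
        raise ValueError("the observations ruled out every hypothesis")
    return {h: w / total for h, w in posterior.items()}

hypotheses = all_environments(n_steps=3)
prior = {h: 1 / len(hypotheses) for h in hypotheses}   # uniform over all possible worlds
print(bayes_update(prior, observations=(1, 0)))        # mass splits over (1,0,0) and (1,0,1)
```

Note that the table of hypotheses has 2^n entries, so the sketch only works because the agent is vastly bigger than the three-bit world it is reasoning about.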
An embedded agent can’t do any of those things, at least not in any straightforward way.
One difficulty is that, since the agent is part of the environment, modeling the environment in every detail would require the agent to model itself in every detail, which would require the agent’s self-model to be as “big” as the whole agent. An agent can’t fit inside its own head.
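One way to see the blow-up: treat an exact model as something at least as large as whatever it describes. The model must cover the environment outside the agent plus the agent itself, and the agent contains the model, so the required size has no finite fixed point. The toy sketch below just iterates that constraint; the particular sizes are arbitrary placeholders, not anything from the argument above.

```python
def required_model_size(outside_world_bits, agent_overhead_bits, rounds=5):
    """Iterate the constraint: model >= outside world + agent overhead + model itself."""
    model = 0
    for i in range(rounds):
        model = outside_world_bits + agent_overhead_bits + model
        print(f"round {i}: the exact model must hold at least {model} bits")
    return model

# Arbitrary placeholder sizes; the point is only that the requirement keeps growing.
required_model_size(outside_world_bits=10_000, agent_overhead_bits=100)
```

The requirement grows without bound, so a finite embedded agent has to settle for some compressed, lossy self-model rather than an exact one.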
The lack of a crisp agent/environment boundary forces us to grapple with paradoxes of self-reference. As if representing the rest of the world weren’t already hard enough.
Embedded World-Models have to represent the world in a way more appropriate for embedded agents. Problems in this cluster include:
- the “realizability” / “grain of truth” problem: the real world isn’t in the agent’s hypothesis space (a toy version is sketched after this list)
- logical uncertainty
- high-level models
- multi-level models
- ontological crises
- naturalized induction, the problem that the agent must incorporate its model of itself into its world-model
- anthropic reasoning, the problem of reasoning about how many copies of yourself exist
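To make the first item on this list a bit more tangible, here is a hedged toy version of the realizability problem (the alternating environment, the three-coin hypothesis class, and the uniform prior are all inventions of this sketch): the true world is perfectly predictable, but it isn’t in the agent’s hypothesis space, so Bayesian updating converges on the least-bad hypothesis and still predicts poorly.

```python
truth = [t % 2 for t in range(100)]              # 0, 1, 0, 1, ... (not an i.i.d. coin)
biases = [0.1, 0.5, 0.9]                         # the agent's entire hypothesis space
posterior = {b: 1 / len(biases) for b in biases} # start from a uniform prior

for obs in truth:
    # Bayes' rule: reweight each coin hypothesis by how likely it makes this observation.
    likelihoods = {b: (b if obs == 1 else 1 - b) for b in biases}
    unnormalized = {b: posterior[b] * likelihoods[b] for b in biases}
    z = sum(unnormalized.values())
    posterior = {b: w / z for b, w in unnormalized.items()}

print(posterior)  # nearly all the mass lands on the 0.5 coin
```

The posterior ends up essentially certain of the fair coin, yet that hypothesis assigns probability 0.5 to a bit that is completely determined by the pattern. No amount of further data fixes this, because the “grain of truth” is missing from the hypothesis space.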