MIRI Updates
You want to figure something out, but you don’t know how to do that yet. You have to somehow break up the task into sub-computations. There is no atomic act of “thinking”; intelligence must be built up of...
Because the world is big, the agent as it is may be inadequate to accomplish its goals, including in its ability to think. Because the agent is made of parts, it can improve itself and become more capable. Improvements...
An agent which is larger than its environment can: hold an exact model of the environment in its head; think through the consequences of every potential course of action; if it doesn’t know the environment perfectly, hold...
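As a toy illustration of that first picture (not drawn from the post), the sketch below gives an agent an exact transition model of a three-state environment small enough to simulate exhaustively; the states, actions, and rewards are all invented for illustration:

```python
# A toy "agent larger than its environment": the environment is a tiny
# deterministic transition table the agent holds exactly, so it can
# think through the consequences of every course of action by brute force.
# States, actions, and rewards here are hypothetical, for illustration only.
from itertools import product

ACTIONS = ["a", "b"]

# Exact model: (state, action) -> (next_state, reward)
MODEL = {
    ("s0", "a"): ("s1", 0.0), ("s0", "b"): ("s2", 1.0),
    ("s1", "a"): ("s2", 2.0), ("s1", "b"): ("s0", 0.0),
    ("s2", "a"): ("s2", 0.0), ("s2", "b"): ("s2", 0.0),
}

def total_reward(plan, state="s0"):
    """Simulate a plan inside the exact model and sum its rewards."""
    total = 0.0
    for action in plan:
        state, reward = MODEL[(state, action)]
        total += reward
    return total

# Enumerate *every* 3-step course of action and pick the best one.
best_plan = max(product(ACTIONS, repeat=3), key=total_reward)
print(best_plan, total_reward(best_plan))  # ('a', 'a', 'a') 2.0
```

An embedded agent, by contrast, is smaller than its environment and cannot enumerate it this way.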
Decision theory and artificial intelligence typically try to compute something resembling $$\underset{a \ \in \ Actions}{\mathrm{argmax}} \ \ f(a).$$ I.e., maximize some function of the action. This tends to assume that we can detangle things enough to see...
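To make that pattern concrete, here is a minimal sketch of the argmax computation, assuming a finite, enumerable action set and an explicitly given real-valued objective f; the action names and payoff values are invented for illustration:

```python
# A minimal sketch of "argmax over actions of f(a)", assuming the action
# set is finite and f is an explicit real-valued function. The payoff
# table below is a hypothetical stand-in for f.
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")

def argmax(actions: Iterable[A], f: Callable[[A], float]) -> A:
    """Return an action maximizing f, by evaluating every candidate."""
    return max(actions, key=f)

payoffs = {"left": 1.0, "right": 3.5, "wait": 2.0}
best = argmax(payoffs, payoffs.get)
print(best)  # -> right
```

The point of the post, of course, is that an embedded agent usually cannot detangle the world into such a cleanly separated action set and objective in the first place.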
The AI Alignment Forum has left beta! Dovetailing with the launch, MIRI researchers Scott Garrabrant and Abram Demski will be releasing a new sequence introducing our research over the coming week, beginning here: Embedded Agents. (Shorter illustrated version here.) Other...
This is a guest post by Oliver Habryka, lead developer for LessWrong. Our gratitude to the LessWrong team for the hard work they’ve put into developing this resource, and our congratulations on today’s launch! I am happy to announce that...