In “2018 Update: Our New Research Directions,” Nate Soares discusses MIRI’s new research; our focus on “deconfusion”; some of the thinking behind our decision to default to nondisclosure on new results; and why more people than you might think should come join the MIRI team!
Additionally, our 2018 fundraiser begins today! To kick things off, we’ll be participating in three separate matching campaigns, all centered on Giving Tuesday (Nov. 27); details are in our fundraiser post.
Other updates
- New alignment posts: A Rationality Condition for CDT Is That It Equal EDT (1, 2); Standard ML Oracles vs. Counterfactual Ones; Addressing Three Problems with Counterfactual Corrigibility; When EDT=CDT, ADT Does Well. See also Paul Christiano’s EDT vs. CDT (1, 2).
- Embedded Agency: Scott Garrabrant and Abram Demski’s full sequence is up! The posts serve as our new core introductory resource for MIRI’s Agent Foundations research.
- “Sometimes people ask me what math they should study in order to get into agent foundations. My first answer is that I have found the introductory class in every subfield to be helpful, but I have found the later classes to be much less helpful. My second answer is to learn enough math to understand all fixed point theorems….” In Fixed Point Exercises, Scott provides exercises for getting into MIRI’s Agent Foundations research; a representative fixed point theorem is sketched after this list.
- MIRI is seeking applicants for a new series of AI Risk for Computer Scientists workshops, aimed at technical people who want to think harder about AI alignment.
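As one concrete example of the kind of result Scott has in mind, here is a standard statement of the Banach fixed point theorem; this is a sketch for orientation on our part, not an excerpt from the exercises themselves:

```latex
\textbf{Theorem (Banach fixed point).}
Let $(X, d)$ be a non-empty complete metric space, and let
$f : X \to X$ be a contraction, i.e.\ there exists $q \in [0, 1)$ such that
\[
  d(f(x), f(y)) \le q \, d(x, y) \qquad \text{for all } x, y \in X.
\]
Then $f$ has a unique fixed point $x^{*} \in X$ (that is, $f(x^{*}) = x^{*}$),
and for every starting point $x_0 \in X$ the iterates
$x_{n+1} = f(x_n)$ converge to $x^{*}$.
```

Theorems in this family recur throughout agent foundations, roughly because embedded agents must reason about self-referential maps (e.g., beliefs about environments that contain the believer) whose consistent solutions are fixed points.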
News and links
- Vox unveils Future Perfect, a new section of their site focused on effective altruism.
- 80,000 Hours interviews Paul Christiano, including discussion of MIRI/Paul disagreements and Paul’s approach to AI alignment research.
- 80,000 Hours surveys effective altruism orgs on their hiring needs.
- From Christiano, Buck Shlegeris, and Dario Amodei: Learning Complex Goals with Iterated Amplification (arXiv paper).