MIRI Updates
December 2021 Newsletter
MIRI is offering $200,000 to build a dataset of AI-dungeon-style writing annotated with the thoughts used in the writing process, and a further $1,000,000 for scaling that dataset 10x: the Visible Thoughts Project. Additionally, MIRI is in the...
Ngo’s view on alignment difficulty
This post features a write-up by Richard Ngo on his views, with inline comments.
Conversation on technology forecasting and gradualism
This post is a transcript of a multi-day discussion between Paul Christiano, Richard Ngo, Eliezer Yudkowsky, Rob Bensinger, Holden Karnofsky, Rohin Shah, Carl Shulman, Nate Soares, and Jaan Tallinn, following up on the Yudkowsky/Christiano debate in 1, 2, 3,...
More Christiano, Cotra, and Yudkowsky on AI progress
This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard Ngo, and Carl Shulman), continuing from 1, 2, and 3.
Shulman and Yudkowsky on AI progress
This post is a transcript of a discussion between Carl Shulman and Eliezer Yudkowsky, following up on a conversation with Paul Christiano and Ajeya Cotra.
Biology-Inspired AGI Timelines: The Trick That Never Works
– 1988 – Hans Moravec: Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it “Artificial General Intelligence” because this term will not be coined for another...