MIRI researcher Scott Garrabrant has completed his Cartesian Frames sequence. Scott also covers the contents of the first two posts in video form.
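For readers who haven't followed the sequence, the central object is easy to state: a Cartesian frame over a set of possible worlds W is a set A of agent options and a set E of environment options, together with an evaluation map A × E → W. Below is a minimal illustrative sketch in Python; the toy options and the `world` function are hypothetical examples of ours, not drawn from the posts.

```python
from itertools import product

# A Cartesian frame over a set of worlds W consists of agent options A,
# environment options E, and an evaluation map sending each pair (a, e)
# to a world in W. The toy sets and `world` function below are our own
# hypothetical illustration, not an example from the sequence.
A = ["carry umbrella", "leave umbrella"]   # agent options
E = ["rain", "sun"]                        # environment options

def world(a: str, e: str) -> str:
    """Evaluation map A x E -> W."""
    dry = (a == "carry umbrella") or (e == "sun")
    return f"{e} / {'dry' if dry else 'wet'}"

# A frame is naturally viewed as a matrix: rows indexed by A, columns by E.
for a, e in product(A, E):
    print(f"({a}, {e}) -> {world(a, e)}")
```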
Other MIRI updates
- Contrary to my previous announcement, MIRI won’t be running a formal fundraiser this year, though we’ll still be participating in Giving Tuesday and other matching opportunities. To donate and get information on tax-advantaged donations, employer matching, etc., see intelligence.org/donate. We’ll also be publishing an end-of-year update and retrospective in the next few weeks.
- Facebook's Giving Tuesday matching event takes place this Tuesday (Dec. 1) at 5:00:00 a.m. PT. Facebook will 100%-match the first $2M donated, a pool that will plausibly be exhausted in 2–3 seconds; to get 100%-matched, then, it's even more important than last year to start clicking at 4:59:59 a.m. PT. Facebook will then 10%-match the next $50M of donations (see the sketch after this list for how the two tiers combine). Details on optimizing your donation(s) to MIRI's Facebook Fundraiser can be found at EA Giving Tuesday, a Rethink Charity project.
- Video discussion: Stuart Armstrong, Scott Garrabrant, and the AI Safety Reading Group discuss Stuart's If I Were A Well-Intentioned AI….
- MIRI research associate Vanessa Kosoy raises questions about AI information hazards.
- Buck Shlegeris argues that we're likely at the “hinge of history” (assuming we aren't living in a simulation).
- To make it easier to find and cite old versions of MIRI papers (especially ones that aren't on arXiv), we've collected links to obsolete versions at intelligence.org/revisions.
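As a quick illustration of how the two Giving Tuesday matching tiers above combine, here is a minimal Python sketch; the tier sizes ($2M at 100%, then $50M at 10%) are from the announcement, while the function name and interface are our own.

```python
# Sketch of the two-tier match: 100% on the first $2M donated platform-wide,
# then 10% on the next $50M. The function name/interface are hypothetical.
TIER1_CAP = 2_000_000                # 100%-matched
TIER2_CAP = TIER1_CAP + 50_000_000   # next $50M, 10%-matched

def match_amount(cumulative_before: float, donation: float) -> float:
    """Facebook's match for a donation, given total platform-wide
    donations that have already landed when it goes through."""
    in_tier1 = max(0.0, min(donation, TIER1_CAP - cumulative_before))
    tier2_start = max(cumulative_before, TIER1_CAP)
    in_tier2 = max(0.0, min(cumulative_before + donation, TIER2_CAP) - tier2_start)
    return in_tier1 + 0.10 * in_tier2

# A $1,000 donation landing entirely in the first tier is matched in full;
# the same donation arriving after $2M has gone through is matched at $100.
print(match_amount(0, 1_000))          # 1000.0
print(match_amount(2_000_000, 1_000))  # 100.0
```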
News and links
- CFAR's Anna Salamon asks: Where do (did?) stable, cooperative institutions come from?
- The Center for Human-Compatible AI is accepting applications for research internships through Dec. 13.
- The (virtual) AI Safety Camp is accepting applications through Dec. 15.
- The 4th edition of Artificial Intelligence: A Modern Approach is out, with expanded discussion of the alignment problem.
- DeepMind's Rohin Shah reviews Brian Christian's new book The Alignment Problem: Machine Learning and Human Values.
- Daniel Filan and Rohin Shah discuss security mindset and takeoff speeds.
- Fortune profiles existential risk philanthropist Jaan Tallinn.