Today, Scott Garrabrant begins posting Cartesian Frames, a sequence introducing a new conceptual framework he has found valuable for thinking about agency.
In Scott's words: Cartesian Frames are “applying reasoning like Pearl's to objects like game theory's, with a motivation like Hutter's”.
Scott will be giving an online talk introducing Cartesian Frames this Sunday at 12pm PT on Zoom (link). He'll also be hosting office hours on Gather.Town for the next four Sundays; see here for details.
Other MIRI updates
- Abram Demski discusses the problem of comparing utilities, highlighting some non-obvious implications.
- In March 2020, the US Congress passed the CARES Act, which changes the tax advantages of donations made in 2020 to qualifying nonprofit organizations (NPOs) like MIRI. Changes include:
- 1. A new “above the line” tax deduction: up to $300 per taxpayer ($600 for a married couple) in annual charitable contributions for people who take the standard deduction. Donations to donor-advised funds (DAFs) do not qualify for this new deduction.
- 2. New charitable deduction limits: Taxpayers who itemize their deductions can now deduct a much larger share of their contributions. Individuals can elect to deduct donations of up to 100% of their 2020 adjusted gross income (AGI), up from 60% previously. This higher limit likewise does not apply to donations to DAFs.
As usual, consult with your tax advisor for more information.
- Our fundraiser this year will start on November 29 (two days before Giving Tuesday) and finish on January 2. We're hoping that having our fundraiser straddle 2020 and 2021 will give people more flexibility given the unusual tax law changes above.
- I'm happy to announce that MIRI has received a donation of $246,435 from an anonymous returning donor. Our thanks to the donor, and to Effective Giving UK for facilitating this donation!
News and links
- Richard Ngo tries to provide a relatively careful and thorough version of the standard argument for worrying about AGI risk: AGI Safety from First Principles. See also Rohin Shah's summary.
- The AI Alignment Podcast interviews Andrew Critch about his recent overview paper, “AI Research Considerations for Human Existential Safety.”