MIRI updates
- MIRI won’t be running a formal fundraiser this year, though we’ll still be participating in Giving Tuesday and other matching opportunities. Visit intelligence.org/donate to donate, or for information on tax-advantaged donations, employer matching, and more.
- Giving Tuesday takes place on Nov. 30, with matching beginning at 5:00:00am PT. Facebook will match 100% of the first $2M donated, a pool that ran out in under 2 seconds last year. Facebook will then match 10% of the next $60M donated, which will plausibly take 1–3 hours. For details on optimizing your donation(s) to MIRI and other EA organizations, see EA Giving Tuesday, a Rethink Charity project.
News and links
- OpenAI announces a system that "solves about 90% as many [math] problems as real kids: a small sample of 9–12 year olds scored 60% on a test from our dataset, while our system scored 55% on those same problems".
- Open Philanthropy has released a request for proposals "for projects in AI alignment that work with deep learning systems", including interpretability work (write-up by Chris Olah). Apply by Jan. 10.
- The TAI Safety Bibliographic Database now has a convenient frontend, developed by the Quantified Uncertainty Research Institute: AI Safety Papers.
- People who aren't members can now submit content to the AI Alignment Forum. You can find more info at the forum's Welcome & FAQ page.
- The LessWrong Team, now Lightcone Infrastructure, is hiring software engineers for LessWrong and for grantmaking, as well as a generalist to help build an in-person rationality and longtermism campus. You can apply here.
- Redwood Research and Lightcone Infrastructure are hosting a free Jan. 3–22 Machine Learning for Alignment Bootcamp (MLAB) at Constellation. The curriculum is designed by Buck Shlegeris and App Academy co-founder Ned Ruggeri. "Applications are open to anyone who wants to upskill in ML; whether a student, professional, or researcher." Apply by November 15.