MIRI updates
- MIRI's Eliezer Yudkowsky and Evan Hubinger comment in some detail on Ajeya Cotra's The Case for Aligning Narrowly Superhuman Models. The conversation touches on some of the more important alignment research views at MIRI, such as the view that alignment requires a thorough understanding of AGI systems' reasoning "under the hood", and the view that early AGI systems should avoid modeling humans where possible.
- From Eliezer Yudkowsky: A Semitechnical Introductory Dialogue on Solomonoff Induction. (Also discussed by Richard Ngo.)
- MIRI research associate Vanessa Kosoy discusses infra-Bayesianism on AXRP, the AI X-risk Research Podcast.
- Eliezer Yudkowsky and Chris Olah discuss ML transparency on social media.
News and links
- Brian Christian, author of The Alignment Problem: Machine Learning and Human Values, discusses his book on the 80,000 Hours Podcast.
- Chris Olah's team releases Multimodal Neurons in Artificial Neural Networks, on artificial neurons that respond to the same concept across conceptually related stimuli (e.g., photographs, drawings, and text of a given subject).
- Vitalik Buterin reflects on Inadequate Equilibria's arguments in the course of discussing prediction market inefficiencies.