Edward Kmett has joined the MIRI team! Edward is a prominent Haskell developer who popularized the use of lenses for functional programming, and he currently maintains many core libraries in the Haskell ecosystem.
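For readers unfamiliar with lenses: a lens bundles a getter and a setter for one piece of a data structure into a single composable value. As a purely illustrative sketch (the Point type and field names below are invented for this example, not drawn from Edward’s libraries), basic use of the lens library looks like this:

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Control.Lens

-- A toy record type; the leading underscores let makeLenses derive lens names.
data Point = Point { _px :: Double, _py :: Double } deriving Show

makeLenses ''Point  -- generates px, py :: Lens' Point Double

main :: IO ()
main = do
  let p = Point 1 2
  print (view px p)       -- 1.0: read a field through a lens
  print (set py 10 p)     -- Point {_px = 1.0, _py = 10.0}: pure functional update
  print (over px (+1) p)  -- Point {_px = 2.0, _py = 2.0}: modify a field
```

Because lenses are ordinary values, they compose with (.), which is what makes them convenient for reading and updating deeply nested structures.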
I’m also happy to announce another new recruit: James Payor. James joins the MIRI research team after three years at Draftable, a software startup. He previously studied math and CS at MIT, and holds a silver medal from the International Olympiad in Informatics, one of the world’s most prestigious CS competitions.
In other news, today we released a new edition of Rationality: From AI to Zombies, with a number of textual revisions and (for the first time) a print edition!
Finally, our 2018 fundraiser has passed the halfway mark on our first target! (And there’s currently $136,000 available in dollar-for-dollar donor matching through the Double Up Drive!)
Other updates
- A new paper from Stuart Armstrong and Sören Mindermann: “Occam’s Razor is Insufficient to Infer the Preferences of Irrational Agents.”
- New AI Alignment Forum posts: Kelly Bettors; Bounded Oracle Induction
- OpenAI’s Jack Clark and Axios discuss research-sharing in AI, following up on our 2018 Update post.
- A throwback post from Eliezer Yudkowsky: Should Ethicists Be Inside or Outside a Profession?
News and links
- New from the DeepMind safety team: Jan Leike’s Scalable Agent Alignment via Reward Modeling (arXiv) and Victoria Krakovna’s Discussion on the Machine Learning Approach to AI Safety.
- Two recently released core Alignment Forum sequences: Rohin Shah’s Value Learning and Paul Christiano’s Iterated Amplification.
- On the 80,000 Hours Podcast, Catherine Olsson and Daniel Ziegler discuss paths for ML engineers to get involved in AI safety.
- Nick Bostrom has a new paper out: “The Vulnerable World Hypothesis.”