Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an IEEE recommendations document with a section on artificial general intelligence that we helped draft.
Research updates
General updates
- I’m happy to announce that our informal November/December fundraising push was a success, with donations totaling ~$450,000! To all of our supporters, on MIRI’s behalf: thank you. Special thanks to Raising for Effective Giving, which contributed a total of ~$96,000 to our fundraiser and our end-of-the-year push.
- Open Philanthropy Project staff and 80,000 Hours highlight MIRI, the Future of Humanity Institute, and a number of other organizations as good giving opportunities for people still considering their donation options.
- Andrew Critch spoke at the annual meeting of the Society for Risk Analysis (slides). We also attended the Cambridge Conference on Catastrophic Risk and NIPS; see DeepMind researcher Viktoriya Krakovna’s NIPS safety paper highlights.
- MIRI Executive Director Nate Soares gave a talk on logical induction at EAGxOxford, and participated in a panel discussion on “The Long-Term Situation in AI” with Krakovna, Demis Hassabis, Toby Ord, and Murray Shanahan.
- Intelligence in Literature Prize: We’re helping administer a $100 prize each month to the best new fiction touching on ideas related to intelligence, AI, and the alignment problem. Send your submissions to intelligenceprize@gmail.com.
News and links