MIRI Updates
Robust Cooperation: A Case Study in Friendly AI Research
The paper “Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic” is among the clearer examples of theoretical progress produced by explicitly FAI-related research goals. What can we learn from this case study in Friendly AI research? How...
Two MIRI talks from AGI-11
Thanks in part to the volunteers at MIRI Volunteers, we can now release the videos, slides, and transcripts for two talks delivered at AGI-11. Both talks represent joint work by Anna Salamon and Carl Shulman, who were MIRI staff at...
Mike Frank on reversible computing
Michael P. Frank received his Bachelor of Science degree in Symbolic Systems from Stanford University in 1991, and his Master of Science and Doctor of Philosophy degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in...
Emil Vassev on Formal Verification
Dr. Emil Vassev received his M.Sc. in Computer Science (2005) and his Ph.D. in Computer Science (2008) from Concordia University, Montreal, Canada. Currently, he is a research fellow at Lero (the Irish Software Engineering Research Centre) at the University of Limerick,...
How Big is the Field of Artificial Intelligence? (initial findings)
Co-authored with Jonah Sinick. How big is the field of AI, and how big was it in the past? This question is relevant to several issues in AGI safety strategy. To name just two examples: AI forecasting. Some people forecast...
Existential Risk Strategy Conversation with Holden Karnofsky
On January 16th, 2014, MIRI met with Holden Karnofsky to discuss existential risk strategy. The participants were: Eliezer Yudkowsky (research fellow at MIRI) Luke Muehlhauser (executive director at MIRI) Holden Karnofsky (co-CEO at GiveWell) We recorded and transcribed the conversation,...