MIRI Updates
John Horgan interviews Eliezer Yudkowsky
Scientific American writer John Horgan recently interviewed MIRI’s senior researcher and co-founder, Eliezer Yudkowsky. The email interview touched on a wide range of topics, from politics and religion to existential risk and Bayesian models of rationality. Although Eliezer isn’t speaking...
New paper: “Defining human values for value learners”
MIRI Research Associate Kaj Sotala recently presented a new paper, “Defining Human Values for Value Learners,” at the AAAI-16 AI, Society and Ethics workshop. The abstract reads: Hypothetical “value learning” AIs learn human values and then try to act according...
February 2016 Newsletter
Research updates: New at IAFF: Thoughts on Logical Dutch Book Arguments; Another View of Quantilizers: Avoiding Goodhart’s Law; Another Concise Open Problem. General updates: Fundraiser and grant successes: MIRI will be working with AI pioneer Stuart Russell and a to-be-determined...
End-of-the-year fundraiser and grant successes
Our winter fundraising drive has concluded. Thank you all for your support! Through the month of December, 175 distinct donors gave a total of $351,298. Between this fundraiser and our summer fundraiser, which brought in $630k, we’ve seen a surge...
January 2016 Newsletter
Research updates: A new paper, “Proof-Producing Reflection for HOL”; a new analysis, Safety Engineering, Target Selection, and Alignment Theory. New at IAFF: What Do We Need Value Learning For?; Strict Dominance for the Modified Demski Prior; Reflective Probability Distributions and...
Safety engineering, target selection, and alignment theory
Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems, at various capability...