MIRI Updates
In a review of Superintelligence, NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily...
Today we release a new annotated bibliography accompanying our new technical agenda, written by Nate Soares. If you’d like to discuss the paper, please do so here. Abstract: How could superintelligent systems be aligned with the interests of humanity? This...
As requested, we now offer email notification of new technical (math or computer science) papers and reports from MIRI. Simply subscribe to the mailing list below. This list sends one email per new technical paper, and contains only the paper’s...
Research Updates Four new reports in support of our new technical agenda overview, on logical uncertainty, Vingean reflection, realistic world models, and value learning. AI Impacts site relaunched with new content. Reply to Rodney Brooks and John Searle on AI...
Today we release a new technical report by Nate Soares, “The value learning problem.” If you’d like to discuss the paper, please do so here. Abstract: A superintelligent machine would not automatically act as intended: it will act as programmed,...
Today we release a new technical report by Nate Soares, “Formalizing two problems of realistic world models.” If you’d like to discuss the paper, please do so here. Abstract: An intelligent agent embedded within the real world must reason about...