MIRI Updates

Davis on AI capability and motivation

In a review of Superintelligence, NYU computer scientist Ernest Davis voices disagreement with a number of claims he attributes to Nick Bostrom: that “intelligence is a potentially infinite quantity with a well-defined, one-dimensional value,” that a superintelligent AI could “easily...

New annotated bibliography for MIRI’s technical agenda

Today we release a new annotated bibliography by Nate Soares, accompanying our new technical agenda. If you’d like to discuss the paper, please do so here. Abstract: How could superintelligent systems be aligned with the interests of humanity? This...

New mailing list for MIRI math/CS papers only

As requested, we now offer email notification of new technical (math or computer science) papers and reports from MIRI. Simply subscribe to the mailing list below. This list sends one email per new technical paper, and contains only the paper’s...

February 2015 Newsletter

Research Updates: Four new reports in support of our new technical agenda overview, on logical uncertainty, Vingean reflection, realistic world models, and value learning. AI Impacts site relaunched with new content. Reply to Rodney Brooks and John Searle on AI...

New report: “The value learning problem”

Today we release a new technical report by Nate Soares, “The value learning problem.” If you’d like to discuss the paper, please do so here. Abstract: A superintelligent machine would not automatically act as intended: it will act as programmed,...

New report: “Formalizing Two Problems of Realistic World Models”

Today we release a new technical report by Nate Soares, “Formalizing two problems of realistic world models.” If you’d like to discuss the paper, please do so here. Abstract: An intelligent agent embedded within the real world must reason about...
