MIRI Updates

New report: “The value learning problem”

Today we release a new technical report by Nate Soares, “The value learning problem.” If you’d like to discuss the paper, please do so here. Abstract: A superintelligent machine would not automatically act as intended: it will act as programmed,...

New report: “Formalizing Two Problems of Realistic World Models”

Today we release a new technical report by Nate Soares, “Formalizing two problems of realistic world models.” If you’d like to discuss the paper, please do so here. Abstract: An intelligent agent embedded within the real world must reason about...

New report: “Vingean Reflection: Reliable Reasoning for Self-Improving Agents”

Today we release a new technical report by Benja Fallenstein and Nate Soares, “Vingean Reflection: Reliable Reasoning for Self-Improving Agents.” If you’d like to discuss the paper, please do so here. Abstract: Today, human-level machine intelligence is in the domain...

An improved “AI Impacts” website

Recently, MIRI received a targeted donation to improve the AI Impacts website initially created by frequent MIRI collaborator Paul Christiano and part-time MIRI researcher Katja Grace. Collaborating with Paul and Katja, we ported the old content to a more robust...

New report: “Questions of reasoning under logical uncertainty”

Today we release a new technical report by Nate Soares and Benja Fallenstein, “Questions of reasoning under logical uncertainty.” If you’d like to discuss the paper, please do so here. Abstract: A logically uncertain reasoner would be able to reason...

Brooks and Searle on AI volition and timelines

Nick Bostrom’s concerns about the future of AI have sparked a busy public discussion. His arguments were echoed by leading AI researcher Stuart Russell in “Transcending complacency on superintelligent machines” (co-authored with Stephen Hawking, Max Tegmark, and Frank Wilczek), and...
