MIRI Updates

MIRI’s November Newsletter

Research updates: New Friendly AI research area: “Corrigibility.” New report: “UDT with known search order.” Two new analyses: “AGI outcomes and civilizational competence” and “The Financial Times story on MIRI.” Video of Nate Soares’ decision theory talk at...

The Financial Times story on MIRI

Richard Waters wrote a story on MIRI and others for the Financial Times, which also put Nick Bostrom’s Superintelligence at the top of its summer science reading list. It’s a good piece. Go read it and then come back here so...

New report: “UDT with known search order”

Today we release a new technical report from MIRI research associate Tsvi Benson-Tilsen: “UDT with known search order.” Abstract: We consider logical agents in a predictable universe running a variant of updateless decision theory. We give an algorithm to predict...
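
The excerpt above only names the setting, so as background (and not as the report’s algorithm) here is a minimal runnable sketch of the generic idea behind proof-based updateless decision theory: an agent steps through candidate claims of the form “if I output action a, I get utility u” in a fixed, known search order and acts on the best claim found. The function udt_agent, its arguments, and the example payoffs are hypothetical illustrations, not taken from Benson-Tilsen’s report.

    # Toy sketch only: NOT the algorithm from the report, just a generic
    # illustration of a proof-search-style updateless agent.
    from typing import List, Tuple

    Action = str

    def udt_agent(search_results: List[Tuple[Action, float]],
                  actions: List[Action],
                  default: Action) -> Action:
        """search_results stands in for a proof search over statements of
        the form "if the agent outputs action a, utility is u", listed in
        the fixed order the search would discover them."""
        best_action, best_utility = default, float("-inf")
        for action, utility in search_results:  # known, fixed search order
            if action in actions and utility > best_utility:
                best_action, best_utility = action, utility
        return best_action

    # Example: in a toy predictable universe, the search finds two claims.
    claims = [("cooperate", 3.0), ("defect", 1.0)]
    print(udt_agent(claims, actions=["cooperate", "defect"], default="defect"))
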

Singularity2014.com appears to be a fake

Earlier today I was alerted to the existence of Singularity2014.com (archived screenshot). MIRI has nothing to do with that website and we believe it is a fake. The website claims there is a “Singularity 2014” conference “in the Bay Area”...

New paper: “Corrigibility”

Today we release a paper describing a new problem area in Friendly AI research that we call corrigibility. The report (PDF) is co-authored by MIRI’s Friendly AI research team (Eliezer Yudkowsky, Benja Fallenstein, and Nate Soares) and Stuart Armstrong from the...

AGI outcomes and civilizational competence

The [latest IPCC] report says, “If you put into place all these technologies and international agreements, we could still stop warming at [just] 2 degrees.” My own assessment is that the kinds of actions you’d need to do that are...
