We have released a new paper on logical uncertainty, co-authored by Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Garrabrant, George Koleszarik, and Evan Lloyd: “Asymptotic Logical Uncertainty and the Benford Test.” Garrabrant gives some background on his approach to logical...
Research updates. New analyses: When AI Accelerates AI; Powerful Planners, Not Sentient Software. New at AI Impacts: Research Bounties; AI Timelines and Strategies. New at IAFF: Uniform Coherence 2; The Two-Update Problem. Andrew Critch, a CFAR cofounder, mathematician, and former...
MIRI is a research nonprofit specializing in a poorly-explored set of problems in theoretical computer science. GiveDirectly is a cash transfer service that gives money to poor households in East Africa. What kind of conference would bring together representatives from...
We’ve received several thoughtful questions in response to our fundraising post to the Effective Altruism Forum and our new FAQ. From quant trader Maxwell Fritz: My snap reaction to MIRI’s pitches has typically been, “yeah, AI is a real concern....
Last week, Nate Soares outlined his case for prioritizing long-term AI safety work: 1. Humans have a fairly general ability to make scientific and technological progress. The evolved cognitive faculties that make us good at organic chemistry overlap heavily with...
Research updates. We’ve rewritten the first and last sections of the main paper summarizing our research program. This version of the paper will also be published, with minor changes, in the Springer anthology The Technological Singularity. New analyses: Four Background...