October 2015 Newsletter

Research updates

- New paper: Asymptotic Logical Uncertainty and The Benford Test
- New at IAFF: Proof Length and Logical Counterfactuals Revisited; Quantilizers Maximize Expected Utility Subject to a Conservative Cost Constraint

General updates

- As a way to engage more researchers in mathematics, logic, and the methodology of science, Andrew Critch and Tsvi Benson-Tilsen are currently co-running…

New paper: “Asymptotic logical uncertainty and the Benford test”

We have released a new paper on logical uncertainty, co-authored by Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Garrabrant, George Koleszarik, and Evan Lloyd: “Asymptotic logical uncertainty and the Benford test.” Garrabrant gives some background on his approach to logical uncertainty on the Intelligent Agent Foundations Forum: The main goal of logical uncertainty is to…

September 2015 Newsletter

Research updates

- New analyses: When AI Accelerates AI; Powerful Planners, Not Sentient Software
- New at AI Impacts: Research Bounties; AI Timelines and Strategies
- New at IAFF: Uniform Coherence 2; The Two-Update Problem
- Andrew Critch, a CFAR cofounder, mathematician, and former Jane Street trader, joined MIRI as our fifth research fellow this month! As a result…

AI and Effective Altruism

MIRI is a research nonprofit specializing in a poorly-explored set of problems in theoretical computer science. GiveDirectly is a cash transfer service that gives money to poor households in East Africa. What kind of conference would bring together representatives from such disparate organizations — alongside policy analysts, philanthropists, philosophers, and many more? Effective Altruism Global,…

Assessing our past and potential impact

We’ve received several thoughtful questions in response to our fundraising post to the Effective Altruism Forum and our new FAQ. From quant trader Maxwell Fritz: My snap reaction to MIRI’s pitches has typically been, “yeah, AI is a real concern. But I have no idea whether MIRI are the right people to work on it,…

When AI Accelerates AI

Last week, Nate Soares outlined his case for prioritizing long-term AI safety work:

1. Humans have a fairly general ability to make scientific and technological progress. The evolved cognitive faculties that make us good at organic chemistry overlap heavily with the evolved cognitive faculties that make us good at economics, which overlap heavily with the…

August 2015 Newsletter

Research updates

- We’ve rewritten the first and last sections of the main paper summarizing our research program. This version of the paper will also be published with minor changes in the Springer anthology The Technological Singularity.
- New analyses: Four Background Claims; MIRI’s Approach
- New at AI Impacts: Conversation with Steve Potter; Costs of Human-Level Hardware…

A new MIRI FAQ, and other announcements

MIRI is at Effective Altruism Global! A number of the talks can be watched online at the EA Global Livestream. We have a new MIRI Frequently Asked Questions page, which we’ll be expanding as we continue getting new questions over the next four weeks. Questions covered so far include “Why is safety important for smarter-than-human…