MIRI AMA, and a talk on logical induction

Filed under News, Video.

Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow on the Effective Altruism Forum. If you’ve been curious about anything related to our research, plans, or general thoughts, you’re invited to submit your own questions in the comments below or at Ask MIRI Anything. We’ve also posted a more detailed version of our… Read more »

October 2016 Newsletter

Filed under Newsletters.

Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. MIRI’s 2016 fundraiser is also live, and runs through the end of October. Research updates: Shtetl-Optimized and n-Category Café discuss the “Logical Induction”… Read more »

September 2016 Newsletter

Filed under Newsletters.

Research updates: New at IAFF: Modeling the Capabilities of Advanced AI Systems as Episodic Reinforcement Learning, and Simplified Explanation of Stratification. New at AI Impacts: Friendly AI as a Global Public Good. We ran two research workshops this month: a veterans’ workshop on decision theory for long-time collaborators and staff, and a machine learning workshop focusing on generalizable environmental goals, impact… Read more »

August 2016 Newsletter

Filed under Newsletters.

Research updates: A new paper, “Alignment for Advanced Machine Learning Systems.” Half of our research team will focus on this research agenda going forward, while the other half continues to focus on the agent foundations agenda. New at AI Impacts: Returns to Scale in Research. Evan Lloyd represented MIRIxLosAngeles at AGI-16 this month, presenting… Read more »

New paper: “Alignment for advanced machine learning systems”

Filed under Papers.

MIRI’s research to date has focused on the problems that we laid out in our late 2014 research agenda, and in particular on formalizing optimal reasoning for bounded, reflective decision-theoretic agents embedded in their environment. Our research team has since grown considerably, and we have made substantial progress on this agenda, including a major breakthrough… Read more »

July 2016 Newsletter

Filed under Newsletters.

Research updates: A new paper, “A Formal Solution to the Grain of Truth Problem.” The paper was presented at UAI-16 and describes the first general reduction of game-theoretic reasoning to expected utility maximization. Participants in MIRI’s recently concluded Colloquium Series on Robust and Beneficial AI (CSRBAI) have put together AI safety environments for the OpenAI Reinforcement Learning Gym. Help is welcome in creating more… Read more »

New paper: “A formal solution to the grain of truth problem”

Filed under Papers.

Future of Humanity Institute Research Fellow Jan Leike and MIRI Research Fellows Jessica Taylor and Benya Fallenstein have just presented new results at UAI 2016 that resolve a long-standing open problem in game theory: “A formal solution to the grain of truth problem.” Game theorists have techniques for specifying agents that eventually do well on… Read more »

June 2016 Newsletter

Filed under Newsletters.

Research updates: A new paper, “Safely Interruptible Agents.” The paper will be presented at UAI-16 and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI’s press release. The paper has received (often hyperbolic) coverage from a number of press outlets, including Business… Read more »