New paper: “Forecasting using incomplete models”

Filed under Papers.

MIRI Research Associate Vadim Kosoy has a paper out on issues in naturalized induction: “Forecasting using incomplete models”. Abstract: We consider the task of forecasting an infinite sequence of future observations based on some number of past observations, where the probability measure generating the observations is “suspected” to satisfy one or more of a set…

June 2018 Newsletter

Filed under Newsletters.

Updates: New research write-ups and discussions: Logical Inductors Converge to Correlated Equilibria (Kinda). MIRI researcher Tsvi Benson-Tilsen and Alex Zhu ran an AI safety retreat for MIT students and alumni. Andrew Critch discusses what kind of advice to give to junior AI-x-risk-concerned researchers, and I clarify two points about MIRI’s strategic view. From Eliezer Yudkowsky: Challenges to…

May 2018 Newsletter

Filed under Newsletters.

Updates: New research write-ups and discussions: Resource-Limited Reflective Oracles; Computing An Exact Quantilal Policy. New at AI Impacts: Promising Research Projects. MIRI research fellow Scott Garrabrant and research associates Stuart Armstrong and Vadim Kosoy are among the winners in the second round of the AI Alignment Prize. First place goes to Tom Everitt and Marcus Hutter’s “The Alignment Problem…

April 2018 Newsletter

Filed under Newsletters.

Updates: A new paper: “Categorizing Variants of Goodhart’s Law”. New research write-ups and discussions: Distributed Cooperation; Quantilal Control for Finite Markov Decision Processes. New at AI Impacts: Transmitting Fibers in the Brain: Total Length and Distribution of Lengths. Scott Garrabrant, the research lead for MIRI’s agent foundations program, outlines focus areas and 2018 predictions for MIRI’s research. Scott presented on logical…

2018 research plans and predictions

Filed under MIRI Strategy.

Scott Garrabrant is taking over Nate Soares’ job of making predictions about how much progress we’ll make in different research areas this year. Scott divides MIRI’s alignment research into five categories: naturalized world-models — problems related to modeling large, complex physical environments that lack a sharp agent/environment boundary. Central examples of problems in this category include…

March 2018 Newsletter

Filed under Newsletters.

Updates: New research write-ups and discussions: Knowledge is Freedom; Stable Pointers to Value II: Environmental Goals; Toward a New Technical Explanation of Technical Explanation; Robustness to Scale. New at AI Impacts: Likelihood of Discontinuous Progress Around the Development of AGI. The transcript is up for Sam Harris and Eliezer Yudkowsky’s podcast conversation. Andrew Critch, previously on leave…

Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”

Filed under Conversations.

MIRI senior researcher Eliezer Yudkowsky was recently invited to be a guest on Sam Harris’ “Waking Up” podcast. Sam is a neuroscientist and popular author who writes on topics related to philosophy, religion, and public discourse. The following is a complete transcript of Sam and Eliezer’s conversation, “AI: Racing Toward the Brink”. Contents 1. Intelligence…

February 2018 Newsletter

Filed under Newsletters.

Updates: New at IAFF: An Untrollable Mathematician. New at AI Impacts: 2015 FLOPS Prices. We presented “Incorrigibility in the CIRL Framework” at the AAAI/ACM Conference on AI, Ethics, and Society. From MIRI researcher Scott Garrabrant: Sources of Intuitions and Data on AGI. News and links: In “Adversarial Spheres,” Gilmer et al. investigate the tradeoff between test…