Google DeepMind Research Scientist Laurent Orseau and MIRI Research Associate Stuart Armstrong have written a new paper on error-tolerant agent designs, “Safely interruptible agents.” The paper will be presented at the 32nd Conference on Uncertainty in Artificial Intelligence (UAI). Abstract: Reinforcement learning...
Research updates: Two new papers split logical uncertainty into two distinct subproblems: “Uniform Coherence” and “Asymptotic Convergence in Online Learning with Unbounded Delays.” New at IAFF (the Intelligent Agent Foundations Forum): An Approach to the Agent Simulates Predictor Problem; Games for Factoring Out Variables; Time...
Research updates: A new paper, “Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents.” New at IAFF: What Does it Mean for Correct Operation to Rely on Transfer Learning?; Virtual Models of Virtual AIs in Virtual Worlds. General updates...
MIRI Research Fellow Andrew Critch has written a new paper on cooperation between software agents in the Prisoner’s Dilemma, available on arXiv: “Parametric bounded Löb’s theorem and robust cooperation of bounded agents.” The abstract reads: Löb’s theorem and Gödel’s theorem...
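For readers who want the result the title alludes to: the classical, unbounded Löb’s theorem (stated here for Peano Arithmetic; the paper’s contribution is a parametric, proof-length-bounded analogue, which is not reproduced here) says that a system can prove a sentence from the assumption of that sentence’s provability only if it can already prove the sentence outright.

```latex
% Classical (unbounded) Löb's theorem for Peano Arithmetic (PA),
% with \Box_{\mathrm{PA}} the standard provability predicate.
\[
  \text{If } \; \mathrm{PA} \vdash \Box_{\mathrm{PA}}\,\ulcorner P \urcorner \rightarrow P,
  \quad \text{then } \; \mathrm{PA} \vdash P .
\]
% Taking P = \bot recovers Gödel's second incompleteness theorem:
% if PA proved its own consistency, \neg\Box_{\mathrm{PA}}\ulcorner \bot \urcorner,
% it would be inconsistent.
```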
The Machine Intelligence Research Institute is accepting applicants for two summer programs: a three-week AI robustness and reliability colloquium series (co-run with the Future of Humanity Institute at Oxford), and a two-week fellows program focused on helping new researchers contribute to...
The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be...
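To give a rough sense of what “implementing a strongly typed language within a strongly typed language” can look like at toy scale, here is an illustrative sketch in Haskell. It is not MIRI’s project code, and the module and names below are invented for the example: a small object language whose typing rules are enforced by the host language’s own type checker. The self-referential case the fellowship targets, a language expressive enough to formalize and reason about itself, is of course far harder.

```haskell
{-# LANGUAGE GADTs #-}

-- Illustrative toy only: a tiny strongly typed object language embedded in
-- Haskell, so that ill-typed object-level terms are rejected by the host's
-- type checker rather than by a separate checker written by hand.
module TypedObjectLanguage where

-- Object-language expressions, indexed by the type they evaluate to.
data Expr a where
  LitInt  :: Int  -> Expr Int
  LitBool :: Bool -> Expr Bool
  Add     :: Expr Int  -> Expr Int -> Expr Int
  Leq     :: Expr Int  -> Expr Int -> Expr Bool
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- The interpreter maps object-language types directly onto host types.
eval :: Expr a -> a
eval (LitInt n)  = n
eval (LitBool b) = b
eval (Add x y)   = eval x + eval y
eval (Leq x y)   = eval x <= eval y
eval (If c t e)  = if eval c then eval t else eval e

-- Evaluates to 7. A term such as `Add (LitBool True) (LitInt 1)` would not
-- compile, because the embedding reuses the host's type system.
example :: Int
example = eval (If (Leq (LitInt 2) (LitInt 3))
                   (Add (LitInt 3) (LitInt 4))
                   (LitInt 0))
```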