Posts By: Rob Bensinger
The Machine Intelligence Research Institute is accepting applicants for two summer programs: a three-week AI robustness and reliability colloquium series (co-run with the Oxford Future of Humanity Institute), and a two-week fellows program focused on helping new researchers contribute to MIRI’s technical agenda (co-run with the Center for Applied Rationality). The Colloquium Series on Robust… Read more »
Seeking Research Fellows in Type Theory and Machine Self-Reference
The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive… Read more »
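The posting is brief, but the core idea of "implementing a strongly typed language within that very language" can be illustrated in miniature. The following Haskell fragment is a hypothetical sketch, not MIRI's planned system: a deep embedding of a tiny typed expression language whose evaluator is type-safe by construction.

```haskell
{-# LANGUAGE GADTs #-}

-- Hypothetical sketch (not MIRI's planned system): a tiny typed expression
-- language deeply embedded in Haskell. The GADT indexes each term by the
-- Haskell type of the value it denotes, so the host type checker rules out
-- ill-typed object-language terms at compile time.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a
  Eq      :: Expr Int -> Expr Int -> Expr Bool

-- The evaluator cannot fail at runtime: ill-typed terms are unrepresentable.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e
eval (Eq x y)    = eval x == eval y

main :: IO ()
main = print (eval (If (Eq (Add (IntLit 1) (IntLit 1)) (IntLit 2))
                       (IntLit 10)
                       (IntLit 0)))  -- prints 10
```

The research project described in the posting goes much further than this sketch: the embedded language would be expressive enough to encode its own terms and typing rules, so that a prover written in it can state and prove theorems about programs in that same language.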
March 2016 Newsletter
Research updates: A new paper, “Defining Human Values for Value Learners.” New at IAFF: Analysis of Algorithms and Partial Algorithms; Naturalistic Logical Updates; Notes from a Conversation on Act-Based and Goal-Directed Systems; Toy Model: Convergent Instrumental Goals. New at AI Impacts: Global Computing Capacity. A revised version of “The Value Learning Problem” (pdf) has been… Read more »
John Horgan interviews Eliezer Yudkowsky
Scientific American writer John Horgan recently interviewed MIRI’s senior researcher and co-founder, Eliezer Yudkowsky. The email interview touched on a wide range of topics, from politics and religion to existential risk and Bayesian models of rationality. Although Eliezer isn’t speaking in an official capacity in the interview, a number of the questions discussed are likely… Read more »
New paper: “Defining human values for value learners”
MIRI Research Associate Kaj Sotala recently presented a new paper, “Defining Human Values for Value Learners,” at the AAAI-16 AI, Society and Ethics workshop. The abstract reads: Hypothetical “value learning” AIs learn human values and then try to act according to those values. The design of such AIs, however, is hampered by the fact that… Read more »
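The excerpt cuts off before the paper's substance, but the setup it names (learn a model of human values from observation, then act on it) can be caricatured in a few lines. The following is a toy, hypothetical sketch, not drawn from the paper: a Bayesian learner with two made-up candidate utility functions, a noisy-rational likelihood model, and action choice by posterior expected utility.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Toy value-learning sketch (illustrative only, not from the paper).
type Action = String

data Hypothesis = Hypothesis
  { name    :: String
  , utility :: Action -> Double  -- a candidate model of human values
  }

-- Made-up candidate value systems the learner considers.
hypotheses :: [Hypothesis]
hypotheses =
  [ Hypothesis "likes-tea"    (\a -> if a == "make tea"    then 1 else 0)
  , Hypothesis "likes-coffee" (\a -> if a == "make coffee" then 1 else 0)
  ]

actions :: [Action]
actions = ["make tea", "make coffee", "do nothing"]

-- Probability that a human chooses action `a` under hypothesis `h`
-- (softmax over utilities, a standard noisy-rational choice model).
likelihood :: Hypothesis -> Action -> Double
likelihood h a = exp (utility h a) / sum [exp (utility h a') | a' <- actions]

-- Bayesian update of a prior over hypotheses given observed human choices.
posterior :: [(Hypothesis, Double)] -> [Action] -> [(Hypothesis, Double)]
posterior prior obs =
  [ (h, w * product (map (likelihood h) obs) / z) | (h, w) <- prior ]
  where z = sum [w * product (map (likelihood h) obs) | (h, w) <- prior]

-- The learner then acts to maximize posterior expected utility.
bestAction :: [(Hypothesis, Double)] -> Action
bestAction post =
  maximumBy (comparing (\a -> sum [w * utility h a | (h, w) <- post])) actions

main :: IO ()
main = do
  let prior = [(h, 1 / fromIntegral (length hypotheses)) | h <- hypotheses]
      post  = posterior prior ["make tea", "make tea"]  -- observed choices
  putStrLn (bestAction post)  -- prints "make tea"
```

The difficulties the paper addresses begin where this sketch stops: real value learning must first settle what "human values" even are, before any such inference procedure can be specified.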
February 2016 Newsletter
Research updates: New at IAFF: Thoughts on Logical Dutch Book Arguments; Another View of Quantilizers: Avoiding Goodhart’s Law; Another Concise Open Problem. General updates: Fundraiser and grant successes: MIRI will be working with AI pioneer Stuart Russell and a to-be-determined postdoctoral researcher on the problem of corrigibility, thanks to a $75,000 grant from the Center… Read more »
January 2016 Newsletter
Research updates: A new paper, “Proof-Producing Reflection for HOL.” A new analysis: Safety Engineering, Target Selection, and Alignment Theory. New at IAFF: What Do We Need Value Learning For?; Strict Dominance for the Modified Demski Prior; Reflective Probability Distributions and Standard Models of Arithmetic; Existence of Distributions That Are Expectation-Reflective and Know It; Concise Open… Read more »
The need to scale MIRI’s methods
Andrew Critch, one of the new additions to MIRI’s research team, has taken the opportunity of MIRI’s winter fundraiser to write on his personal blog about why he considers MIRI’s work important. Some excerpts: Since a team of CFAR alumni banded together to form the Future of Life Institute (FLI), organized an AI safety conference… Read more »