The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this project is to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience with functional programming languages, preferably dependently typed languages such as Agda, Coq, or Lean.
MIRI is a mathematics and computer science research institute specializing in long-term AI safety and robustness work. Our offices are in Berkeley, California, near the UC Berkeley campus.
Type Theory in Type Theory
Our goal with this project is to build tools for better modeling reflective reasoning in software systems, as with our project modeling the HOL4 proof assistant within itself. There are Gödelian reasons to think that self-referential reasoning is not possible in full generality. However, many real-world tasks that cannot be solved in full generality admit of effective mostly-general or heuristic approaches. Humans, for example, certainly succeed in trusting their own reasoning in many contexts.
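The classic formal obstruction is Löb’s theorem: a consistent theory cannot blanket-trust its own provability predicate, since

$$T \vdash \Box_T P \rightarrow P \quad \Longrightarrow \quad T \vdash P$$

That is, if a theory T proves “if P is provable in T, then P” for some sentence P, then T already proves P outright. Any workable approach to self-trust therefore has to be partial or heuristic rather than fully general.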
There are a number of tools missing from modern theorem provers that would be helpful for studying self-referential reasoning. Chief among these are theorem provers that can construct proofs about software systems that themselves make use of a very similar theorem prover. To build these tools in a strongly typed programming language, we need to start by writing programs and proofs that can refer to the type of programs and proofs in the same language.
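As a deliberately minimal sketch of what this starting point looks like, here is the standard intrinsically typed encoding of the simply typed lambda calculus inside Agda (the style that Chapman’s line of work builds on): object-language terms are indexed by their types, so only well-typed terms can be constructed. The research problem is to scale this kind of representation up until the represented language is as strong as the representing one.

```agda
data Ty : Set where
  ι   : Ty              -- a base type
  _⇒_ : Ty → Ty → Ty    -- object-level function types

infixr 5 _⇒_

data Ctx : Set where
  ∅   : Ctx
  _,_ : Ctx → Ty → Ctx  -- typing contexts as snoc-lists

data _∋_ : Ctx → Ty → Set where  -- typed de Bruijn variables
  here  : ∀ {Γ σ} → (Γ , σ) ∋ σ
  there : ∀ {Γ σ τ} → Γ ∋ σ → (Γ , τ) ∋ σ

data _⊢_ : Ctx → Ty → Set where  -- intrinsically typed terms
  var : ∀ {Γ σ} → Γ ∋ σ → Γ ⊢ σ
  lam : ∀ {Γ σ τ} → (Γ , σ) ⊢ τ → Γ ⊢ (σ ⇒ τ)
  app : ∀ {Γ σ τ} → Γ ⊢ (σ ⇒ τ) → Γ ⊢ σ → Γ ⊢ τ
```

Here `_⊢_` is itself an Agda type, so Agda can already prove theorems about this small object language; the gap is that the object language is far weaker than the metalanguage.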
Type theory in type theory has recently received a fair amount of attention. James Chapman’s work pushes in a similar direction to ours, as does Matt Brown and Jens Palsberg’s, but neither project yet gives us the tools we need. (F-omega is too weak a logic for our purposes, and methods like Chapman’s don’t get us self-representations.)
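To make “self-representation” concrete, here is a purely illustrative interface extending the sketch above; the names are our own hypothetical placeholders, not an existing library. Brown and Palsberg provide something of this shape for F-omega, while Chapman-style encodings have the rich types but no such quotation operation; bridging that gap is the heart of the project.

```agda
postulate
  Rep  : Ty → Ty                      -- object-level type of codes (hypothetical)
  ⌜_⌝  : ∀ {σ} → ∅ ⊢ σ → ∅ ⊢ Rep σ    -- quotation, a meta-level operation (hypothetical)
  eval : ∀ {σ} → ∅ ⊢ (Rep σ ⇒ σ)      -- a self-interpreter that is itself an object term (hypothetical)
```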
This is intended to be an independent research project, though collaborations with other researchers may occur. We expect this to be a multi-year project, but it is hard to predict in advance just how difficult the task will be: it may be easier than it looks, or substantially harder.
Depending on how the project goes, researchers interested in continuing to work with us after this project’s completion may be able to collaborate on other parts of our research agenda or propose their own additions to our program.
Working at MIRI
We try to make working at MIRI a great experience. Here’s how we operate:
- Modern Work Spaces. Many of us have adjustable standing desks with large external monitors. We consider workspace ergonomics important, and try to set up workstations to be as comfortable as possible. Free snacks, drinks, and meals are also provided at our office.
- Flexible Hours. This is a salaried position. We don’t have strict office hours, and we don’t limit employees’ vacation days. Our goal is to make quick progress on our research agenda, and we would rather researchers take a day off than stretch a task to fill an extra day.
- Living in the Bay Area. MIRI’s office is located in downtown Berkeley, California. From our office, you’re a 30-second walk to the BART (Bay Area Rapid Transit), which can get you around the Bay Area; a 3-minute walk to UC Berkeley campus; and a 30-minute BART ride to downtown San Francisco.
- Travel Assistance. Visa assistance is available if needed. If you are moving to the Bay Area, we’ll cover up to $3,500 in moving expenses. We also provide a public transit pass with a large monthly allowance.
The salary for this position is negotiable, and the position comes with top-notch health and dental benefits.
About MIRI
MIRI is a Berkeley-based research nonprofit studying foundational questions in artificial intelligence. Our goal is to ensure that high-quality decision-making systems have a positive global impact in the coming decades. MIRI Research Advisor and AI pioneer Stuart Russell outlines several causes for concern about high-capability AI software:
- The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
- Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.
- A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.
Our focus is on systems too complex and autonomous for software developers to anticipate all possible states the system and its environment might evolve into. To employ such systems safely, software engineers will need to understand their behavior on a deep level, and be able to use machine analysis and verification techniques to confirm high-level system properties.
MIRI is the primary organization specializing in this line of technical research. Our work is cited in the Research Priorities for Robust and Beneficial Artificial Intelligence report, and has been discussed in Nick Bostrom’s Superintelligence and Russell and Norvig’s Artificial Intelligence: A Modern Approach.
Long-term AI safety is a rapidly growing field of research that has recently received significant public attention. Our current research is intended to provide this new field with theoretical foundations that will help guide the direction of future research.
How to Apply
(Update 2022: We are not actively seeking to fill this role at this time.)