Careers at MIRI

News

We’ve published a new Careers page, which advertises current job openings at MIRI.

As always, we’re seeking math researchers to make progress on Friendly AI theory. If you’re interested, the next step is not to apply for the position directly, but to apply to attend a future MIRI research workshop.

We are also accepting applications for a grants manager, a science writer, and an executive assistant.

Visit our Careers page to apply.

Ronald de Wolf on Quantum Computing

Conversations

Ronald de Wolf is a senior researcher at CWI and a part-time full professor at the University of Amsterdam. He obtained his PhD there in 2001 with a thesis about quantum computing and communication complexity, advised by Harry Buhrman and Paul Vitanyi. Subsequently he was a postdoc at UC Berkeley. His scientific interests include quantum computing, complexity theory, and learning theory.

He also holds a Master’s degree in philosophy (where his thesis was about Kolmogorov complexity and Occam’s razor), and enjoys classical music and literature.

Luke Muehlhauser: Before we get to quantum computing, let me ask you about philosophy. Among other topics, your MSc thesis discusses the relevance of computational learning theory to philosophical debates about Occam’s razor, the principle that “among the theories, hypotheses, or explanations that are consistent with the facts, we are to prefer simpler over more complex ones.”

Though many philosophers and scientists adhere to the principle of Occam’s razor, it is often left ambiguous exactly what is meant by “simpler,” and also why this principle is justified in the first place. But in your thesis you write that “in certain formal settings we can, more or less, prove that certain versions of Occam’s Razor work.”

Philosophers are usually skeptical when I argue for K-complexity versions of Occam’s razor, as you do. For example, USC’s Kenny Easwaran once wrote, “I’ve never actually seen how [a K-complexity based simplicity measure] is supposed to solve anything, given that it always depends on a choice of universal machine.”

How would you reply, given your optimism about justifying Occam’s razor “in certain formal settings”?
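
(An editorial aside, not part of the conversation: the formal backdrop to this exchange is the standard invariance theorem for Kolmogorov complexity, which bounds how much the choice of universal machine can matter. A textbook statement, in the usual notation:)

```latex
% Invariance theorem (standard result): for any two universal prefix
% machines U and V there is a constant c_{U,V}, independent of x, with
\[
  \bigl|\, K_U(x) - K_V(x) \,\bigr| \;\le\; c_{U,V}
  \qquad \text{for all strings } x .
\]
% So the choice of universal machine shifts "simplicity" only by an
% additive constant -- which is precisely the loophole Easwaran's
% objection presses on, since that constant can be arbitrarily large
% for a deliberately gerrymandered reference machine.
```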

Read more »

Robust Cooperation: A Case Study in Friendly AI Research

Analysis

The paper “Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic” is among the clearer examples of theoretical progress produced by explicitly FAI-related research goals. What can we learn from this case study in Friendly AI research? How were the results obtained? How did the ideas build on each other? Who contributed which pieces? Which kinds of synergies mattered?

To answer these questions, I spoke to many of the people who contributed to the “robust cooperation” result.
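
For readers who want a concrete feel for the setting, here is a minimal, purely illustrative sketch (not the paper’s construction) of “program equilibrium”: a one-shot Prisoner’s Dilemma in which each player is a program that can read its opponent’s source code. The CliqueBot-style agent below cooperates only with exact textual copies of itself; the paper’s provability-logic FairBot is strictly more robust, cooperating with any opponent it can prove will cooperate in return. All function names here are hypothetical.

```python
import inspect

# Toy one-shot Prisoner's Dilemma in the "program equilibrium" setting:
# each player is a program that receives the *source code* of its opponent.
COOPERATE, DEFECT = "C", "D"

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is an exact textual copy of this agent."""
    my_source = inspect.getsource(clique_bot)
    return COOPERATE if opponent_source == my_source else DEFECT

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent."""
    return DEFECT

def play(agent_a, agent_b):
    """Run one round: each agent sees the other's source before moving."""
    src_a = inspect.getsource(agent_a)
    src_b = inspect.getsource(agent_b)
    return agent_a(src_b), agent_b(src_a)

if __name__ == "__main__":
    print(play(clique_bot, clique_bot))  # ('C', 'C') -- mutual cooperation
    print(play(clique_bot, defect_bot))  # ('D', 'D') -- CliqueBot is not exploited
```

The fragility of this exact-matching trick (two agents differing by a single comment fail to cooperate) is part of what motivates the proof-based approach the paper studies.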

Read more »

Two MIRI talks from AGI-11

News, Video

Thanks in part to the volunteers at MIRI Volunteers, we can now release the videos, slides, and transcripts for two talks delivered at AGI-11. Both talks represent joint work by Anna Salamon and Carl Shulman, who were MIRI staff at the time (back when MIRI was known as the “Singularity Institute”):

Salamon & Shulman (2011). Whole brain emulation as a platform for creating safe AGI. [Video] [Slides] [Transcript]

Shulman & Salamon (2011). Risk-averse preferences as an AGI safety technique. [Video] [Slides] [Transcript]

Mike Frank on reversible computing

Conversations

Michael P. Frank received his Bachelor of Science degree in Symbolic Systems from Stanford University in 1991, and his Master of Science and Doctor of Philosophy degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1994 and 1999, respectively. While at Stanford, he helped his team win the world championship in the 1990-91 International Collegiate Programming Contest sponsored by the Association for Computing Machinery. Over the course of his student years, he held research internships at IBM’s T.J. Watson Research Center, NASA’s Ames Research Center, NEC Research Institute, Stanford Research Institute, and the Center for Study of Language and Information at Stanford. He also spent the summer after his freshman year as a software engineering intern at Microsoft. During 1998-1999, Mike stopped out of school for a year to work at a friend’s web startup (Stockmaster.com).

After graduation, he worked as a tenure-track Assistant Professor in the Computer and Information Science and Engineering department at the University of Florida from 1999 to 2004, and at the Electrical and Computer Engineering department at the Florida A&M University – Florida State University College of Engineering from 2004 to 2007. After an ill-fated attempt to start a business in 2007-2008, he returned to academia in a variety of short-term research and teaching positions in the Florida A&M Department of Physics and the FAMU-FSU College of Engineering. His present title is Associate in Engineering, and he spends most of his time supervising multidisciplinary senior engineering projects. Over the years, Dr. Frank’s research interests have spanned a number of different areas, including decision-theoretic artificial intelligence, DNA computing, reversible and quantum computing, market-based computing, secure election systems, and digital cash.

Luke Muehlhauser: Some long-term computing forecasts include the possibility of nanoscale computing, but efficient computing at that scale appears to require reversible computing due to the Landauer limit. Could you please explain what reversible computing is, and why it appears to be necessary for efficient computing beyond a certain point of miniaturization?
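
(To give a rough sense of the scale involved, here is an illustrative back-of-the-envelope calculation, not part of the conversation: Landauer’s principle puts a floor of k_B·T·ln 2 on the heat dissipated per irreversibly erased bit, which at room temperature works out as follows.)

```python
import math

# Landauer bound: erasing one bit of information dissipates at least
# k_B * T * ln(2) of energy as heat in the surrounding environment.
k_B = 1.380649e-23      # Boltzmann constant, J/K (exact, 2019 SI)
T = 300.0               # room temperature, K

e_joules = k_B * T * math.log(2)
e_ev = e_joules / 1.602176634e-19   # convert joules to electron-volts

print(f"Landauer limit at {T:.0f} K: {e_joules:.3e} J per bit erased")
print(f"                          = {e_ev:.4f} eV per bit erased")
# ~2.87e-21 J (~0.018 eV) per bit: tiny, but it bounds the energy efficiency
# of any computer built from logically irreversible operations once switching
# energies approach this scale. Reversible (logically invertible) operations
# are not subject to this particular bound.
```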

Read more »

Emil Vassev on Formal Verification

Conversations

Dr. Emil Vassev received his M.Sc. in Computer Science (2005) and his Ph.D. in Computer Science (2008) from Concordia University, Montreal, Canada. Currently, he is a research fellow at Lero (the Irish Software Engineering Research Centre) at the University of Limerick, Ireland, where he leads Lero’s participation in the ASCENS FP7 project and Lero’s joint project with ESA on Autonomous Software Systems Development Approaches. His research focuses on knowledge representation and awareness for self-adaptive systems. Apart from this main research, Dr. Vassev’s research interests include engineering autonomic systems, distributed computing, formal methods, cyber-physical systems, and software engineering. He has published two books and over 100 internationally peer-reviewed papers. As part of his collaboration with NASA, Vassev has been awarded one patent, with another pending.

Luke Muehlhauser: In “Swarm Technology at NASA: Building Resilient Systems,” you and your co-authors write that:

To increase the survivability of [remote exploration] missions, NASA [uses] principles and techniques that help such systems become more resilient…

…Practice has shown that traditional development methods can’t guarantee software reliability and prevent software failures. Moreover, software developed using formal methods tends to be more reliable.

When talking to AI scientists, I notice that there seem to be at least two “cultures” with regard to system safety. One culture emphasizes the limitations of systems that are amenable to (e.g.) formal methods, and advises that developers use traditional AI software development methods to build a functional system and then try to make it safe near the end of the process. The other culture tends to think that getting strong safety guarantees is generally only possible when a system is designed “from the ground up” with safety in mind. Most machine learning people I speak to seem to belong to the former culture, whereas e.g. Kathleen Fisher and other people working on safety-critical systems seem to belong to the latter culture.

Do you perceive these two cultures within AI? If so, does the second sentence I quoted from your paper above imply that you generally belong to the second culture?

Read more »

How Big is the Field of Artificial Intelligence? (initial findings)

Analysis

Co-authored with Jonah Sinick.

How big is the field of AI, and how big was it in the past?

This question is relevant to several issues in AGI safety strategy. To name just two examples:

  • AI forecasting. Some people forecast AI progress by looking at how much has been accomplished for each calendar year of research. But as inputs to AI progress, (1) AI funding, (2) quality-adjusted researcher years (QARYs), and (3) computing power are more relevant than calendar years.1 To use these metrics to predict future AI progress, we need to know how many dollars and QARYs and computing cycles at various times in the past have been required to produce the observed progress in AI thus far.
  • Leverage points. If most AI research funding comes from relatively few funders, or if most research is produced by relatively few research groups, then these may represent high-value leverage points through which one might influence the field as a whole, e.g. to be more concerned with the long-term social consequences of AI.

For these reasons and more, MIRI recently investigated the current size and past growth of the AI field. This blog post summarizes our initial findings, which are meant to provide a “quick and dirty” launchpad for future, more thorough research into the topic.

Read more »


  1. Another important input metric is theoretical progress imported from other fields, e.g. methods from statistics. 

Existential Risk Strategy Conversation with Holden Karnofsky

Conversations, MIRI Strategy

On January 16th, 2014, MIRI met with Holden Karnofsky to discuss existential risk strategy. The participants were Holden Karnofsky, Luke Muehlhauser, and Eliezer Yudkowsky.

We recorded and transcribed the conversation, and then edited and paraphrased the transcript for clarity and conciseness, and to protect the privacy of some of the content. The resulting edited transcript is available in full here (41 pages).

Below is a summary of the conversation written by Karnofsky, then edited by Muehlhauser and Yudkowsky. Below the summary are some highlights from the conversation chosen by Karnofsky.

See also three previous conversations between MIRI and Holden Karnofsky: on MIRI strategy, on transparent research analyses, and on flow-through effects.

Read more »