

Hello, I’m Nate Soares, and I’m pleased to be taking the reins at MIRI on Monday morning.

For those who don’t know me, I’ve been a research fellow at MIRI for a little over a year now. I attended my first MIRI workshop in December of 2013 while I was still working at Google, and was offered a job soon after. Over the last year, I wrote a dozen papers, half as primary author. Six of those papers were written for the MIRI technical agenda, which we compiled in preparation for the Puerto Rico conference put on by FLI in January 2015. Our technical agenda is cited extensively in the research priorities document referenced by the open letter that came out of that conference. In addition to the Puerto Rico conference, I attended five other conferences over the course of the year, and gave a talk at three of them. I also put together the MIRI research guide (a resource for students interested in getting involved with AI alignment research), and of course I spent a fair bit of time doing the actual research at workshops, at researcher retreats, and on my own. It’s been a jam-packed year, and it’s been loads of fun.

I’ve always had a natural inclination towards leadership: in the past, I’ve led a F.I.R.S.T. Robotics team, managed two volunteer theaters, served as president of an Entrepreneur’s Club, and co-founded a startup or two. However, this is the first time I’ve taken a professional leadership role, and I’m grateful that I’ll be able to call upon the experience and expertise of the board, of our advisors, and of outgoing executive director Luke Muehlhauser.

MIRI has improved greatly under Luke’s guidance these last few years, and I’m honored to have the opportunity to continue that trend. I’ve spent a lot of time in conversation with Luke over the past few weeks, and he’ll remain a close advisor going forward. He and the management team have spent the last year or so really tightening up the day-to-day operations at MIRI, and I’m excited about all the opportunities we have open to us now.

The last year has been pretty incredible. Discussion of long-term AI risks and benefits has finally hit the mainstream, thanks to the success of Bostrom’s Superintelligence and FLI’s Puerto Rico conference, and due in no small part to years of movement-building and effort made possible by MIRI’s supporters. Over the last year, I’ve forged close connections with our friends at the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk, as well as with a number of industry teams and academic groups who are focused on long-term AI research. I’m looking forward to our continued participation in the global conversation about the future of AI. These are exciting times in our field, and MIRI is well-poised to grow and expand. Indeed, one of my top priorities as executive director is to grow the research team.

That project is already well under way. I’m pleased to announce that Jessica Taylor has accepted a full-time position as a MIRI researcher starting in August 2015. We are also hosting a series of summer workshops focused on various technical AI alignment problems, the second of which is just now concluding. Additionally, we are working with the Center for Applied Rationality to put on a summer fellows program designed for people interested in gaining the skills needed for research in the field of AI alignment.

I want to take a moment to extend my heartfelt thanks to all those supporters of MIRI who have brought us to where we are today: We have a slew of opportunities before us, and it’s all thanks to your effort and support these past years. MIRI couldn’t have made it as far as it has without you. Exciting times are ahead, and your continued support will allow us to grow quickly and pursue all the opportunities that the last year opened up.

Finally, in case you want to get to know me a little better, I’ll be answering questions on the Effective Altruism Forum at 3 PM Pacific time on Thursday, June 11th.



Two papers accepted to AGI-15


MIRI has two papers forthcoming in the conference proceedings of AGI-15. The first paper, previously released as a MIRI technical report, is “Reflective variants of Solomonoff induction and AIXI,” by Benja Fallenstein, Nate Soares, and Jessica Taylor.

The second paper, “Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings,” by Nate Soares and Benja Fallenstein, is a compressed version of some material from an earlier technical report. This new paper’s abstract is:

This paper motivates the study of counterpossibles (logically impossible counterfactuals) as necessary for developing a decision theory suitable for generally intelligent agents embedded within their environments. We discuss two attempts to formalize a decision theory using counterpossibles, one based on graphical models and another based on proof search.

Fallenstein will be attending AGI-15.

MIRI-related talks from the decision theory conference at Cambridge University


Recently, MIRI co-organized a conference at Cambridge University titled Self-prediction in decision theory and artificial intelligence. At least six of the conference’s talks directly discussed issues raised in MIRI’s technical agenda:

  1. MIRI research fellow (and soon, Executive Director) Nate Soares gave a talk titled “What is a what if?” (.pdf w/o notes, .pptx w/ notes), on theories of counterfactuals in the context of AI.
  2. MIRI research fellow Patrick LaVictoire gave a talk titled “Decision theory and the logic of provability” (.pdf), on the modal agents framework.
  3. MIRI research fellow Benja Fallenstein gave a talk titled “Vingean reflection” (.pdf).
  4. Googler Vladimir Slepnev, a past MIRI workshop attendee, gave a talk titled “Models of decision-making based on logical counterfactuals” (.pdf).
  5. MIRI research associate Stuart Armstrong (Oxford) gave a talk titled “Anthropic decision theory” (.pdf, video).
  6. The conference also coincided with a public lecture by Stuart Russell titled “The long-term future of artificial intelligence” (video).

Our thanks to everyone who attended, and especially to our co-organizers: Arif Ahmed, Huw Price, and Seán Ó hÉigeartaigh!

A fond farewell and a new Executive Director


Dear friends and supporters of MIRI,

I have some important news to share with you about the future of MIRI.

Given my passion for doing research, I’m excited to have accepted a research position at GiveWell. Like MIRI, GiveWell is an excellent cultural fit for me, and I believe they’re doing important work. I look forward to joining their team on June 1st. I’m also happy to report that I will be leaving MIRI in capable leadership hands.

Back in 2011, when MIRI’s Board of Directors asked me to take the Executive Director role, I was reluctant to leave the research position I held at the time. But I also wanted to do what best served MIRI’s mission. Looking back at the past three years, I’m proud of what the MIRI team has accomplished during my tenure as Executive Director. We’ve built a solid foundation, and our research program has picked up significant momentum. MIRI will continue to thrive as I transition out of my leadership role.

My enthusiasm for MIRI’s work remains as strong as ever, and I look forward to supporting MIRI going forward, both financially and as a close advisor. I’ll also continue to write about the future of AI on my personal blog.

Nate Soares will be stepping into the Executive Director role upon my departure, with unanimous support from myself and the rest of the Board.

Nate was our top choice for many reasons. During the past year at MIRI, Nate has demonstrated his commitment to the mission, his technical ability, his strong work ethic, his capacity to rapidly acquire new skills, his knack for working well with others and communicating clearly, and his facility with big-picture strategic thinking, among other aspects of executive capability.

During the transition, I’ll be sharing with Nate everything I think I’ve learned in the past three years about running an effective research institute, and I look forward to seeing where he leads MIRI next.

MIRI continues to seek additional research and executive capacity, and our need for both will only grow as I depart and as Nate transitions from a research role to the Executive Director role. If you are a math or computer science researcher, or if you have significant executive experience, and you are interested in participating in MIRI’s vital research effort, please apply here.

May 2015 Newsletter


Machine Intelligence Research Institute

Research updates

News updates

Other updates

As always, please don't hesitate to let us know if you have any questions or comments.

Luke Muehlhauser
Executive Director

New papers on reflective oracles and agents


We recently released two new papers on reflective oracles and agents.

The first is “Reflective oracles: A foundation for classical game theory,” by Benja Fallenstein, Jessica Taylor, and Paul Christiano.

Abstract:

Classical game theory treats players as special—a description of a game contains a full, explicit enumeration of all players—even though in the real world, “players” are no more fundamentally special than rocks or clouds. It isn’t trivial to find a decision-theoretic foundation for game theory in which an agent’s co-players are a non-distinguished part of the agent’s environment. Attempts to model both players and the environment as Turing machines, for example, fail for standard diagonalization reasons.

In this paper, we introduce a “reflective” type of oracle, which is able to answer questions about the outputs of oracle machines with access to the same oracle. These oracles avoid diagonalization by answering some queries randomly. We show that machines with access to a reflective oracle can be used to define rational agents using causal decision theory. These agents model their environment as a probabilistic oracle machine, which may contain other agents as a non-distinguished part.

We show that if such agents interact, they will play a Nash equilibrium, with the randomization in mixed strategies coming from the randomization in the oracle’s answers. This can be seen as providing a foundation for classical game theory in which players aren’t special.

The second paper develops these ideas in the context of Solomonoff induction and Marcus Hutter’s AIXI. It is “Reflective variants of Solomonoff induction and AIXI,” by Benja Fallenstein, Nate Soares, and Jessica Taylor.

Abstract:

Solomonoff induction and AIXI model their environment as an arbitrary Turing machine, but are themselves uncomputable. This fails to capture an essential property of real-world agents, which cannot be more powerful than the environment they are embedded in; for example, AIXI cannot accurately model game-theoretic scenarios in which its opponent is another instance of AIXI.

In this paper, we define reflective variants of Solomonoff induction and AIXI, which are able to reason about environments containing other, equally powerful reasoners. To do so, we replace Turing machines by probabilistic oracle machines (stochastic Turing machines with access to an oracle). We then use reflective oracles, which answer questions of the form, “is the probability that oracle machine M outputs 1 greater than p, when run on this same oracle?” Diagonalization can be avoided by allowing the oracle to answer randomly if this probability is equal to p; given this provision, reflective oracles can be shown to exist. We show that reflective Solomonoff induction and AIXI can themselves be implemented as oracle machines with access to a reflective oracle, making it possible for them to model environments that contain reasoners as powerful as themselves.
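The answer rule described in this abstract is concrete enough to sketch in code. The toy below is hypothetical illustration, not code from the paper: a true reflective oracle is uncomputable, but for a machine whose output probability is already known, the rule is simply "answer 1 if the probability exceeds p, 0 if it is below p, and randomize when it equals p." The sketch also shows why randomizing at the boundary defuses the standard diagonalization trick.

```python
import random

def oracle_answer(prob_one, p):
    """Toy reflective-oracle answer to the query: "is the probability
    that machine M outputs 1 greater than p?", where prob_one is M's
    (assumed known) probability of outputting 1. Answering randomly
    exactly at the boundary is what lets such oracles exist despite
    diagonalization arguments."""
    if prob_one > p:
        return 1
    if prob_one < p:
        return 0
    return random.randint(0, 1)  # boundary case: answer randomly

def diagonal_machine():
    """A would-be diagonalizer: output the opposite of whatever the
    oracle predicts about this very machine at threshold p = 1/2."""
    return 1 - oracle_answer(0.5, 0.5)

# Because the oracle randomizes at the boundary, the diagonal machine
# outputs 1 about half the time, which is consistent with the assumed
# prob_one = 0.5: the randomized answer is a fixed point, not a
# contradiction.
frequency = sum(diagonal_machine() for _ in range(10000)) / 10000
print(frequency)
```

Away from the boundary the oracle's answers are deterministic and truthful; only queries that would otherwise be self-defeating get randomized, which is where the mixed strategies in the game-theoretic results come from.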

April 2015 Newsletter


Machine Intelligence Research Institute

Research updates

News updates

Other news

  • The Center for the Study of Existential Risk at the University of Cambridge is hiring four new research associates to work on their research project, "Towards a Science of Extreme Technological Risk."
  • The Future of Humanity Institute at the University of Oxford is hiring one researcher to work on the long-term AI control challenge.
  • The Future of Life Institute now has a News page.
  • Smarter Than Us and related books were recently reviewed in Financial Times.

As always, please don't hesitate to let us know if you have any questions or comments.

Luke Muehlhauser
Executive Director

Recent AI control brainstorming by Stuart Armstrong


MIRI recently sponsored Oxford researcher Stuart Armstrong to take a solitary retreat and brainstorm new ideas for AI control. This brainstorming generated 16 new control ideas, of varying usefulness and polish. During the past month, he has described each new idea, and linked those descriptions from his index post: New(ish) AI control ideas.

He also named each AI control idea, and then drew a picture to represent (very roughly) how the new ideas related to each other. In the picture below, an arrow Y→X can mean “X depends on Y”, “Y is useful for X”, “X complements Y on this problem” or “Y inspires X.” The underlined ideas are the ones Stuart currently judges to be most important or developed.

[Diagram: New(ish) AI control ideas]

Previously, Stuart developed the AI control idea of utility indifference, which plays a role in MIRI’s paper Corrigibility (Stuart is a co-author). He also developed anthropic decision theory and some ideas for reduced impact AI and oracle AI. In addition, he has contributed to work on the strategy and forecasting challenges of ensuring good outcomes from advanced AI, e.g. in Racing to the Precipice and How We’re Predicting AI — or Failing To. MIRI previously contracted him to write Smarter Than Us, a short book introducing the superintelligence control challenge to a popular audience.