Our 2015 Summer Fundraiser is underway!

A fond farewell and a new Executive Director


Dear friends and supporters of MIRI,

I have some important news to share with you about the future of MIRI.

Given my passion for doing research, I’m excited to have accepted a research position at GiveWell. Like MIRI, GiveWell is an excellent cultural fit for me, and I believe they’re doing important work. I look forward to joining their team on June 1st. I’m also happy to report that I will be leaving MIRI in capable leadership hands.

Back in 2011, when MIRI’s Board of Directors asked me to take the Executive Director role, I was reluctant to leave the research position I held at the time. But I also wanted to do what best served MIRI’s mission. Looking back at the past three years, I’m proud of what the MIRI team has accomplished during my tenure as Executive Director. We’ve built a solid foundation, and our research program has picked up significant momentum. MIRI will continue to thrive as I transition out of my leadership role.

My enthusiasm for MIRI’s work remains as strong as ever, and I look forward to supporting MIRI going forward, both financially and as a close advisor. I’ll also continue to write about the future of AI on my personal blog.

Nate Soares will be stepping into the Executive Director role upon my departure, with unanimous support from myself and the rest of the Board.

Nate was our top choice for many reasons. During the past year at MIRI, Nate has demonstrated his commitment to the mission, his technical abilities, his strong work ethic, and his capacity to rapidly acquire new skills, work well with others, communicate clearly, and think through big-picture strategic issues, along with other aspects of executive capability.

During the transition, I’ll be sharing with Nate everything I think I’ve learned in the past three years about running an effective research institute, and I look forward to seeing where he leads MIRI next.

MIRI continues to seek additional research and executive capacity, and our need for both will only grow as I depart and as Nate transitions from a research role to the Executive Director role. If you are a math or computer science researcher, or if you have significant executive experience, and you are interested in participating in MIRI’s vital research effort, please apply here.

May 2015 Newsletter


Machine Intelligence Research Institute

Research updates

News updates

Other updates

As always, please don't hesitate to let us know if you have any questions or comments.
 

Best,
Luke Muehlhauser
Executive Director

New papers on reflective oracles and agents


We recently released two new papers on reflective oracles and agents.

The first is “Reflective oracles: A foundation for classical game theory,” by Benja Fallenstein, Jessica Taylor, and Paul Christiano.

Abstract:

Classical game theory treats players as special—a description of a game contains a full, explicit enumeration of all players—even though in the real world, “players” are no more fundamentally special than rocks or clouds. It isn’t trivial to find a decision-theoretic foundation for game theory in which an agent’s co-players are a non-distinguished part of the agent’s environment. Attempts to model both players and the environment as Turing machines, for example, fail for standard diagonalization reasons.

In this paper, we introduce a “reflective” type of oracle, which is able to answer questions about the outputs of oracle machines with access to the same oracle. These oracles avoid diagonalization by answering some queries randomly. We show that machines with access to a reflective oracle can be used to define rational agents using causal decision theory. These agents model their environment as a probabilistic oracle machine, which may contain other agents as a non-distinguished part.

We show that if such agents interact, they will play a Nash equilibrium, with the randomization in mixed strategies coming from the randomization in the oracle’s answers. This can be seen as providing a foundation for classical game theory in which players aren’t special.
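As a loose illustration of the diagonalization obstacle the abstract alludes to (this is not code from the paper), consider a deterministic "predictor" and an agent that inverts whatever the predictor says about it. No deterministic predictor can be right about every such agent, which is one way to see why reflective oracles must be allowed to answer some queries randomly:

```python
# Toy sketch of the diagonalization argument: an agent that consults a
# predictor about its own output and then does the opposite.

def make_contrarian(predict):
    """Build an agent that asks the predictor about itself and inverts the answer."""
    def agent():
        return 1 - predict(agent)
    return agent

def naive_predict(agent):
    # Any fixed deterministic prediction (here: 0) is defeated by the contrarian.
    return 0

agent = make_contrarian(naive_predict)
print(agent())                # prints 1 -- the opposite of the prediction
print(naive_predict(agent))   # prints 0 -- so the predictor is wrong about this agent
```

Whatever deterministic rule `naive_predict` implements, the contrarian agent's actual output differs from the prediction, so a deterministic oracle over all such machines cannot exist.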

The second paper develops these ideas in the context of Solomonoff induction and Marcus Hutter’s AIXI. It is “Reflective variants of Solomonoff induction and AIXI,” by Benja Fallenstein, Nate Soares, and Jessica Taylor.

Abstract:

Solomonoff induction and AIXI model their environment as an arbitrary Turing machine, but are themselves uncomputable. This fails to capture an essential property of real-world agents, which cannot be more powerful than the environment they are embedded in; for example, AIXI cannot accurately model game-theoretic scenarios in which its opponent is another instance of AIXI.

In this paper, we define reflective variants of Solomonoff induction and AIXI, which are able to reason about environments containing other, equally powerful reasoners. To do so, we replace Turing machines by probabilistic oracle machines (stochastic Turing machines with access to an oracle). We then use reflective oracles, which answer questions of the form, “is the probability that oracle machine M outputs 1 greater than p, when run on this same oracle?” Diagonalization can be avoided by allowing the oracle to answer randomly if this probability is equal to p; given this provision, reflective oracles can be shown to exist. We show that reflective Solomonoff induction and AIXI can themselves be implemented as oracle machines with access to a reflective oracle, making it possible for them to model environments that contain reasoners as powerful as themselves.
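The defining property of a reflective oracle, paraphrased from the query form described in the abstract, can be written out as follows (notation mine, not the paper's):

```latex
O(M, p) =
\begin{cases}
1 & \text{if } \Pr\left[M^{O}() = 1\right] > p \\
0 & \text{if } \Pr\left[M^{O}() = 1\right] < p \\
\text{either answer, possibly at random} & \text{if } \Pr\left[M^{O}() = 1\right] = p
\end{cases}
```

The freedom to randomize at the boundary case $\Pr[M^{O}() = 1] = p$ is exactly the provision that lets such oracles exist despite diagonalization.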

April 2015 Newsletter


Machine Intelligence Research Institute

Research updates

News updates

Other news

  • The Center for the Study of Existential Risk at the University of Cambridge is hiring four new research associates to work on their research project, "Towards a Science of Extreme Technological Risk."
  • The Future of Humanity Institute at the University of Oxford is hiring one researcher to work on the long-term AI control challenge.
  • The Future of Life Institute now has a News page.
  • Smarter Than Us and related books were recently reviewed in the Financial Times.

As always, please don't hesitate to let us know if you have any questions or comments.
 

Best,
Luke Muehlhauser
Executive Director

Recent AI control brainstorming by Stuart Armstrong


MIRI recently sponsored Oxford researcher Stuart Armstrong to take a solitary retreat and brainstorm new ideas for AI control. This brainstorming generated 16 new control ideas, of varying usefulness and polish. During the past month, he has described each new idea, and linked those descriptions from his index post: New(ish) AI control ideas.

He also named each AI control idea, and then drew a picture to represent (very roughly) how the new ideas related to each other. In the picture below, an arrow Y→X can mean “X depends on Y”, “Y is useful for X”, “X complements Y on this problem” or “Y inspires X.” The underlined ideas are the ones Stuart currently judges to be most important or developed.

[Figure: Stuart’s map of the new(ish) AI control ideas and the relationships between them]

Previously, Stuart developed the AI control idea of utility indifference, which plays a role in MIRI’s paper Corrigibility (Stuart is a co-author). He also developed anthropic decision theory and some ideas for reduced impact AI and oracle AI. He has contributed to the strategy and forecasting challenges of ensuring good outcomes from advanced AI, e.g. in Racing to the Precipice and How We’re Predicting AI — or Failing To. MIRI previously contracted him to write a short book introducing the superintelligence control challenge to a popular audience, Smarter Than Us.

2014 in review


It’s time for my review of MIRI in 2014.1 A post about our next strategic plan will follow in the next couple months, and I’ve included some details about ongoing projects at the end of this review.

 

2014 Summary

Since early 2013, MIRI’s core goal has been to help create a new field of research devoted to the technical challenges of getting good outcomes from future AI agents with highly general capabilities, including the capability to recursively self-improve.2

Launching a new field has been a team effort. In 2013, MIRI decided to focus on its comparative advantage in defining open problems and making technical progress on them. We’ve been fortunate to coordinate with other actors in this space — FHI, CSER, FLI, and others — who have leveraged their comparative advantages in conducting public outreach, building coalitions, pitching the field to grantmakers, interfacing with policymakers, and more.3

MIRI began 2014 with several open problems identified, and with some progress made toward solving them, but with very few people available to do the work. Hence, most of our research program effort in 2014 was aimed at attracting new researchers to the field and making it easier for them to learn the material and contribute. This was the primary motivation for our new technical agenda overview, the MIRIx program, our new research guide, and more (see below). Nick Bostrom’s Superintelligence was also quite helpful for explaining why this field of research should exist in the first place.

Today the field is much larger and healthier than it was at the beginning of 2014. MIRI now has four full-time technical researchers instead of just one. Around 85 people have attended one or more MIRIx workshops. There are so many promising researchers who have expressed interest in our technical research that ~25 of them have already confirmed interest and availability to attend a MIRI introductory workshop this summer, and this mostly doesn’t include people who have attended past MIRI workshops, nor have we sent out all the invites yet. Moreover, there are now several researchers we know who are plausible MIRI hires in the next 1-2 years.

I am extremely grateful to MIRI’s donors, without whom this progress would have been impossible.

The rest of this post provides a more detailed summary of our activities in 2014.

Read more »


  1. This year’s annual review is shorter than last year’s 5-part review of 2013, in part because 2013 was an unusually complicated focus-shifting year, and in part because, in retrospect, last year’s 5-part review simply took more effort to produce than it was worth. Also, because we recently finished switching to accrual accounting, I can now more easily provide annual reviews of each calendar year rather than of a March-through-February period. As such, this review of calendar year 2014 will overlap a bit with what was reported in the previous annual review (of March 2013 through February 2014). 
  2. Clearly there are forecasting and political challenges as well, and there are technical challenges related to ensuring good outcomes from nearer-term AI systems, but MIRI has chosen to specialize in the technical challenges of aligning superintelligence with human interests. See also: Friendly AI research as effective altruism and Why MIRI? 
  3. Obviously, the division of labor was more complex than I’ve described here. For example, FHI produced some technical research progress in 2014, and MIRI did some public outreach. 

New report: “An Introduction to Löb’s Theorem in MIRI Research”


Today we publicly release a new technical report by Patrick LaVictoire, titled “An Introduction to Löb’s Theorem in MIRI Research.” The report’s introduction begins:

This expository note is devoted to answering the following question: why do many MIRI research papers cite a 1955 theorem of Martin Löb, and indeed, why does MIRI focus so heavily on mathematical logic? The short answer is that this theorem illustrates the basic kind of self-reference involved when an algorithm considers its own output as part of the universe, and it is thus germane to many kinds of research involving self-modifying agents, especially when formal verification is involved or when we want to cleanly prove things in model problems. For a longer answer, well, welcome!

I’ll assume you have some background doing mathematical proofs and writing computer programs, but I won’t assume any background in mathematical logic beyond knowing the usual logical operators, nor that you’ve even heard of Löb’s Theorem before.
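For readers who haven’t encountered it, the theorem itself is short. Writing $\Box P$ for “$P$ is provable in Peano Arithmetic (PA),” Löb’s Theorem states:

```latex
% Löb's Theorem (1955): for any sentence P of arithmetic,
\text{if } \vdash_{\mathrm{PA}} \Box P \rightarrow P, \quad \text{then } \vdash_{\mathrm{PA}} P.

% Equivalently, as a single statement internalized in PA:
\vdash_{\mathrm{PA}} \Box(\Box P \rightarrow P) \rightarrow \Box P
```

In other words, PA can never prove “if $P$ is provable then $P$ is true” except in the trivial case where it can already prove $P$ outright, a fact that constrains any agent reasoning formally about its own output.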

If you’d like to discuss the article, please do so here.



Introducing the Intelligent Agent Foundations Forum


Today we are proud to publicly launch the Intelligent Agent Foundations Forum (RSS), a forum devoted to technical discussion of the research problems outlined in MIRI’s technical agenda overview, along with similar research problems.

Patrick’s welcome post explains:

Broadly speaking, the topics of this forum concern the difficulties of value alignment: the problem of how to ensure that machine intelligences of various levels adequately understand and pursue the goals that their developers actually intended, rather than getting stuck on some proxy for the real goal or failing in other unexpected (and possibly dangerous) ways. As these failure modes are more devastating the farther we advance in building machine intelligences, MIRI’s goal is to work today on the foundations of goal systems and architectures that would work even when the machine intelligence has general creative problem-solving ability beyond that of its developers, and has the ability to modify itself or build successors.

The forum has been privately active for several months, so a number of interesting articles have already been posted.

Also see How to contribute.