May 2015 decision theory workshop at Cambridge University

News

MIRI, CSER, and the philosophy department at Cambridge University are co-organizing a decision theory workshop titled Self-Prediction in Decision Theory and AI, to be held in the Faculty of Philosophy at Cambridge University. The tentative dates are May 13-19, 2015.

Huw Price and Arif Ahmed at Cambridge University are the lead organizers.

Speakers confirmed so far include:

MIRI’s July 2014 newsletter

Newsletters


Research Updates

News Updates

  • We’ve released our mid-2014 strategic plan update.
  • There are currently six active MIRIx groups around the world. If you’re a mathematician, computer scientist, or formal philosopher, you may want to attend one of these groups, or apply for funding to run your own independently-organized MIRIx workshop!
  • Luke and Eliezer will be giving talks at the Effective Altruism Summit.
  • We are actively hiring for four positions: research fellow, science writer, office manager, and director of development. Salaries and benefits are competitive, and visa assistance is available if needed.

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director


New report: “Non-omniscience, probabilistic inference, and metamathematics”

News

UC Berkeley student and MIRI research associate Paul Christiano has released a new report: “Non-omniscience, probabilistic inference, and metamathematics.”

Abstract:

We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.

Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.

What is the relation between this new report and Christiano et al.’s earlier “Definability of truth in probabilistic logic” report, discussed by John Baez here? In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.
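
To give a concrete flavor of the problem, here is a minimal sketch (our illustration, not code from the report) of a bounded reasoner that assigns probabilities to sentences, enforces only the local coherence constraint P(A) + P(¬A) = 1 rather than full logical consistency, and updates on evidence via Bayes’ rule. The BoundedReasoner class and its method names are hypothetical, invented for this example:

    # Illustrative sketch only -- not the report's algorithm.

    class BoundedReasoner:
        """Toy reasoner: probabilities over a finite pool of sentences."""

        def __init__(self):
            self.p = {}  # sentence (str) -> probability

        def assign(self, sentence, prob):
            # Enforce only local coherence, P(A) + P(~A) = 1; checking full
            # first-order consistency would be intractable for a bounded agent.
            self.p[sentence] = prob
            self.p["~" + sentence] = 1.0 - prob

        def update(self, sentence, lik_if_true, lik_if_false):
            # Bayes' rule: P(A | obs) = P(obs | A) P(A) / P(obs).
            prior = self.p[sentence]
            posterior = (lik_if_true * prior /
                         (lik_if_true * prior + lik_if_false * (1.0 - prior)))
            self.assign(sentence, posterior)

    r = BoundedReasoner()
    r.assign("conjecture", 0.5)       # prior over an abstract mathematical claim
    r.update("conjecture", 0.9, 0.5)  # a numerical spot check comes out consistent
    print(r.p["conjecture"], r.p["~conjecture"])  # ~0.643, ~0.357

The sketch illustrates the division of labor the abstract describes: consistency is relaxed to something locally checkable, while ordinary Bayesian updating lets physical observations (here, a spot check) bear on an abstract mathematical claim.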

Roger Schell on long-term computer security research

Conversations

Roger R. Schell is a Professor of Engineering Practice at the University of Southern California Viterbi School of Engineering, and a member of the founding faculty for its Masters of Cyber Security degree program. He is internationally recognized for originating several key security design and evaluation techniques, and he holds patents in cryptography, authentication, and trusted workstations. For more than a decade he has been co-founder and an executive of Aesec Corporation, a start-up company providing verifiably secure platforms. Previously, Prof. Schell was the Corporate Security Architect for Novell, and co-founder and vice president of Gemini Computers, Inc., where he directed development of their highly secure (what the NSA called “Class A1”) commercial product, the Gemini Multiprocessing Secure Operating System (GEMSOS). He was also the founding Deputy Director of NSA’s National Computer Security Center. He has been referred to as the “father” of the Trusted Computer System Evaluation Criteria (the “Orange Book”). Prof. Schell is a retired USAF Colonel. He received a Ph.D. in Computer Science from MIT, an M.S.E.E. from Washington State, and a B.S.E.E. from Montana State. NIST and the NSA have recognized him with the National Computer System Security Award. In 2012 he was inducted into the inaugural class of the National Cyber Security Hall of Fame.

Read more »

New chapter in Cambridge Handbook of Artificial Intelligence

News

The Cambridge Handbook of Artificial Intelligence has been released. It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF here.

The abstract reads:

The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Our mid-2014 strategic plan

MIRI Strategy

Summary

Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:

  • Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
  • Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
  • Increase our investment in our Friendly AI (FAI) technical research agenda.

The reasons for continuing along this path remain largely the same, but I have more confidence in it now than I did before. This is because, since April 2013:

  • We made substantial Friendly AI research progress on many different fronts, and do not feel we have come close to exhausting the progress that could be made with more researchers, demonstrating that the FAI technical agenda is highly tractable.
  • FHI, CSER, and FLI have had substantial public outreach success, in part by leveraging their university affiliations and impressive advisory boards.
  • We’ve heard that as a result of this outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is developing that technical agenda already (see below).

In short, I think we tested and validated MIRI’s new strategic focus, and now it is time to scale. Thus, our top goals for the next 6-12 months are to:

  1. Produce more Friendly AI research.
  2. Recruit more Friendly AI researchers.
  3. Fundraise heavily to support those activities.

Read more »

New report: “Distributions allowing tiling of staged subjective EU maximizers”

News

MIRI has released a new technical report by Eliezer Yudkowsky, “Distributions allowing tiling of staged subjective EU maximizers,” which summarizes some work done at MIRI’s May 2014 workshop.

Abstract:

We consider expected utility maximizers making a staged series of sequential choices, and replacing themselves with successors on each time-step (to represent self-modification). We wanted to find conditions under which we could show that a staged expected utility maximizer would replace itself with another staged EU maximizer (representing stability of this decision criterion under self-modification). We analyzed one candidate condition and found that the “Optimizer’s Curse” implied that maximization at each stage was not actually optimal. To avoid this, we generated an extremely artificial function η that should allow expected utility maximizers to tile. We’re still looking for the exact necessary and sufficient condition.
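
The “Optimizer’s Curse” referenced in the abstract is a general statistical phenomenon: when an agent picks the option with the highest estimated value, the winning estimate is biased upward, because selection favors options whose estimation noise happened to be positive. A minimal simulation (our illustration, not code from the report) makes the bias visible:

    # Toy illustration of the Optimizer's Curse -- not code from the report.
    import random

    random.seed(0)
    N_ACTIONS = 10   # candidate actions, all with true value 0
    TRIALS = 10_000

    overestimate = 0.0
    for _ in range(TRIALS):
        # Each action is truly worth 0; the agent only sees noisy estimates.
        estimates = [random.gauss(0.0, 1.0) for _ in range(N_ACTIONS)]
        # The maximizer picks the highest estimate, whose expected excess
        # over the true value (0) is strictly positive.
        overestimate += max(estimates)

    print("mean overestimate:", overestimate / TRIALS)  # roughly 1.54

On average the chosen action looks about 1.5 standard deviations better than it really is, which is why naive maximization at each stage can fail to be optimal for the staged maximizers the report studies.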

Allan Friedman on cybersecurity and cyberwar

Conversations

MIRI recently interviewed Allan Friedman, co-author of Cybersecurity and Cyberwar: What Everyone Needs to Know.

We interviewed Dr. Friedman about cyberwar because the regulatory and social issues raised by the prospect of cyberwar may overlap substantially with those that will be raised by the prospect of advanced autonomous AI systems, such as those studied by MIRI.

Our GiveWell-style notes on this conversation are available in PDF format here.
