New chapter in Cambridge Handbook of Artificial Intelligence

Papers

The Cambridge Handbook of Artificial Intelligence has been released. It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF here.

The abstract reads:

The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Our mid-2014 strategic plan

MIRI Strategy

Summary

Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:

  • Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
  • Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
  • Increase our investment in our Friendly AI (FAI) technical research agenda.

The reasons for continuing along this path remain largely the same, but I have more confidence in it now than I did before. This is because, since April 2013:

  • We made substantial Friendly AI research progress on many different fronts, and we do not feel we have come close to exhausting the progress that could be made with more researchers, which demonstrates that the FAI technical agenda is highly tractable.
  • FHI, CSER, and FLI have had substantial public outreach success, in part by leveraging their university affiliations and impressive advisory boards.
  • We’ve heard that as a result of this outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is already developing that technical agenda (see below).

In short, I think we tested and validated MIRI’s new strategic focus, and now it is time to scale. Thus, our top goals for the next 6-12 months are to:

  1. Produce more Friendly AI research.
  2. Recruit more Friendly AI researchers.
  3. Fundraise heavily to support those activities.

Read more »

New report: “Distributions allowing tiling of staged subjective EU maximizers”

Papers

MIRI has released a new technical report by Eliezer Yudkowsky, “Distributions allowing tiling of staged subjective EU maximizers,” which summarizes some work done at MIRI’s May 2014 workshop.

Abstract:

We consider expected utility maximizers making a staged series of sequential choices, and replacing themselves with successors on each time-step (to represent self-modification). We wanted to find conditions under which we could show that a staged expected utility maximizer would replace itself with another staged EU maximizer (representing stability of this decision criterion under self-modification). We analyzed one candidate condition and found that the “Optimizer’s Curse” implied that maximization at each stage was not actually optimal. To avoid this, we generated an extremely artificial function η that should allow expected utility maximizers to tile. We’re still looking for the exact necessary and sufficient condition.
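
The “Optimizer’s Curse” referenced in the abstract is the statistical phenomenon that selecting the option with the highest estimated expected utility tends to overestimate the value the agent will actually receive. Below is a minimal simulation of that effect; it is our own illustration, not taken from the report, and the option count and noise level are arbitrary.

```python
import random

# Minimal simulation of the Optimizer's Curse (illustrative only, not from the report).
# Each of N options has true expected utility 0, but the agent only sees a noisy
# estimate. Picking the argmax of the estimates systematically overestimates the
# value the agent actually receives.

random.seed(0)
N_OPTIONS = 10      # number of candidate actions (arbitrary)
NOISE = 1.0         # standard deviation of the estimation error (arbitrary)
TRIALS = 10_000

total_estimated, total_true = 0.0, 0.0
for _ in range(TRIALS):
    true_values = [0.0] * N_OPTIONS                                   # every option is truly worth 0
    estimates = [v + random.gauss(0, NOISE) for v in true_values]     # noisy estimates
    best = max(range(N_OPTIONS), key=lambda i: estimates[i])          # pick the apparent best
    total_estimated += estimates[best]                                # what the maximizer expects
    total_true += true_values[best]                                   # what it actually gets

print(f"mean estimated value of chosen option: {total_estimated / TRIALS:.3f}")  # roughly 1.5
print(f"mean true value of chosen option:      {total_true / TRIALS:.3f}")       # roughly 0.0
```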

Allan Friedman on cybersecurity and cyberwar

Conversations

MIRI recently interviewed Allan Friedman, co-author of Cybersecurity and Cyberwar: What Everyone Needs to Know.

We interviewed Dr. Friedman about cyberwar because the regulatory and social issues raised by the prospect of cyberwar may overlap substantially with those that will be raised by the prospect of advanced autonomous AI systems, such as those studied by MIRI.

Our GiveWell-style notes on this conversation are available in PDF format here.

MIRI’s June 2014 Newsletter

Newsletters

Machine Intelligence Research Institute

Dear friends,

The SV Gives fundraiser was a big success for many organizations, and especially for MIRI. Thanks so much, everyone!

Research Updates

News Updates

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director

 

Milind Tambe on game theory in security applications

Conversations

Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI (Association for the Advancement of Artificial Intelligence) and ACM (Association for Computing Machinery), as well as a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for Excellence in Operations Research Practice, the Rist Prize of the Military Operations Research Society, an IBM Faculty Award, an Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, the USC Associates Award for Creativity in Research, and the USC Viterbi School of Engineering Use-Inspired Research Award.

Prof. Tambe has contributed several foundational papers in agents and multiagent systems, including in the areas of multiagent teamwork, distributed constraint optimization (DCOP), and security games. For this research he has received the “influential paper award” from the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), as well as, with his research group, best paper awards at a number of premier artificial intelligence conferences and workshops, including multiple best paper awards at the International Conference on Autonomous Agents and Multiagent Systems and the International Conference on Intelligent Virtual Agents.

In addition, the “security games” framework and algorithms pioneered by Prof. Tambe and his research group are now deployed for real-world use by several agencies, including the US Coast Guard, the US Federal Air Marshals Service, the Transportation Security Administration, LAX Police, and the LA Sheriff’s Department, for security scheduling at a variety of US ports, airports, and transportation infrastructure. This research has led to him and his students receiving the US Coast Guard Meritorious Team Commendation from the Commandant, the US Coast Guard First District’s Operational Excellence Award, a Certificate of Appreciation from the US Federal Air Marshals Service, and a special commendation from the Los Angeles World Airports police on behalf of the city of Los Angeles. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring Award and the ACM Recognition of Service Award. Recently, he co-founded ARMORWAY, a company focused on risk mitigation and security resource optimization, where he serves on the board of directors. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.

Luke Muehlhauser: In Tambe et al. (2013), you and your co-authors give an overview of game theory in security applications, saying:

Game theory is well-suited to adversarial reasoning for security resource allocation and scheduling problems. Casting the problem as a Bayesian Stackelberg game, we have developed new algorithms for efficiently solving such games to provide randomized patrolling or inspection strategies.

You then give many examples of game-theoretic algorithms used for security at airports, borders, etc.

Is there evidence to suggest that the introduction of these systems has improved the security of the airports, borders, etc. relative to whatever security processes they were using before?
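
As background for the quote above, here is a toy sketch of the Stackelberg security game setup that this line of work builds on: the defender commits to a randomized allocation of scarce security resources, and an attacker who observes that allocation attacks whichever target is best for him. This is our own illustrative construction with made-up payoffs, not one of the deployed algorithms described in Tambe et al. (2013).

```python
import numpy as np

# Toy Stackelberg security game (illustrative sketch with made-up numbers).
# The defender splits one resource across two targets; the attacker observes the
# coverage probabilities and attacks his best target.

values = np.array([10.0, 5.0])     # damage to the defender if an uncovered target is hit (made up)
penalty = np.array([2.0, 2.0])     # attacker's loss if caught at a covered target (made up)

def attacker_best_response(coverage):
    # Attacker's expected payoff for each target, given the coverage probabilities.
    payoff = (1 - coverage) * values - coverage * penalty
    return int(np.argmax(payoff))

def defender_payoff(coverage, target):
    # The defender loses the target's value only when it is attacked while uncovered.
    return -(1 - coverage[target]) * values[target]

best_p, best_u = None, -np.inf
for p in np.linspace(0, 1, 1001):          # grid search over coverage of target 0
    coverage = np.array([p, 1 - p])        # one resource split across two targets
    t = attacker_best_response(coverage)
    u = defender_payoff(coverage, t)
    if u > best_u:
        best_p, best_u = p, u

print(f"optimal coverage of target 0: {best_p:.2f}, defender payoff: {best_u:.2f}")
```

Note that the optimum leaves the attacker nearly indifferent between targets, which is why these models prescribe randomized patrolling or inspection schedules rather than deterministic ones.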

Read more »

New report: “Loudness: On priors over preference relations”

Papers

Today we release the first technical report from our May 2014 workshop: “Loudness: On priors over preference relations” by Benja Fallenstein and Nisan Stiennon. Other technical reports from that workshop are also in progress. Here’s the abstract for this report:

This is a quick writeup of a problem discussed at the May 2014 MIRI workshop: how to formally deal with uncertainty about preferences. We assume that the true preferences satisfy the von Neumann-Morgenstern (VNM) axioms, and can therefore be represented by a utility function. It may seem that we should then simply maximize the expectation of this function. However, in the absence of more information, this is not well-defined; in this setting, different choices of utility functions representing the same VNM preferences can lead the agent to make different choices. We give a formalization of this problem and show that the choice of a prior probability distribution over VNM preference relations together with the choice of a representative utility function for each of these preference relations is in a certain sense equivalent to the choice of a single number for every preference relation, which we call its “loudness”. (Mathematically, a “loudness prior” can be seen as a probability distribution over preference relations, but this object does not have an epistemic interpretation.)
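
To make the problem concrete, here is a tiny numeric illustration of our own (not from the report): rescaling one hypothesis’s utility function leaves its VNM preference relation unchanged, yet flips which action maximizes the mixture’s expected utility, so the agent’s behavior depends on a choice of “loudness” that the preferences alone do not determine.

```python
# Toy illustration (ours, not from the report): why "maximize expected utility" is
# ill-defined given uncertainty over VNM preference relations. Two hypotheses about
# the true preferences are each believed with probability 1/2. Rescaling hypothesis
# B's utility function preserves B's preferences but changes the mixture's choice.

actions = ["x", "y"]

u_A = {"x": 1.0, "y": 0.0}    # hypothesis A: prefers x to y
u_B = {"x": 0.0, "y": 0.5}    # hypothesis B: prefers y to x

def best_action(u_a, u_b, p=0.5):
    # Pick the action maximizing p*u_a + (1-p)*u_b.
    return max(actions, key=lambda a: p * u_a[a] + (1 - p) * u_b[a])

print(best_action(u_A, u_B))                          # -> "x"

# Multiply u_B by 10: a positive affine transformation, so it represents the *same*
# VNM preference relation, but hypothesis B now speaks more "loudly" in the mixture.
u_B_loud = {a: 10 * u_B[a] for a in actions}
print(best_action(u_A, u_B_loud))                     # -> "y"
```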

MIRI wants to fund your independently-organized Friendly AI workshop

News

To support Friendly AI research around the world, our new MIRIx program funds mathematicians, computer scientists, and formal philosophers to organize their own Friendly AI workshops.

A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together. Or it can be a multi-day research workshop pursuing a specific line of attack on a particular problem. It’s up to you.

Apply for funding here. In some cases we’ll be able to send a MIRI research fellow to your first meeting to give tutorials and answer questions, or perhaps they’ll Skype in to your workshop to do the same.