New report: “Non-omniscience, probabilistic inference, and metamathematics”

Papers

UC Berkeley student and MIRI research associate Paul Christiano has released a new report: “Non-omniscience, probabilistic inference, and metamathematics.”

Abstract:

We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.

Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.
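The core difficulty the abstract describes is easy to illustrate, even though the report's actual algorithm is considerably more sophisticated. Below is a toy sketch (our illustration, not the report's method; the sentence pool and the `check_local_coherence` helper are invented for this example) of a bounded reasoner that assigns probabilities to a small pool of sentences and enforces only the local coherence constraints it can afford to check, rather than full logical consistency:

```python
# A toy sketch of the core difficulty (our illustration, not the report's
# algorithm): a bounded reasoner assigns probabilities to a small pool of
# sentences and enforces only the local coherence constraints it can
# afford to check, rather than full logical consistency.

def check_local_coherence(beliefs):
    """Return a list of violations of a few tractable constraints on
    `beliefs`, a dict mapping sentence strings to probabilities."""
    violations = []
    for s, pr in beliefs.items():
        if not 0.0 <= pr <= 1.0:
            violations.append(f"P({s}) = {pr} is outside [0, 1]")
    # Negation: P(~s) must equal 1 - P(s) whenever both are tracked.
    for s in beliefs:
        neg = "~" + s
        if neg in beliefs and abs(beliefs[neg] - (1.0 - beliefs[s])) > 1e-9:
            violations.append(f"P({neg}) != 1 - P({s})")
    # Conjunction: max(0, P(a)+P(b)-1) <= P(a&b) <= min(P(a), P(b)).
    for s in beliefs:
        if "&" in s:
            a, b = s.split("&")
            if a in beliefs and b in beliefs:
                lo = max(0.0, beliefs[a] + beliefs[b] - 1.0)
                hi = min(beliefs[a], beliefs[b])
                if not (lo - 1e-9 <= beliefs[s] <= hi + 1e-9):
                    violations.append(f"P({s}) violates conjunction bounds")
    return violations

beliefs = {"A": 0.7, "~A": 0.3, "B": 0.5, "A&B": 0.4}
print(check_local_coherence(beliefs))  # [] -- locally coherent

beliefs["A&B"] = 0.6                   # exceeds min(P(A), P(B)) = 0.5
print(check_local_coherence(beliefs))  # flags the conjunction violation
```

A full logical reasoner would enforce consistency across all sentences simultaneously, which is intractable; the report's concern is how to relax those constraints for bounded reasoners without sacrificing useful logical inference or correct updating on evidence.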

What is the relation between this new report and Christiano et al.’s earlier “Definability of truth in probabilistic logic” report, discussed by John Baez here? In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.

Roger Schell on long-term computer security research

Conversations

Roger R. Schell is a Professor of Engineering Practice at the University of Southern California Viterbi School of Engineering, and a member of the founding faculty for its Masters of Cyber Security degree program. He is internationally recognized for originating several key security design and evaluation techniques, and he holds patents in cryptography, authentication, and trusted workstation technology. For more than a decade he has been co-founder and an executive of Aesec Corporation, a start-up company providing verifiably secure platforms. Previously, Prof. Schell was the Corporate Security Architect for Novell, and co-founder and vice president of Gemini Computers, Inc., where he directed development of their highly secure (what the NSA called “Class A1”) commercial product, the Gemini Multiprocessing Secure Operating System (GEMSOS). He was also the founding Deputy Director of NSA’s National Computer Security Center. He has been referred to as the “father” of the Trusted Computer System Evaluation Criteria (the “Orange Book”). Prof. Schell is a retired USAF Colonel. He received a Ph.D. in Computer Science from MIT, an M.S.E.E. from Washington State, and a B.S.E.E. from Montana State. NIST and the NSA have recognized him with the National Computer System Security Award. In 2012 he was inducted into the inaugural class of the National Cyber Security Hall of Fame.

Read more »

New chapter in Cambridge Handbook of Artificial Intelligence

Papers

The Cambridge Handbook of Artificial Intelligence has been released. It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF here.

The abstract reads:

The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Our mid-2014 strategic plan

MIRI Strategy

Summary

Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:

  • Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
  • Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
  • Increase our investment in our Friendly AI (FAI) technical research agenda.

The reasons for continuing along this path remain largely the same, but I have more confidence in it now than I did before. This is because, since April 2013:

  • We made substantial Friendly AI research progress on many different fronts, and we are nowhere near exhausting the progress that could be made with more researchers, which demonstrates that the FAI technical agenda is highly tractable.
  • FHI, CSER, and FLI have had substantial public outreach success, in part by leveraging their university affiliations and impressive advisory boards.
  • We’ve heard that as a result of this outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is developing that technical agenda already (see below).

In short, I think we tested and validated MIRI’s new strategic focus, and now it is time to scale. Thus, our top goals for the next 6-12 months are to:

  1. Produce more Friendly AI research.
  2. Recruit more Friendly AI researchers.
  3. Fundraise heavily to support those activities.

Read more »

New report: “Distributions allowing tiling of staged subjective EU maximizers”

Papers

MIRI has released a new technical report by Eliezer Yudkowsky, “Distributions allowing tiling of staged subjective EU maximizers,” which summarizes some work done at MIRI’s May 2014 workshop.

Abstract:

We consider expected utility maximizers making a staged series of sequential choices, and replacing themselves with successors on each time-step (to represent self-modification). We wanted to find conditions under which we could show that a staged expected utility maximizer would replace itself with another staged EU maximizer (representing stability of this decision criterion under self-modification). We analyzed one candidate condition and found that the “Optimizer’s Curse” implied that maximization at each stage was not actually optimal. To avoid this, we generated an extremely artificial function η that should allow expected utility maximizers to tile. We’re still looking for the exact necessary and sufficient condition.
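The “Optimizer’s Curse” mentioned in the abstract is a general statistical phenomenon (Smith and Winkler, 2006): if you choose the option with the highest noisy estimate, the chosen option’s estimated value is biased upward even when every individual estimate is unbiased. A minimal simulation (ours, illustrating the general phenomenon rather than the report’s model) makes the effect visible:

```python
# A quick simulation of the Optimizer's Curse: selecting the option with
# the highest *estimated* value systematically overstates the value you
# actually receive, even though each estimate is individually unbiased.
# (Our illustration of the general phenomenon, not the report's model.)

import random

random.seed(0)
N_OPTIONS, NOISE_SD, TRIALS = 10, 1.0, 10_000

total_gap = 0.0
for _ in range(TRIALS):
    true_values = [random.gauss(0.0, 1.0) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0.0, NOISE_SD) for v in true_values]
    chosen = max(range(N_OPTIONS), key=estimates.__getitem__)
    total_gap += estimates[chosen] - true_values[chosen]

# Positive on average: the "best-looking" option disappoints.
print(f"mean post-decision disappointment: {total_gap / TRIALS:.3f}")
```

This is why naive maximization at each stage can fail to be optimal for the staged maximizers the report studies: each successor inherits estimates that were selected for looking good, not for being good.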

Allan Friedman on cybersecurity and cyberwar

Conversations

MIRI recently interviewed Allan Friedman, co-author of Cybersecurity and Cyberwar: What Everyone Needs to Know.

We interviewed Dr. Friedman about cyberwar because the regulatory and social issues raised by the prospect of cyberwar may overlap substantially with those that will be raised by the prospect of advanced autonomous AI systems, such as those studied by MIRI.

Our GiveWell-style notes on this conversation are available in PDF format here.

MIRI’s June 2014 Newsletter

Newsletters


Dear friends,

The SV Gives fundraiser was a big success for many organizations, and especially for MIRI. Thanks so much, everyone!

Research Updates

News Updates

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director

 

Milind Tambe on game theory in security applications

Conversations

Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI (Association for the Advancement of Artificial Intelligence) and ACM (Association for Computing Machinery), as well as a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Rist Prize of the Military Operations Research Society, an IBM Faculty Award, the Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, the USC Associates Award for Creativity in Research, and the USC Viterbi School of Engineering Use-Inspired Research Award.

Prof. Tambe has contributed several foundational papers in agents and multiagent systems, including in the areas of multiagent teamwork, distributed constraint optimization (DCOP), and security games. For this research he has received the Influential Paper Award from the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), as well as, with his research group, best paper awards at a number of premier artificial intelligence conferences and workshops, including multiple best paper awards at the International Conference on Autonomous Agents and Multiagent Systems and the International Conference on Intelligent Virtual Agents.

In addition, the “security games” framework and algorithms pioneered by Prof. Tambe and his research group are now deployed for real-world use by several agencies, including the US Coast Guard, the US Federal Air Marshal Service, the Transportation Security Administration, the LAX Police, and the LA Sheriff’s Department, for security scheduling at a variety of US ports, airports, and transportation infrastructure. This research has earned him and his students the US Coast Guard Meritorious Team Commendation from the Commandant, the US Coast Guard First District’s Operational Excellence Award, a Certificate of Appreciation from the US Federal Air Marshal Service, and a special commendation from the Los Angeles World Airports police. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring Award and the ACM Recognition of Service Award. Recently he co-founded ARMORWAY, a company focused on risk mitigation and security resource optimization, where he serves on the board of directors. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.

Luke Muehlhauser: In Tambe et al. (2013), you and your co-authors give an overview of game theory in security applications, saying:

Game theory is well-suited to adversarial reasoning for security resource allocation and scheduling problems. Casting the problem as a Bayesian Stackelberg game, we have developed new algorithms for efficiently solving such games to provide randomized patrolling or inspection strategies.

You then give many examples of game-theoretic algorithms used for security at airports, borders, etc.

Is there evidence to suggest that the introduction of these systems has improved the security of the airports, borders, etc. relative to whatever security processes they were using before?
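For readers unfamiliar with the framework Tambe describes above, here is a minimal sketch of a two-target Stackelberg security game. The target names and payoff numbers are invented for illustration, and deployed systems solve much larger games with linear or mixed-integer programming rather than the grid search used here:

```python
# A minimal two-target Stackelberg security game (illustrative payoffs,
# not from any deployed system). The defender commits to a randomized
# coverage strategy, the attacker observes the mixture and best-responds,
# and the defender searches for the commitment that maximizes her own
# expected utility.

# Per-target payoffs: (defender if covered, defender if uncovered,
#                      attacker if covered, attacker if uncovered)
TARGETS = {
    "terminal": (1.0, -5.0, -2.0, 4.0),
    "runway":   (0.5, -2.0, -1.0, 2.0),
}

def attacker_best_response(coverage):
    """The attacker picks the target maximizing his expected utility."""
    def attacker_eu(t):
        _, _, a_cov, a_unc = TARGETS[t]
        return coverage[t] * a_cov + (1 - coverage[t]) * a_unc
    return max(TARGETS, key=attacker_eu)

def defender_eu(coverage):
    """Defender's expected utility given the attacker's best response."""
    t = attacker_best_response(coverage)
    d_cov, d_unc, _, _ = TARGETS[t]
    return coverage[t] * d_cov + (1 - coverage[t]) * d_unc

# One patrol unit split across two targets: grid-search the commitment.
candidates = ({"terminal": i / 100, "runway": 1 - i / 100} for i in range(101))
best = max(candidates, key=defender_eu)
print(best, round(defender_eu(best), 3))
```

The optimal commitment typically randomizes so that the attacker is close to indifferent between targets; that indifference point, rather than full coverage of the most valuable target, is what maximizes the defender’s expected utility.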

Read more »