Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI (Association for the Advancement of Artificial Intelligence) and ACM (Association for Computing Machinery), as well as a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for Excellence in Operations Research Practice, the Rist Prize of the Military Operations Research Society, an IBM Faculty Award, the Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, the USC Associates Award for Creativity in Research, and the USC Viterbi School of Engineering Use-Inspired Research Award.
Prof. Tambe has contributed several foundational papers in agents and multiagent systems, including in the areas of multiagent teamwork, distributed constraint optimization (DCOP), and security games. For this research, he has received the Influential Paper Award from the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) as well as, with his research group, best-paper awards at a number of premier artificial intelligence conferences and workshops, including multiple best-paper awards at the International Conference on Autonomous Agents and Multiagent Systems and the International Conference on Intelligent Virtual Agents.
In addition, the “security games” framework and algorithms pioneered by Prof. Tambe and his research group are now deployed for real-world use by several agencies, including the US Coast Guard, the US Federal Air Marshals Service, the Transportation Security Administration, the LAX Police, and the LA Sheriff’s Department, for security scheduling at a variety of US ports, airports, and transportation infrastructure. This research has led to him and his students receiving the US Coast Guard Meritorious Team Commendation from the Commandant, the US Coast Guard First District’s Operational Excellence Award, a Certificate of Appreciation from the US Federal Air Marshals Service, and a special commendation from the City of Los Angeles, given by the Los Angeles World Airports police. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring Award and the ACM Recognition of Service Award. Recently, he co-founded ARMORWAY, a company focused on risk mitigation and security resource optimization, where he serves on the board of directors. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.
Luke Muehlhauser: In Tambe et al. (2013), you and your co-authors give an overview of game theory in security applications, saying:
Game theory is well-suited to adversarial reasoning for security resource allocation and scheduling problems. Casting the problem as a Bayesian Stackelberg game, we have developed new algorithms for efficiently solving such games to provide randomized patrolling or inspection strategies.
You then give many examples of game-theoretic algorithms used for security at airports, borders, etc.
Is there evidence to suggest that the introduction of these systems has improved the security of the airports, borders, etc. relative to whatever security processes they were using before?
Milind Tambe: This is an important and wonderful question. There is a long answer to this question that compiles all of our evidence (it is actually a book chapter). I will attempt a shorter answer here, with pointers to the publications that compile this evidence.
As you note, we are fortunate that many of our game-theoretic algorithms for security resource optimization (via optimal scheduling and allocation) have jumped out of our lab and are now deployed for real use in many applications. As we write papers about these deployed applications, evidence of the benefits of these algorithms is an important issue that we must address. Unlike our more “mainstream” papers, where we can run thousands of careful simulations under controlled conditions, we cannot conduct such experiments in the real world with our deployed applications. Nor can we provide a proof of 100% security; there is no such thing. So what can we do? In gathering evidence, we have focused on one question: are we better off with our tools based on computational game theory than with what was being done previously, which typically meant relying on human schedulers or a simple dice roll for security scheduling? (A simple dice roll is often the other “automation” that is used or offered as an alternative to our methods.) Within my area of artificial intelligence, it is not controversial that AI programs can beat humans at complex scheduling tasks; regardless, we have used the following methods, ranging from simulations to actual field tests, to provide evidence that our game-theoretic algorithms indeed perform better.
- Simulations (including a “machine learning” attacker): We run simulations of security schedules, e.g., randomized patrols and assignments, comparing our approach to earlier approaches based on the techniques used by human schedulers. A machine-learning-based attacker learns any patterns in the schedules and then chooses where to attack the protected facility. The game-theoretic schedulers perform significantly better, providing higher levels of protection (Pita et al. 2008; Jain et al. 2010).
- Human adversaries in the lab: We have worked with a large number of human subjects and security experts (security officials), having them try to get through randomized security schedules, some generated by our algorithms and some by baseline approaches for comparison. Human subjects are paid based on the reward they collect by successfully intruding through our security schedules; again, the game-theoretic schedulers perform significantly better (Pita et al. 2009).
- Actual security schedules, before and after: For some security applications, we have data on how scheduling was done by humans before our algorithms were deployed and on the schedules generated after deployment. For measures of interest to security agencies, e.g., predictability of schedules, we can compare the actual human-generated schedules with our algorithmic schedules. Again, the game-theoretic schedulers perform significantly better, avoiding predictability while still ensuring that more important targets are covered with a higher frequency of patrols. Some of this data is published (Shieh et al. 2012).
- “Adversary” teams simulating attacks: In some cases, security agencies have deployed adversary-perspective teams, or “mock attacker” teams, that attempt to conduct surveillance and plan attacks, both before and after our algorithms were deployed, to check which security deployment worked better. The US Coast Guard conducted such tests, which indicated that the game-theoretic scheduler provided higher levels of deterrence (Shieh et al. 2012).
- Real-time comparison, human vs. algorithm: This is a test we ran on the metro trains in Los Angeles. For a day of patrol scheduling, we ran a head-to-head comparison of human schedulers, trying to schedule 90 officers on patrols, against an automated game-theoretic scheduler. External evaluators then evaluated these patrols without knowing who had generated each schedule. The results show that while the human schedulers required significant effort to generate even one schedule (almost a day) and the game-theoretic scheduler ran quickly, the external evaluators rated the game-theoretic schedules higher, with statistical significance (Delle Fave et al. 2014a; Delle Fave et al., under consideration).
- Actual data from deployment: This is another test run on the metro trains in LA. We compared the game-theoretic scheduler with an alternative (in this case a uniform-random scheduler augmented with real-time human intelligence) in checking for fare evaders. Over 21 days of patrols, the game-theoretic scheduler led to significantly more fare evaders being captured than the alternative (Delle Fave et al. 2014a; Delle Fave et al. 2014b).
- Domain-expert evaluation (internal and external): There have, of course, been many evaluations by domain experts comparing their own scheduling methods with the game-theoretic schedulers, and the game-theoretic schedulers have repeatedly come out ahead. The fact that our software has now been in use for several years at several different important airports, ports, air-traffic facilities, and so on is an indicator to us that the domain experts must consider this software of some value.
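The “machine learning attacker” idea from the simulations above can be sketched in a few lines. This is a toy model with hypothetical schedules and a deliberately simple learner, not the published experimental setup: an observer tallies which patrol location tends to follow which, then always guesses the most common successor. A rigid rotation is fully exploitable; a randomized schedule is not.

```python
import random
from collections import Counter, defaultdict

def guess_rate(schedule):
    """Fraction of days a pattern-learning observer predicts correctly,
    guessing the most common successor of the previous day's patrol."""
    successors = defaultdict(Counter)
    for prev, nxt in zip(schedule, schedule[1:]):
        successors[prev][nxt] += 1
    correct = sum(c.most_common(1)[0][1] for c in successors.values())
    return correct / (len(schedule) - 1)

targets = ["A", "B", "C", "D"]
rotation = (targets * 25)[:100]      # rigid A,B,C,D,A,B,... "human" rotation
rng = random.Random(0)               # fixed seed for reproducibility
randomized = [rng.choice(targets) for _ in range(100)]

print(guess_rate(rotation))          # 1.0: the rotation is fully predictable
print(guess_rate(randomized))        # well below 1.0
```

Even this crude first-order learner cracks a deterministic rotation perfectly, which is the pattern-exploitation risk the randomized game-theoretic schedules are designed to avoid.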
All of this evidence suggests that game-theoretic approaches to security resource optimization perform significantly better than their competitors, i.e., human schedulers or simple randomization. Humans were seen to fall into predictable patterns; and indeed this is an extremely complex adversarial scheduling/planning/allocation task, in which humans have to reason about trillions of possible schedules (actually even more), which is time-consuming and very difficult. It would seem, then, that this complex task should be handed over to software, letting humans concentrate on the more important task of actually, physically providing security.
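The underlying Stackelberg logic can be illustrated with a minimal sketch. This is a hypothetical two-target zero-sum game with made-up values, not any deployed model: the defender commits to randomized coverage, the attacker observes it and strikes the best remaining target, and the resulting optimal coverage does much better for the defender than a naive 50/50 split.

```python
# Toy Stackelberg security game: one defender resource split across two
# targets; the attacker observes the coverage and strikes the best target.
VALUES = [10.0, 4.0]  # hypothetical target values (zero-sum payoffs)

def attacker_target(coverage):
    # Attacker's expected payoff at t: the target's value if uncovered, 0 if caught.
    payoffs = [(1 - c) * v for c, v in zip(coverage, VALUES)]
    return max(range(len(payoffs)), key=lambda t: payoffs[t])

def defender_utility(coverage):
    # Defender loses the attacked target's value in proportion to the gap in coverage.
    t = attacker_target(coverage)
    return -(1 - coverage[t]) * VALUES[t]

def best_coverage(steps=100):
    # Grid search over ways to split one resource between the two targets.
    return max(([i / steps, 1 - i / steps] for i in range(steps + 1)),
               key=defender_utility)

print(defender_utility([0.5, 0.5]))      # naive uniform split
print(defender_utility(best_coverage())) # optimal committed coverage
```

The optimal strategy shifts coverage toward the high-value target until the attacker is nearly indifferent between targets, which is exactly the kind of weighted randomization that is hard for human schedulers to maintain by hand.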
Luke: Diana Spears mentioned that you teach a course on “Artificial Intelligence and Science Fiction,” which includes a section on your “research on Asimovian multiagents.” Is this 2007 syllabus still a pretty accurate outline of the course? What work on “Asimovian multiagents” do you discuss? She mentioned Schurr et al. (2006); is there any other work on that topic you’ve done?
Milind: The last iteration of “Understanding intelligent agents via science fiction” was taught in 2010. Here is the syllabus for it.
This course was jointly developed with my former PhD student, Prof. Emma Bowring, now an Associate Professor at the University of the Pacific; developing it was really her idea while she was still a student at USC. We have written about this course (Bowring & Tambe 2009).
While we did use Asimov’s short stories extensively, the real goal, as can be seen from the syllabus, is to introduce core concepts in agents and multiagent systems. The focus is on using these stories, Star Trek clips, and other science fiction as motivation for core concepts in AI, agents, and multiagent systems, starting from Markov Decision Processes (MDPs), POMDPs, game theory, agent modeling, and so on.
You are right that we have done some follow-up work based on Diana’s original paper and an earlier paper by Weld & Etzioni (1994), and that is a fascinating thread of research, but it isn’t the focus of this course. This is more of an undergraduate course introducing students to key concepts. Toward the end of the course we get into abstract ideas on agent design, where students may use some of the ideas advocated in this “Asimovian agents” work, which is great.
We haven’t pursued that research direction beyond that paper and an earlier one (Pynadath and Tambe 2001).
It would be a great idea to push that direction more though.
Luke: In Tambe et al. (2013) you identify open research issues for game theory in security applications, such as scalability and robustness. Which open research problems in this area do you think would have the largest practical impact if they could be solved well?
Milind: Both scalability and robustness are important. Scale-up matters because we want to solve large-scale games. For example, even if we leave aside complex scheduling constraints and just consider the abstract assignment problem faced by a security agency such as the Federal Air Marshals Service of assigning, say, 20 defenders to 1000 flights, that is 1000-choose-20 possibilities. These problems become so large that we can’t fit the game in normal form in memory and must somehow find an optimal defender strategy without explicitly representing the game in memory. On the other hand, we wish to handle robustness because of the many uncertainties in the game: uncertainty about the adversary’s surveillance (how much surveillance is actually going on), uncertainty about the adversary’s payoffs, capabilities, and so on. When we combine the two requirements of solving large-scale games and handling uncertainty, the problem becomes even more complex. These remain critical challenges for us to address.
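The scale involved is easy to verify: 1000-choose-20 works out to roughly 3.4 × 10^41 pure strategies for the defender, which is why the game cannot be written down explicitly in normal form.

```python
import math

# Number of ways to assign 20 defenders (air marshals) to 20 of 1000 flights:
# the 1000-choose-20 assignment problem mentioned above.
assignments = math.comb(1000, 20)

print(assignments)
print(f"about 10^{len(str(assignments)) - 1} pure strategies")
```

A normal-form payoff matrix with that many defender rows could never be stored, motivating the compact game representations and strategy-generation algorithms developed in this line of work.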
There are, however, many other major open research challenges. Significant effort has been focused on modeling adversary bounded rationality; this exciting research at the intersection of algorithmic and behavioral game theory is a major area of research in security games. Furthermore, our recent work on protecting wildlife and fisheries (Yang et al. 2014; Haskell et al. 2014) brings up challenges related to machine learning: we now have data on, say, poachers’ movements and actions, and this data can be used to create better models of the poachers. Another area is preference elicitation: if there is significant uncertainty about many aspects of the domain, and reducing this uncertainty will take effort, which features do we focus on first? Our recent paper at AAAI 2014 (Nguyen et al. 2014) provides our initial thrust into this area.
In short, while we have made significant progress, not just in my group, but as the “security games community,” there is a lot more that still needs to be done.
Luke: Thanks, Milind!