Paulo Tabuada on program synthesis for cyber-physical systems


Paulo Tabuada was born in Lisbon, Portugal, one year after the Carnation Revolution. He received his “Licenciatura” degree in Aerospace Engineering from Instituto Superior Tecnico, Lisbon, Portugal in 1998 and his Ph.D. degree in Electrical and Computer Engineering in 2002 from the Institute for Systems and Robotics, a private research institute associated with Instituto Superior Tecnico. Between January 2002 and July 2003 he was a postdoctoral researcher at the University of Pennsylvania. After spending three years at the University of Notre Dame as an Assistant Professor, he joined the Electrical Engineering Department at the University of California, Los Angeles, where he established and directs the Cyber-Physical Systems Laboratory.

Paulo Tabuada’s contributions to cyber-physical systems have been recognized by multiple awards, including the NSF CAREER award in 2005, the Donald P. Eckman award in 2009, and the George S. Axelby award in 2011. In 2009 he co-chaired the International Conference on Hybrid Systems: Computation and Control (HSCC’09), and in 2012 he was program co-chair for the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys’12). He has also served on the editorial boards of the IEEE Embedded Systems Letters and the IEEE Transactions on Automatic Control. His latest book, on verification and control of hybrid systems, was published by Springer in 2009.

Luke Muehlhauser: In “Abstracting and Refining Robustness for Cyber-Physical Systems,” you and your co-author write that:

…we present a design methodology for robust cyber-physical systems (CPS) [which]… captures two intuitive aims of a robust design: bounded disturbances have bounded consequences and the effect of sporadic disturbances disappears as time progresses.

You use an “abstraction and refinement” procedure for this. How does an abstraction and refinement procedure work, in this context?


Paulo Tabuada: Cyber-physical systems are notoriously difficult to design and verify because of the complex interactions between the cyber and the physical components. Although control theorists have developed powerful techniques for designing and analyzing physical components, say described by differential equations, and computer scientists have developed powerful techniques for designing and analyzing cyber components, say described by finite-state models, these techniques are for the most part incompatible: the latter rely on discrete mathematics while the former rely on continuous mathematics. Our approach is based on replacing all the physical components with cyber abstractions so that all the remaining design and verification tasks can be done in the cyber world. The construction of these abstractions is based on rigorous numerical simulation combined with an analysis of the differential equation models to guarantee that the original physical components and their abstractions are equivalent up to a desired precision. Technically, “equivalent up to a desired precision” means approximately bisimilar; intuitively, it means that both models generate the same set of behaviors up to a desired precision.
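To make the idea of a cyber abstraction concrete, here is a minimal sketch, in Python, of one common way such finite-state abstractions are built: grid the state and input spaces of a simple discrete-time system, simulate one step from each grid point, and quantize the successor back onto the grid. The dynamics, ranges, and grid spacings below are invented for illustration and are not Tabuada’s actual toolchain; the grid spacing plays the role of the “desired precision.”

```python
# A minimal sketch (illustrative assumptions throughout) of building a
# finite-state abstraction of a scalar discrete-time system
# x_{k+1} = A*x + B*u by gridding the state and input spaces.

import itertools

A, B = 0.9, 0.5          # stable scalar dynamics (assumed for illustration)
ETA = 0.1                # state grid spacing: the abstraction's precision knob
MU = 0.25                # input grid spacing
X_RANGE = (-2.0, 2.0)    # region of interest
U_RANGE = (-1.0, 1.0)

def grid(lo, hi, step):
    """Finite set of grid points covering [lo, hi]."""
    n = int(round((hi - lo) / step))
    return [lo + i * step for i in range(n + 1)]

def quantize(x, lo, hi, step):
    """Map a concrete state to the nearest abstract grid point."""
    x = min(max(x, lo), hi)
    return lo + round((x - lo) / step) * step

states = grid(*X_RANGE, ETA)
inputs = grid(*U_RANGE, MU)

# Abstract transition relation: for each (abstract state, input), simulate
# one step of the concrete dynamics and quantize the successor to the grid.
transitions = {
    (x, u): quantize(A * x + B * u, *X_RANGE, ETA)
    for x, u in itertools.product(states, inputs)
}

# The one-step mismatch between the concrete successor and the abstract
# successor is bounded in terms of the grid spacing ETA -- the flavor of
# guarantee behind approximate bisimulation.
x, u = 0.37, 0.5
xq = quantize(x, *X_RANGE, ETA)
print(abs((A * x + B * u) - transitions[(xq, u)]))  # small, shrinks with ETA
```

Once the abstraction is in hand, every remaining design and verification step can be carried out on this finite object with the usual discrete techniques.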


Luke: Last summer you gave a four-part tutorial (p1, p2, p3, p4) on program synthesis for cyber-physical systems. For someone who isn’t familiar with program synthesis, can you describe how it’s done in the context of cyber-physical systems, and give an example of such a system that has been implemented?


Paulo: Program synthesis is already a challenging problem for software-only systems. In the context of cyber-physical systems it becomes even more challenging, since the objective is to synthesize a program that will make a physical system behave as intended. I am currently working on two projects related to program synthesis. One of these has as its objective the synthesis of programs that control the behavior of bipedal robots. The very same techniques are being used in the same project to synthesize programs for adaptive cruise control and lane departure control systems in collaboration with Toyota and Ford. To give you an idea of where the challenge lies, let me recall how this project started. My colleagues Jessy Grizzle at UMich and Aaron Ames at Texas A&M came to me and shared their frustration with the programs they developed to control their robots. While they were very satisfied with making their robots walk on flat and unobstructed surfaces, they faced serious problems on uneven terrain. Their approach was to develop a set of rules, i.e., a reactive program that responds to the stimuli provided by the sensors by determining which actuators should be used and how. Although the rules were developed based on common sense, it soon became clear that it was impossible to predict how the execution of these rules would impact the motion of the robot. Moreover, a small change in the rules would lead to completely different and unexpected behavior. Our approach is to construct a finite-state abstraction of the robot dynamics and then to synthesize a reactive program that forces this abstraction to satisfy a desired specification. The synthesis of these reactive programs is done via the solution of a two-player game where the synthesized program plays against the robot’s environment and enforces the specification no matter which action the environment takes. In this way, rather than verifying a reactive program, we synthesize one that is guaranteed to be correct by construction.
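The game-based synthesis step can be illustrated with a small sketch. The states, actions, and transition function below are invented stand-ins, not the actual robot model; what the code does show is the standard fixed-point computation of the winning region for a safety specification: the set of states from which the controller has an action that keeps the system safe no matter what the environment does.

```python
# A minimal sketch of reactive synthesis as a two-player safety game.
# All names and dynamics here are hypothetical; the fixed-point computation
# of the winning region is the standard construction.

STATES = {"upright", "leaning", "fallen"}
CTRL = {"push_off", "extend_leg"}     # controller (program) actions
ENV = {"flat", "bump"}                # environment (terrain) actions
SAFE = {"upright", "leaning"}         # specification: never reach "fallen"

def step(state, u, e):
    """Abstract transition function (purely illustrative dynamics)."""
    if state == "fallen":
        return "fallen"
    if e == "bump":
        return "leaning" if u == "extend_leg" else "fallen"
    return "upright" if u == "push_off" else state

def winning_region(safe):
    """Greatest fixed point: states where some u keeps every e inside `safe`."""
    win = set(safe)
    while True:
        # Controllable predecessor: there exists a control action u such that
        # for all environment actions e the successor stays in the current set.
        nxt = {s for s in win
               if any(all(step(s, u, e) in win for e in ENV) for u in CTRL)}
        if nxt == win:
            return win
        win = nxt

print(winning_region(SAFE))
```

The synthesized reactive program is then simply the strategy that, in each winning state, picks any action whose successors all remain inside the winning region; correctness holds by construction rather than by after-the-fact verification.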


Luke: Robots and other AI systems are becoming increasingly autonomous in operation: self-driving cars, robots that navigate disaster sites, HFT programs that trade stocks quickly enough to “flash crash” the market or nearly bankrupt a large equities trader, etc.

How might current AI safety methods (formal verification, program synthesis, simplex architectures, etc.) be scaled up to meet the safety challenges raised for highly autonomous systems operating in unknown, continuous, dynamic environments? Will our capacity to make systems more autonomous and capable outpace our capacity to achieve confident safety assurances for those systems?


Paulo: Indeed, that is what has been happening so far. Our ability to create large complex systems is far ahead of our understanding of the basic scientific principles ensuring their safe operation. In my opinion, synthesis-based approaches have the best chance to scale to the level of existing applications. Formal verification of these systems is extremely hard since the design space is very large; no formal verification technique can handle the wide variety of systems from such a large design space. When using synthesis, however, we can reduce the design space and thus obtain much more structured designs for which formal guarantees can be obtained. What we lose, in turn, is the ability to find all the design solutions. This tradeoff is well understood in other areas of engineering, e.g., brick-and-mortar or wood-framed building construction. I believe that it won’t take long until we discover a few synthesis techniques that guarantee safety and correctness of highly autonomous systems, even if these result in somewhat more conservative solutions.


Luke: You write that “it won’t take long until we discover a few synthesis techniques that guarantee safety and correctness of highly autonomous systems, even if these result in somewhat more conservative solutions.” Could you clarify what you have in mind? For example, what might a “conservative solution” look like for a high-assurance program synthesis solution for the software of a self-driving car? (Feel free to give a different example if you prefer.)


Paulo: By conservative I mean that we will have to restrict the synthesized software so that it falls within a class for which synthesis techniques are available. A simple example is software that can be described by timed automata. Although it might be convenient to make a decision based on a generic function of several timers, to stay within the timed-automata class we are only allowed to compare timers against constants or to compare differences between timers against constants. For a self-driving car, this could mean making decisions based on delays exceeding thresholds, but not being allowed to make decisions based on, say, the average of several delays.
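As a concrete illustration of this guard discipline, here is a minimal sketch in Python. The clock names and thresholds are invented for the example; the point is which comparisons stay inside the timed-automata class and which fall outside it.

```python
# A minimal sketch (hypothetical names and thresholds) of timed-automaton-style
# guards: individual clocks, or differences of clocks, compared against
# constants -- but no arbitrary functions of the clocks.

BRAKE_DELAY_MAX = 0.2   # seconds; illustrative threshold
SENSOR_SKEW_MAX = 0.05  # seconds; illustrative threshold

def decide(brake_clock: float, sensor_clock: float) -> str:
    # Allowed: a clock compared against a constant.
    if brake_clock > BRAKE_DELAY_MAX:
        return "apply_brakes"
    # Allowed: a difference of two clocks compared against a constant
    # (a so-called "diagonal" guard).
    if brake_clock - sensor_clock > SENSOR_SKEW_MAX:
        return "resync_sensors"
    return "cruise"

# NOT allowed inside the timed-automata class (and hence not covered by those
# synthesis techniques): a guard on a generic function of several clocks, e.g.
#   if (brake_clock + sensor_clock) / 2 > T: ...
```

The conservatism is exactly this restriction: the synthesized controller may branch only on guards of the permitted shape, even when a richer guard would be more convenient.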


Luke: Thanks, Paulo!