André Platzer is an Assistant Professor of Computer Science at Carnegie Mellon University. He develops the logical foundations of cyber-physical systems to characterize their fundamental principles and to answer the question of how we can trust a computer to control physical processes. He received an M.Sc. from the University of Karlsruhe (TH), Germany, in 2004 and a Ph.D. in Computer Science from the University of Oldenburg, Germany, in 2008, after which he joined CMU.

Dr. Platzer has received a number of awards for his research on logics of dynamical systems, cyber-physical systems, programming languages, and theorem proving, including an NSF CAREER Award and an ACM Doctoral Dissertation Honorable Mention Award. He was also named one of the Brilliant 10 young scientists by Popular Science magazine and one of AI’s 10 to Watch by IEEE Intelligent Systems magazine.

**Highlights** of Platzer’s thoughts, from the interview below:

- KeYmaera is a formal verification tool for cyber-physical systems. It has seen many real-world applications, e.g. verifying non-collision in the European Train Control System.
- KeYmaera’s capability comes from Platzer’s “differential dynamic logic,” which builds on earlier work in (e.g.) dynamic logic.
- Contra Levitt, Platzer isn’t convinced that autonomous robots will need to use decision theory, though they may.
- Platzer contrasts formal verification (“did I build the system right?”) with formal validation (“did I build the right system?”). One way to make progress on the latter is to develop what Rushby called “deep assumption tracking.”

**Luke Muehlhauser**: One of your projects is KeYmaera, a tool for deductive formal verification of hybrid systems, including autonomous control systems for trains, planes, and robots. What counts as a “hybrid system,” in this context? What commercial or government applications do you see as being within KeYmaera’s plausible reach, but for which KeYmaera isn’t yet being used?

**André Platzer**: KeYmaera is a tool for verifying hybrid systems. A hybrid system — or, more precisely, a hybrid dynamical system — is one in which both discrete dynamics and continuous dynamics interact. In other words, part of the system exhibits a discrete behavior, often caused by computation or communication, and other parts of the system exhibit a continuous behavior, such as physical motion in space or continuous chemical reactions. Systems in classical engineering have predominantly continuous dynamics, while computer science classically explores only discrete dynamics, because, after all, computers compute one thing after another and are, thus, inherently discrete. Communication is a similarly discrete phenomenon, where information is exchanged once at a certain point in time and then again later on at some other point in time. For example, over a wireless channel, communication does not literally happen continuously all the time, only every once in a while, however fast it might appear to the innocent bystander.

Hybrid systems research starts with the realization that exciting things happen as soon as you make discrete dynamics and continuous dynamics interact. All of a sudden, your systems can, for example, use discrete computers and the power of advanced software algorithms to control and influence continuous physical processes and make them safer.

But we can no longer understand the discrete and continuous parts of the system in isolation, because they interact. It is not enough to worry just about whether the software could crash; we must also worry about whether it feeds the right control inputs into the physical system it is trying to control and what effect those inputs will have on the physics. Likewise, it is not sufficient to understand the continuous dynamics alone, because their effect depends on the control input generated by the computer program.

This is where hybrid systems start and where KeYmaera comes in. Hybrid systems, which are a common core mathematical model behind cyber-physical systems (CPS), bring exciting new capabilities of how we can change the world for the better. But these capabilities come hand in hand with our responsibility of making sure that the systems are not going to let us down when we badly need them.

As Jeannette Wing has aptly phrased it: “How can we provide people with cyber-physical systems they can bet their lives on?”

Application areas include aviation, automotive, railway, robotics, medical devices, physical or chemical process control, factory automation, and many more areas than I could list. Hybrid systems are a very general mathematical model, which makes them interesting for many possible application areas. KeYmaera has indeed been used with industrial partners from the automotive industry, on mobile robotics applications of an industrial partner, and in a study of a surgical robotics system by a government lab, and we are currently working with the Federal Aviation Administration on how to use KeYmaera for the next-generation Airborne Collision Avoidance System ACAS X. Having said that, all those application areas bring up interesting challenges. Since most software in the wild still contains bugs, there is more potential, and there remain more challenges, in all the other application areas as well.

**Luke**: What role does differential dynamic logic play in KeYmaera? What was your inspiration for developing it?

**André**: *Differential dynamic logic* is the theoretical underpinning and mechanism behind KeYmaera. It is the principle that makes the verification power of KeYmaera work. Differential dynamic logic is the logic in which you can specify and verify properties of hybrid systems. The logic integrates all the relevant features of hybrid systems in a single language: differential equations for describing their continuous dynamics, difference equations / discrete assignments for discrete dynamics and conditional switching, nondeterministic choices, as well as repetition. With these fundamental dynamical principles, all the other relevant ones can be expressed.
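
As a rough illustrative sketch (this is schematic notation, not KeYmaera’s actual input syntax), a dL safety statement wraps a hybrid program — discrete control alternating with a continuous plant — in a box modality asserting that the postcondition holds after all runs:

```latex
% Schematic dL safety formula: init -> [ (ctrl; plant)* ] safe
% ctrl combines a test, discrete assignments, and nondeterministic choice;
% plant is a differential equation with an evolution domain constraint.
\mathit{init} \;\rightarrow\;
  \Big[\big(\underbrace{(?\,\mathit{guard};\; a := A) \;\cup\; a := -b}_{\text{discrete control}}
  \,;\;
  \underbrace{\{x' = v,\; v' = a \;\&\; v \ge 0\}}_{\text{continuous plant}}\big)^{*}\Big]\;
  \mathit{safe}
```

The repetition operator `*` lets control decisions and continuous evolution alternate arbitrarily often, which is exactly the interaction of discrete and continuous dynamics described above.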

I designed differential dynamic logic when I was starting my PhD. I was always very curious about how the core logical foundations of hybrid systems would look and simply could not wait to discover them, especially since they beautifully bring together the discrete and continuous parts of mathematics, which are classically considered separately.

But then someone showed me a model of the *European Train Control System* (ETCS) with critical parameters (such as when to start braking) that had been chosen more or less at random based on their performance in a few simulations. Suddenly I saw that there was also a real need in practice for the flexibility that differential dynamic logic provides. There simply had to be a better and more disciplined way of approaching the problem of how to design hybrid systems and safely choose their respective parameters. Any specific number for a crucial design parameter can obviously only be safe within a range of constraints on all the other parameters of the system. And, indeed, the chosen parameter value for the start-braking point in the ETCS model turned out to cause a collision when the speed was increased just a little.
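
The point about parameters can be illustrated with a small simulation of my own (a deliberately simplified sketch, not the ETCS model or anything KeYmaera verifies): the safe point to start braking is not a fixed position but a function of the current speed `v` and braking power `b`, roughly when the remaining movement authority drops to the stopping distance v²/(2b) plus a margin for the control cycle.

```python
def stopping_distance(v, b):
    # distance needed to brake from speed v with constant deceleration b
    return v * v / (2 * b)

def must_brake(z, v, m, b, margin):
    # start braking once the remaining movement authority m - z can only
    # just absorb the stopping distance plus a safety margin
    return m - z <= stopping_distance(v, b) + margin

def simulate(v0, m, b, dt=0.01):
    # coast at speed v0, then brake once triggered; return stopping position
    z, v, braking = 0.0, v0, False
    while v > 0.0:
        if must_brake(z, v, m, b, margin=v * dt):
            braking = True
        if braking:
            v = max(0.0, v - b * dt)
        z += v * dt
    return z
```

Because the trigger depends on `v` and `b`, the train stops inside its movement authority at different speeds, whereas any fixed braking point tuned for one speed in a few simulations fails as soon as the speed increases.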

From this point on, I designed differential dynamic logic with a dual focus on both its core theory and its practical use in applications. This broader focus certainly made it more successful, especially when practical applications raised interesting theoretical challenges and, conversely, questions considered for purely theoretical reasons later turned out to be critical in unsuspected applications.

Differential dynamic logic is what we later used as the basis for implementing the theorem prover KeYmaera and has served us well since. Much has been written about differential dynamic logic since my first draft in 2005, including a book. But a good survey can be found in a LICS tutorial.

**Luke**: What previous work in logic and differential equations did you build on to create differential dynamic logic?

**André**: Differential dynamic logic fortunately had all of science to build on. An exhaustive list of all the areas of logic, mathematics, and computer science that its development and theory are drawing from would be longer than the whole interview. And the list with connections to control and engineering would be just as long. In fact, this is one of the reasons why research on hybrid systems is so exciting. It is certainly a way of being able to explore any of your favorite areas of mathematics while still having important applications that other people care about.

Fortunately, KeYmaera gives you all those powerful connections to other areas of science & engineering, but they are hidden behind the very elegant and intuitive language of differential dynamic logic. That means the user does not need to know all those areas of math and can play with the syntactical constructs of differential dynamic logic and its proof rules. Differential dynamic logic gives people a safe framework in which they can leverage and exploit all these areas without knowing all of them in depth. The computer can then check that the reasoning is combined in a correct way.

The most central previous work I built on is dynamic logic, originally due to Pratt and pioneered by many people including Harel, Meyer, Parikh, and Kozen, as well as Sophus Lie’s theory of continuous algebras and continuous groups. Both of those inventions are from before I was born, but they have helped me a lot in designing differential dynamic logic.

Dynamic logic is a logic for conventional discrete programs that I hybridized by augmenting it with differential equations and generalizing it to hybrid systems with their interacting discrete and continuous dynamics. The original dynamic logic has also been implemented and used successfully in the prover KeY on which our prover KeYmaera is based. This heritage also explains the funny name KeYmaera, which makes independent sense, as KeYmaera is a homophone to Chimaera, the hybrid animal from ancient Greek mythology.

In retrospect, it is obvious that dynamic logic has really been longing for hybrid systems for three decades without anybody noticing. Dynamic logic was originally formulated for nondeterministic systems in which programs can have more than one possible effect. This is a fundamental characteristic, but it is really slightly at odds with the behavior of conventional programs, which are mostly deterministic, i.e., each program statement has exactly one effect. Well, in hybrid systems, the story is different, because CPS implementations are never as deterministic as you might think they would run, and because the environment always has more than one possible future behavior that you need to consider. Dynamic logic and dynamical systems are simply made for each other.

Even more fundamental than dynamic logic itself are the general fundamental principles of logic. They led me to the realization that hybrid systems deserve a study of their logical characteristics. Hybrid systems used to be second-class citizens because they were challenging and no one had created a logical foundation for them. Now that those proper logical foundations are in place, we see that hybrid systems and their programmatic expressions as hybrid programs are not just some peripheral artifact, but are actually a powerful construct in their own right. They possess the intrinsic characteristics of integrating interacting discrete and continuous behavior. The result of such an interaction may be a little bit like what happens when particles of fundamentally different origin collide in a high-energy physics collider experiment. Historically disjoint areas of mathematics and science suddenly begin to interact. Really surprising interactions happen, and scientific bridges are being built that unite discrete and continuous computation by way of the mediating nature of hybrid dynamics.

Another example illustrating the impact of logic on hybrid systems is that proof theory led to a generalization of Lie’s seminal results in ways that relate to Gentzen’s seminal cut elimination in logic. Cuts are the logically fundamental mechanism behind lemmas in proofs and have been shown by Gentzen to be unnecessary. They are rather discrete, though, because cuts prove a (static) lemma and then use it. The continuous counterpart of discrete cuts is the differential cut, which proves a property of a differential equation and then restricts the system behavior to comply with this property. Differential cuts turn out to be fundamental and, for all I know, Sophus Lie does not seem to have been aware of the significance of such a logical mechanism for differential equations. Phenomena like this one demonstrate that amazing science can happen when logicians like Gerhard Gentzen interact with algebraists like Sophus Lie.
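
Schematically, a differential cut can be pictured as a two-premise proof rule (an illustrative rendering, not the exact formulation): first prove that C is an invariant along the differential equation, then use C to strengthen the evolution domain:

```latex
% Differential cut (schematic): prove C invariant along the ODE,
% then restrict the continuous evolution to the region where C holds.
\frac{\Gamma \vdash [\,x' = f(x) \;\&\; Q\,]\,C
      \qquad
      \Gamma \vdash [\,x' = f(x) \;\&\; Q \land C\,]\,P}
     {\Gamma \vdash [\,x' = f(x) \;\&\; Q\,]\,P}
```

Just as an ordinary cut proves a lemma and then uses it, a differential cut proves a property of the continuous dynamics and then lets the rest of the proof rely on it.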

Yet, at the end of the day with all those mind-bending theoretical surprises, it is certainly also reassuring to observe their practical effect and relevance in verification and analysis results for CPS applications that have not been possible before. It is simply a nice mix to have exciting theory with useful applications at your fingertips every day.

**Luke**: Do you think the creators of dynamic logic, or the creators of other theoretical work that you built on to develop differential dynamic logic, thought that *their* work could be useful for program safety or verification one day? Or is this an instance of you plucking up theoretical results from unrelated disciplines and using them for your own purposes in software verification?

**André**: Well, these are all very smart people. I find it hard to imagine the pioneers of dynamic logic would not have thought of possible uses in program verification. Even if dynamic logic might have started from more theoretical / logical investigations, I am sure they were aware of this future. And I am not the first to use dynamic logic for verification either. But it is true that, unlike in the case of differential dynamic logic, the development of the theory, practice, and applications of dynamic logic was mostly led by different groups. Maybe it is just part of the natural appeal of hybrid systems that I can never resist to be as curious about their theory as I am about their applications.

Having said that, an excellent question is whether the inventors of dynamic logic have thought of hybrid systems. I, of course, don’t know that. Dynamic logic predates the invention of hybrid systems by almost two decades. But hybrid systems have also almost always been around us. It’s just that until the 1990s, nobody noticed that they are a fascinating mathematical phenomenon in their own right.

Concerning the other primary source of inspiration, the seminal work of Sophus Lie, he might not have thought of possible uses in the verification of hybrid systems, because Sophus Lie’s results predate dynamic logic by a century. He invented what are now called Lie groups 1867-1873, which is long before computers were around. Yet, I do not want to leave the impression these were my only inspirations when developing differential dynamic logic and its theory and applications. There is significant influence also from ideas of Kurt Gödel, Gerhard Gentzen, Alfred Tarski, David Hilbert, Alonzo Church, Yiannis Moschovakis, Henri Poincaré, Joseph Doob, Eugene Dynkin, and many many others. I most certainly owe my results to what they have found out before.

**Luke**: Levitt (1999) argued that as intelligent systems become increasingly autonomous, and operate in increasingly non-deterministic, unknown environments, it will not be possible to model their environment and possible behaviors entirely with tools like control theory:

If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable… events routinely occur. It will not be practical, or even safe, to halt robotic actions whenever the robot encounters an unexpected event or ambiguous [percept].

Currently, commercial robots determine their actions mostly by control-theoretic feedback. Control-theoretic algorithms require the possibilities of what can happen in the world be represented in models embodied in software programs that allow the robot to pre-determine an appropriate action response to any task-relevant occurrence of visual events. When robots are used in open, uncontrolled environments, it will not be possible to provide the robot with a priori models of all the objects and dynamical events that might occur. In order to decide what actions to take in response to un-modeled, unexpected or ambiguously interpreted events in the world, robots will need to augment their processing beyond controlled feedback response, and engage in decision processes.

Thus, he argues, autonomous programs will eventually need to be decision-theoretic.

What do you think of that prediction? And in general, what tools and methods do you think will be most relevant for achieving high safety assurances for the highly autonomous systems of the future?

**André**: Levitt definitely has a point about the necessity of stronger safety arguments for robots, especially in light of their challenging and uncertain environments.

I would, however, not advise dismissing control theory entirely. I do not even see how decision theory would contradict control theory. Control theory is, without any doubt, contributing valuable technology and perspectives on cyber-physical systems, including robots. The spirit of cyber-physical systems and their mathematical model of hybrid systems is to learn from *all* the various disciplines involved and bring concepts together rather than segregating them.

While I sympathize with Levitt’s conclusion about the possible use of decision-theoretical methods, I disagree with his argument. The reason why I do is a nice illustration of important principles behind cyber-physical systems. In the text you quoted and other places in his article, Levitt argues that control theory does not help because it requires models. He further argues that the solution would be to use decision theory, which, however, still needs models:

“For each step in a plan to accomplish a robot task, […] we can model the robot’s immediate possible action choices as decisions. Then we can develop a priori sets of values for the possible task-relevant outcomes of robot actions.”

In fact, this needs even more models: prior belief distribution models, models of how perception updates beliefs, and models of the cost of taking an action in a state.

It is inconclusive to argue that the cure to using models is to use more models, even if this argument occurs frequently. At the same time, it is hard to escape the Cartesian Demon without adopting at least some model of the world. As long as we acknowledge that our predictions are subject to some form of models, however hidden or data-driven they may be, there is, thus, nothing wrong in using models. And in fact, there is even a good reason to consider increasingly more flexible models.

Regardless, I agree with Levitt that robots live in an uncertain environment, and uncertainty thus needs to be included in models. Simple models are sufficient for understanding some phenomena in a system, while other aspects need more complicated models. But they are still models! Whether the right model is deterministic, probabilistic, nondeterministic, or adversarial depends on the exact purpose. In some situations, nondeterministic understandings of uncertainty are much better; in others, they have to be probabilistic; in yet others, adversarial.

This is one of the central guiding principles behind multi-dynamical systems, which I define as systems that combine multiple dynamical aspects such as discrete, continuous, stochastic, nondeterministic, or adversarial dynamics. Multi-dynamical systems generalize hybrid systems and provide the dynamical features of cyber-physical systems that hybrid systems are missing. Differential dynamic logic generalizes very elegantly to dynamic logics of multi-dynamical systems.

This kind of flexibility is crucial for success in high assurance safety systems. It is simply a pity if all models, analysis results, and techniques need to be discarded entirely as soon as a dynamic aspect occurs that the techniques do not handle. If, instead, models and analysis techniques are as flexible as the family of differential dynamic logics, then there is no overwhelming cost associated with introducing or dropping nondeterminism or other dynamical aspects.

My colleagues Bruce Krogh, David Garlan and our students have argued for such flexibility of models as well, where the insight is that it is often more feasible to consider multiple heterogeneous models rather than a single all-embracing one.

Another frequent but faulty argument that Levitt makes is:

“In fact, we claim that if robot perceptions were certain, that we would have almost no uncertainty associated to robot behaviors.”

The argument “if only we could sense, we could act” is a common fallacy. Its contrapositive is quite true: If we cannot sense, it is virtually impossible to control anything safely. Yet, even if the sensors happen to work perfectly, there are still numerous opportunities for subtle and blatant mistakes in controllers. And there are even more opportunities for mistakes coming from the uncertainty of the effects of actions that Levitt readily dismisses. For emphasis, I’ll focus only on discussing controller mistakes that are independent of sensing and actuation uncertainty.

For example, our verification tool KeYmaera led Yanni Kouskoulas, one of my collaborators at the Johns Hopkins University Applied Physics Lab (JHU APL), to find and fix two very subtle flaws in the controllers of a surgical robotics system for skull-base surgery. These controllers had been developed according to the best-practice state of the art in safety-critical systems and had been scrutinized by their developers. Yet the developers had second thoughts and approached the JHU APL, who ultimately approached us. The fact that the bugs were only found by formal verification in the end demonstrates that it is just extraordinarily difficult to get cyber-physical systems correct without the right analytic tools. But I am not surprised that it took sound verification tools to find the bugs. One was a timing bug caused by the inertia of the system. The other one had to do with very subtle acting and counteracting control depending on the precise geometrical relationships.

This is not an isolated case. We have observed similar benefits of using verification in our DARPA HACMS project. It was an interesting experience to observe the discrepancies that can arise when comparing formally verified parts of the robot to non-verified parts. That is always an eye-opening experience where all participants learn something important about the application domain.

Levitt underestimates substantial challenges:

“In summary we claim that valuing world states is computationally tractable in two senses. The first is that we can specify the values about world states a priori and use them to dynamically value actions during task execution. The second is that the number of such valuations is order linear in a relatively small number of abstract classes of world entities that concern a robotic task, such as roads, traffic signals, humans and animals.”

But the complexity of decision making is quite a problem in practice. Even seemingly simple planning instances are NP-hard, and Bellman had already identified the curse of dimensionality in the 1950s. Correct decisions that come half an hour too late are not particularly useful for rescuing a robot’s safety mission. Real-time responses are critical. That’s one of the reasons why it is crucial to have pre-verified safe responses as opposed to settling for unbounded online deliberation.
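
One way to picture pre-verified safe responses is a runtime sandbox in the spirit of Simplex-style architectures (the sketch below is my own illustration with made-up names, not our actual verification machinery): an arbitrary planner may propose an action, but it is only accepted if a simple, pre-verified safety condition still holds after taking it; otherwise the system falls back to a provably safe action such as braking.

```python
def stopping_dist(v, b):
    # distance needed to stop from speed v with braking power b
    return v * v / (2 * b)

def make_check(b, eps):
    # safety condition: after one control cycle of length eps with
    # acceleration a, the vehicle can still stop within distance d
    def check_safe(state, a):
        d, v = state
        d_after = d - (v * eps + a * eps * eps / 2)
        v_after = max(0.0, v + a * eps)
        return d_after >= stopping_dist(v_after, b)
    return check_safe

def safe_fallback(state, proposed_accel, check_safe, brake_accel):
    # accept the planner's proposal only if the safety check passes;
    # otherwise override with the provably safe braking action
    if check_safe(state, proposed_accel):
        return proposed_accel
    return brake_accel
```

The safety check is cheap and runs in constant time per cycle, so the expensive, possibly unbounded deliberation of the planner never stands between the system and a safe response.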

In contrast to what the above quote indicates, it has been an open challenge for decades to find out how to get the models right, get the values right, and make sure both translate into the right action to choose.

Consider ACAS X, the next generation Airborne Collision Avoidance System that the FAA tasked our collaborators at JHU APL and us to analyze with our verification techniques. Its design is built on decision theory in the form of Markov Decision Process optimization. On some level, it might sound trivial to identify that “no collisions ever” is the highest value in designing a collision avoidance system for aircraft. Yet, how exactly does this translate into math? What are the right secondary objectives that prevent unnecessary avoidance maneuvers? What is the right model to assume for an intruder aircraft? For pilots’ responses? After many years of careful and highly skilled design of ACAS X, I have still seen an “optimal” interim version that is just not what anybody would want. And that, fortunately, will not fly.

Overall, I would argue that verification ultimately reduces correctness of cyber-physical systems to the question of whether we got the model right. It is a true challenge to identify the right model for a problem. But once we did, there are still numerous subtle ways of getting the answer wrong. These are the flaws that verification removes. Having said that, verification is also making progress on validating whether we got the right model in the first place.

For the record, I also strongly disagree with other claims Levitt made in that position paper. Especially:

“We claim that the core capability that makes ethical behaviors possible for robots is to understand the world external to the robot through the automated interpretation of sensory data. Robot perceptions have a fundamentally different relationship to the truth or falsity of statements about the world than is possessed by syntactic rules or similar representations.”

This claim directly contradicts Alonzo Church’s seminal proofs from 1936. And my primary line of research on logic and proofs for cyber-physical systems shows exactly how such relationships can be established and successfully exploited for verification. Thus, I understand Levitt’s claims more as possible suggestions than as conclusive arguments. That might be a logician’s trait, though, to distinguish theorems from conjectures by whether or not they come with proof.

**Luke**: You wrote that “verification is also making progress on validating whether we got the right model in the first place.” What did you have in mind, there? The thing that came to mind for me was Rushby (2013), but I don’t know much about concrete ongoing progress on the problem of reducing (what he calls) “epistemic doubt.”

**André**: John Rushby makes a great case for distinguishing logic doubt (“did we get the arguments right?”) from epistemic doubt (“did we get our knowledge of the world right?”). I agree with him that we can get logic doubt under control and make it a problem of the past. Yet that is exactly why we need such strong, sound reasoning frameworks for cyber-physical systems as differential dynamic logic. The ultimate proof will simply have to take the dynamics of the system into account just as much as it takes its control into account.

Terminologically, I usually contrast formal verification (“did I build the system right?”) with formal validation (“did I build the right system?”). That is primarily because verification versus validation has a widespread (though not unambiguous) connotation in the field that is compatible with this distinction. And also because I want to emphasize that we should not settle only for arguments that Rushby aptly phrases as:

“Because an expert says so.”

Yet, it doesn’t matter what name we use, it’s still our responsibility to make sure the system really works in the end.

Elements of formal validation are also used in our NSF Expedition project CMACS (Computational Modeling and Analysis for Complex Systems), led by my colleague Ed Clarke. Among other things, we also probe models for plausibility by subjecting them to formal verification tests of whether or not they satisfy expected properties. These are not only the safety-critical properties in the end, but also rather mundane properties whose only purpose is to validate whether the model behaves as expected. This technique has also been successful in formal methods for chip design.

To follow up on Rushby’s suggestions, I completely agree with his plea for what I would call deep assumption tracking, that is, making sure all assumptions and conclusions are kept track of meticulously. This is again an important topic in verification with multiple heterogeneous models. It is a common reason for minor and major hiccups if someone gets the assumptions wrong, or if no one knows about them, so that other people draw the wrong conclusions. That is also why inadequate modeling assumptions that are left implicit tend to have the most dire consequences for a system’s behavior.

In our work on provably safe obstacle avoidance for autonomous robotic ground vehicles, for example, our models progressed by successively enriching the dynamic capabilities of the obstacles, from static obstacles through dynamic obstacles all the way to dynamic obstacles in the presence of sensor, actuator, and location uncertainty. Since we accompanied each of those models with a correctness proof in KeYmaera, we couldn’t help but notice under which circumstances the system is safe and when it is not. In our models, for example, it was a simple change to go from “obstacles are stuck” to “obstacles can move wherever they want, nondeterministically”. Since the proof won’t work then, though, you immediately realize that you first need to know some bound on the obstacle’s velocity. Dynamic obstacles coming at you faster than the speed of light are simply hard to defend against.
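
The role of the velocity bound can be made concrete with a back-of-the-envelope separation condition (an illustration in the spirit of those models, not the actual verified invariant): the robot must keep enough distance to absorb its own stopping distance plus the worst-case travel of the obstacle while the robot brakes.

```python
def safe_separation(v, b, V):
    # robot's stopping distance plus the farthest an obstacle moving
    # at up to speed V can travel during the robot's braking time v/b
    t_stop = v / b
    return v * v / (2 * b) + V * t_stop

def can_commit(d, v, b, V):
    # with static obstacles (V = 0) the bound degenerates to v^2/(2b);
    # without any bound on V, no finite separation d is ever sufficient
    return d > safe_separation(v, b, V)
```

For instance, for a robot at 10 m/s with braking power 1 m/s², 100 m of clearance suffices against static obstacles (stopping distance 50 m) but not against obstacles that may move at up to 6 m/s (50 m plus 6 m/s times 10 s of braking is 110 m), and as V grows without bound, no clearance ever suffices.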

Later, during a dry run of a different controller implementation built without verification, it was interesting to see the difference in action. The unverified robot controller sometimes collided with obstacles, which is exactly what it wasn’t supposed to do. A corresponding (simulation) run of the verified system instead vetoed unsafe control actions long before the collision ever happened. The reason was prototypical. Whoever designed the unverified controller that we had access to did the natural thing: the robot first considered all obstacles static and hoped for fast enough sensing. Well, that is what our first verified model had also assumed. But our KeYmaera proofs immediately went further and investigated the case of dynamic obstacles, which is why we recommended considering the case of dynamic obstacles for the unverified controller, too.

The unverified robot still collided. This time, the reason was more subtle. In a nutshell, the implementation silently assumed that walls would not move while other agents could. That sounds reasonably justified by everyday experience. The catch is that corners do move. Well, they don’t really. But as soon as your verification model says that obstacles either move or they don’t and that it’s their choice (nondeterministically!) where they move to, then you cannot help but notice in your proof that the system is only safe if your controller correctly addresses the case where an obstacle suddenly comes racing round a corner. And indeed, when we asked, we got the confirmation that the troublemaker cases in practice were partially occluded obstacles.

Now it might be easy to blame the developers of the original robot controllers for this oversight and say that they just haven’t been paying enough attention. But that would be too easy. In fact, that would miss the point entirely. The point is that robotics and any other kind of cyber-physical systems design is just darn challenging. These are very hard problems. Programming is incredibly difficult. And programming robots only makes things more difficult, because we’ve suddenly got to face the whole physical reality around us. There are more ways to get them wrong, quite subtly, than there are ways of getting them right. We desperately need analytic support that helps us get the designs right so that we can bet our lives on them.

Having talked to a number of people in academia and industry later on, it was not apparent to most of them that some static parts of the ambient space would have to be considered as moving even if they themselves didn’t. This is a mistake that verification prevented us from making.

But guess how we found out. A month earlier, postdocs in my group tried our freshly verified controller on some examples and came back quite frustrated, because it was too conservative for their taste. It shied away from narrow doorways. I was probably baffled for a moment there myself. But then it dawned on us that our verified controller was right. If obstacles can move anywhere they please, they might as well move quickly from behind a corner where our robot could not see them yet. Notice that we never built this knowledge into our model, our controller, or anything else. We simply started with the canonical model of a robot’s motion on a plane with a moving obstacle. Then we discovered where our proofs led us in designing a verifiably safe controller. They led us to a controller that stays clear of narrow doorways. But that is what the robot simply has to do to stay provably collision-free.

We later discussed our verified controller with our collaborators in industry. They not only confirmed that our controller works within the intended operational range and wasn’t overly conservative at all, but they also simply loved the sudden new capability of playing with different instantiations of the parameters of our verified model to find out whether the design will work correctly and efficiently within their intended operating regime, without having to set up all those different scenarios and system configurations experimentally. And the verified controller automagically ends up doing the right thing. When cutting corners, it will slow down to avoid collisions. When moving with a bigger margin around a corner, it would happily speed up. We didn’t build that in. It just followed from the safety constraints of our proofs.

Still, we might have made mistakes in our model of the motion principles of the world and our assumptions about the environment. That would have given us a true, undeniable proof that just does not apply to reality, because its assumptions aren’t met. Or we might have misformalized the safety property we care about. And this is where formal validation comes in: techniques for ensuring that the models fit the relevant part of reality and that the formulas fit the intent. This is a problem on which we have just made some promising progress, and I am sure many more exciting things remain to be discovered about it.

Formal validation is just as exciting as formal verification. And that is why I am developing the Logical Foundations of Cyber-Physical Systems, because we simply need such strong analytic foundations when software reaches out into our physical world. Yet, with those foundations, there are suddenly amazing opportunities for how we can change the world for the better by designing cyber-physical systems that help us in our daily lives in more ways than we ever dreamed of. After all, true cyber-physical systems combine cyber capabilities with physical capabilities to solve problems that neither part could solve alone.

**Luke**: Thanks, André!

**Did you like this post?** You may enjoy our other Conversations posts, including: