Wolf Kohn on hybrid systems control


Dr. Wolf Kohn is the Chief Scientist at Atigeo, LLC, and a Research Professor in Industrial and Systems Engineering at the University of Washington. He is the founder or co-founder of two successful start-up companies, Clearsight Systems, Corp., and Kohn-Nerode, Inc. Both companies explore applications in the areas of advanced optimal control, rule-based optimization, and quantum hybrid control applied to enterprise problems and nano-material shaping control. Prof. Kohn, with Prof. Nerode of Cornell, established theories and algorithms that initiated the field of hybrid systems. Prof. Kohn holds a Ph.D. in Electrical Engineering and Computer Science from MIT, earned at the Laboratory for Information and Decision Systems. Dr. Kohn is the author or coauthor of over 100 refereed papers and 6 book chapters, and with Nerode and Zabinsky has written a book on distributed cooperative inferencing. Dr. Kohn holds 10 US and international patents.

Luke Muehlhauser: You co-founded the field of hybrid systems control with Anil Nerode. Anil gave his impressions of the seminal 1990 Pacifica meeting here. What were your own impressions of how that meeting developed? Is there anything in particular you’d like to add to Anil’s account?


Wolf Kohn: The discussion on the first day of the conference centered on the problem of how to incorporate heterogeneous descriptions of complex dynamical systems into a common representation for designing large-scale automation. Almost immediately, Colonel Mettala and others observed that the goal should be to find alternatives to the classic approaches based on combining expert systems with conventional control and system identification techniques.

These approaches did not lead to robust designs. More important, they did not lead to a theory for the systematic treatment of the systems DOD was deploying at the time. I was working on control architectures based on constraints defined by rules, so after intense discussions among the participants, Nerode and I moved to a corner and came up with a proposal to amalgamate models by extending the concepts of automata theory and optimal control: the evolution of complex dynamical systems would be characterized on a manifold whose topology was defined by rules of operation, with behavior constraints and trajectories generated by variational methods. This was the beginning of what would later be defined as “hybrid systems.”


Luke: Which commercial or governmental projects would you name as being among the most significant success stories of the hybrid systems research program, from 1990 to the present day?


Wolf: There are many applications today that use hybrid systems as the basic technology. These are a few of the ones I am personally familiar with:

  • A demand forecaster and an inventory control and management system being deployed by the Microsoft Dynamics group.
  • A battlefield simulator deployed by the Army’s Picatinny Arsenal.
  • A generic people and resource scheduling system deployed by Clearsight Systems.
  • A cooperative distributed inference system deployed by Atigeo, with applications in medical informatics and smart electric power network management systems.
  • A quantum hybrid control system for capturing and storing sunlight being prototyped by Kohn-Nerode LLC.

Luke: What are the new theoretical developments in hybrid systems of the past 10 years that are most impressive or interesting to you? What kinds of advances do you think we might see in the next 10 years?


Wolf: For me the most important advances in hybrid systems are in four areas:

  1. Representation: We have found that the behavior of dynamical systems characterized by multiple heterogeneous models can be effectively characterized by Hamiltonian functions. The interaction among multiple models and the transfer of information from one model to another are defined by the interaction of Hamiltonian forms (a rough schematic of this idea appears after this list). This makes hybrid systems a preferred theory and implementation technology for developing new control approaches such as mean-field agent-based distributed control and gauge theory, and, most importantly, control and optimization specified and implemented by rules.
  2. Hybrid systems control design: This allows architectural structures and control requirements to be specified as part of the formulation of a control problem, fusing the physical and empirical data about the dynamic process to be controlled. The control specification and the computational requirements can be met because control performance specifications and architectural constraints are themselves models of the controlled process.
  3. Agent-based distributed control: Nerode and I, both together and separately, have developed a variational theory, based on hybrid systems, for dynamic synchronization of multiple agents participating in the control of a distributed process. The variational theory generalizes a principle in network theory called Tellegen’s Theorem. The theory provides active synchronization, with no central umpire, for network interaction among the agents. A version of this theory has been used to implement an agent-based architecture for control, uncertainty management, and learning in several applications. This architecture is known by the acronym MAHCA (Multiple Agent Hybrid Control Architecture).
  4. Metacontrol: This is a theory for controlling the performance and behavior of implemented algorithms. The dynamics in this case is a computational multitask process. The objective is to make it run faster, with less memory, and with active synchronization between tasks. We are developing a computational hybrid systems theory based on metacontrol. Preliminary implementations of this theory on optimization algorithms have shown very promising results in terms of reduced compute time, real-time synchronization, and rule-based optimization.
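
To give a rough picture of item 1 in standard hybrid optimal control notation (this is generic bookkeeping for illustration only, not our specific formulation): each model, or discrete mode, $q$ carries its own Hamiltonian $H_q(x, p)$ in the continuous state $x$ and costate $p$; the active mode determines the flow through Hamilton’s equations, and the rules of operation determine when and how the mode switches:

$$
\dot{x} = \frac{\partial H_q}{\partial p}, \qquad
\dot{p} = -\frac{\partial H_q}{\partial x}, \qquad
q^{+} = \rho(q, x) \ \text{when a rule fires},
$$

where $\rho$ is a placeholder name for the rule-defined transition map. In this picture, “transferring information from one model to another” amounts to matching the boundary conditions on $x$ and $p$ at each switch.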

Luke: I’m particularly interested in the safety challenges presented by the increasingly autonomous AI systems of the future. Self-driving cars are on their way, the U.S. military is working toward autonomous battlefield robots of various types, etc. Do you think hybrid systems control, and relatively modest extensions to it, will be sufficient to gain high assurances of safety for the more capable autonomous systems we’re likely to have in 10 or 20 years? Or do you think other contemporary control and verification approaches have a better shot at addressing that problem, or that entirely new approaches will need to be developed?


Wolf: I will break down my answer into two parts: (1) the autonomy issue and (2) the verification issue.

Autonomy: This was one of the central questions brought up by the initial sponsors of hybrid systems (ARPA, NIST, ARO, SAP). Our answer early on came in the form of the following proposition, which we implemented in a battlefield dynamics simulator: given a process to be controlled, build a model of desired behavior based on the performance specifications, regulations, and economic operation, and construct the controlled process dynamics. This is a hybrid system; let’s call it S.

Then we build a model, another hybrid system, say C, representing specified safety rules and constraints, and hybridize S and C to produce a new hybrid control system, S1. This approach is only successful in semi-closed, quasi-stationary systems.
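
As a purely schematic illustration of this hybridization step, suppose, as a strong simplification, that S and C are finite transition systems over a shared event alphabet; the names and data structures below are illustrative only, not the actual construction:

```python
def hybridize(S, C):
    """Safety-restricted product of a plant model S and a constraint model C.

    Both are finite transition systems over a shared event alphabet:
      S = {'init': s0, 'events': {...}, 'trans': {(state, event): next_state}}
      C = {'init': c0, 'trans': {(state, event): next_state}, 'bad': {states}}

    C encodes the safety rules: events that C does not accept are blocked, and
    joint states whose C-component is 'bad' are pruned.  This finite sketch
    stands in for the continuous/discrete hybridization discussed above.
    """
    init = (S['init'], C['init'])
    states, trans, frontier = {init}, {}, [init]
    while frontier:
        s, c = frontier.pop()
        for e in S['events']:
            if (s, e) in S['trans'] and (c, e) in C['trans']:
                s2, c2 = S['trans'][(s, e)], C['trans'][(c, e)]
                if c2 in C['bad']:
                    continue            # the rules forbid this behavior: prune it
                trans[((s, c), e)] = (s2, c2)
                if (s2, c2) not in states:
                    states.add((s2, c2))
                    frontier.append((s2, c2))
    return {'init': init, 'states': states, 'trans': trans}
```

In the actual setting the components carry continuous dynamics and the composition is obtained variationally, but the basic idea of pruning rule-violating behavior is the same.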

For large-scale autonomous systems, one needs to allow the hybrid system controller to detect, learn from, and dispose of unprogrammed situations. To do this, a class of hybrid controllers (known as agents in our papers) contains directives that implement Learning from Sensory Data. One example is what we might call a “Learning by Failure Predictor and Repair” system. It operates as follows: one or more safety agents monitor the controlled system’s operation and infer whether it is operating in a feasible region and how likely it is to leave that feasibility region (a Failure) in the near future (“near” is a concept that depends on the controlled system). This likelihood determines the response to the failure: a Repair operation.

Then another agent application designs a controller that implements the Repair operation. Note that this is not necessarily an adaptation of an existing controller. We use the fact that the design procedure for hybrid controllers is itself a hybrid system defining the synthesis approach. We term the resulting design procedure repair hybrid control. We have used this approach, with considerable success, in a prototype microgrid management and control system that is near deployment: it achieves high robustness, resiliency, and recovery while maintaining good performance.
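
The monitor-and-repair loop just described might be sketched as follows; every callback, name, and threshold here is an illustrative placeholder, not the MAHCA implementation:

```python
def failure_predict_and_repair(read_state, feasible, failure_likelihood,
                               design_repair, apply_controller,
                               horizon=10.0, threshold=0.2):
    """Schematic 'Learning by Failure Predictor and Repair' loop.

    Illustrative placeholders (the interview does not specify their form):
      read_state()             -> current state estimate from sensors
      feasible(x)              -> True if x lies inside the feasibility region
      failure_likelihood(x, h) -> estimated probability of leaving the region
                                  within horizon h
      design_repair(x, p)      -> synthesizes a repair controller for the
                                  predicted (or actual) failure
      apply_controller(ctrl)   -> installs the repair controller
    """
    while True:                # runs at the safety agent's sampling rate
        x = read_state()
        if not feasible(x):
            # Already outside the feasibility region: an actual Failure.
            apply_controller(design_repair(x, 1.0))
            continue
        p = failure_likelihood(x, horizon)
        if p > threshold:
            # Predicted Failure: trigger the Repair design pre-emptively.
            apply_controller(design_repair(x, p))
```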

Another element that agent-based hybrid systems provide is the ability to improve safety via redundancy.

In short, my answer to the first part of the question may be summarized as follows:

Hybrid systems are a first-principles platform that allows for the incorporation of safety, learning, and repair. What I believe research in this area should focus on is how to populate the system with information about the application, how to allow for heuristics and empirical constraints and rules, and how to provide the structural Failure and Repair mechanisms I outlined above. This approach builds on top of existing control techniques but incorporates a new concept of structural adaptation that is essential for the level of autonomy posed in your question.

Verification: Many researchers have proposed methods for hybrid systems validation and verification prior to deployment. In our approach we are happy to use some of these techniques. Our contribution in this area calls for verification of online designs, as discussed above. So we are developing hybrid systems that model verification principles, with the idea of amalgamating them into our applications in near real time.
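
For the online flavor of verification mentioned here, the simplest concrete picture is a runtime check of the controller’s proposed action against formally stated safety properties before it is executed; the function below is a sketch under that assumption, not a description of any deployed system:

```python
def verified_step(controller, plant_model, invariants, state):
    """Illustrative online verification of a single control step.

    controller(state)     -> proposed control action
    plant_model(state, u) -> predicted next state under action u
    invariants            -> predicates the predicted next state must satisfy
    Returns the action if every invariant holds, otherwise None so that a
    safe fallback (e.g., a repair controller) can take over instead.
    """
    u = controller(state)
    predicted = plant_model(state, u)
    if all(inv(predicted) for inv in invariants):
        return u
    return None   # verification failed: defer to the fallback
```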


Luke: Do you think our capacity to make systems more autonomous and capable will outpace our capacity to achieve confident safety assurances for those systems?


Wolf: I believe we have made great progress on increasing autonomy in most of the systems we are designing today. I also believe we have paid far less attention to developing the methods, sensory redundancy principles, and theory of design for safety performance.

Nerode and I are working on developing an advanced magnetic battery using a quantum hybrid control methodology. We found that the key ingredients for this battery to operate safely are safety principles that have been used in non-autonomous systems for the last 100 years. We are encoding these principles formally as part of our design algorithms. Perhaps this approach can be generalized to obtain acceptable levels of safety.


Luke: Thanks, Wolf!