Dr. Suresh Jagannathan joined DARPA in September 2013. His research interests include programming languages, compilers, program verification, and concurrent and distributed systems.
Prior to joining DARPA, Dr. Jagannathan was a professor of computer science at Purdue University. He has also served as visiting faculty at Cambridge University, where he spent a sabbatical year in 2010; and as a senior research scientist at the NEC Research Institute in Princeton, N.J.
Dr. Jagannathan has published more than 125 peer-reviewed conference and journal publications and has co-authored one textbook. He holds three patents. He serves on numerous program and steering committees, and is on the editorial boards of several journals.
Dr. Jagannathan holds Doctor of Philosophy and Master of Science degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. He earned a Bachelor of Science degree in Computer Science from the State University of New York, Stony Brook.
Luke Muehlhauser: From your perspective, what are some of the most interesting or important developments in higher-order verification in the past decade?
Suresh Jagannathan: I would classify the developments of the past decade into four broad categories:
- The development of higher-order recursion schemes (or higher-order model checking). This approach is based on defining a language semantics in terms of a (recursive) tree grammar. We can now ask whether the tree generated by the grammar satisfies a particular safety property. Because recursion schemes are very general and expressive structures, they can be viewed as natural extensions of model checkers based on pushdown automata or finite-state systems. In the past decade, there have been substantial advances that have made higher-order model checking a practical endeavor, accompanied by realistic implementations.
- Liquid Types combine facets of classic type-inference techniques found in functional languages with the predicate-abstraction techniques used in model checkers to automatically infer dependent type refinements with sufficient precision to prove useful safety properties of higher-order functional programs. Implementations of this approach for OCaml, SML, and Haskell have demonstrated that it is indeed possible to verify non-trivial program properties without a significant type-annotation burden.
- There have been substantial advances in the development of new languages built around rich dependent type systems that enable the expression and static verification of strong safety and security specifications for higher-order programs. These include the languages underlying mechanized proof assistants such as Coq, Agda, and Epigram; languages like F* geared towards secure distributed programming; libraries like Ynot that encapsulate imperative (stateful) features and Hoare-logic pre/post-conditions within a higher-order dependent type framework; and features such as GADTs and type classes found in Haskell as implemented by GHC.
- Polyvariant control-flow analyses like CFA2, and higher-order contracts, are two techniques that facilitate verification and analysis of dynamic higher-order languages. CFA2 adapts the pushdown models used in program analyses for first-order languages to a higher-order setting, enabling precise matching of call/return sequences. Higher-order contracts allow runtime verification and blame assignment for higher-order programs and are fully incorporated into Racket, a multi-paradigm dialect of Lisp/Scheme.
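To make the recursion-schemes idea from the first point concrete, here is a toy sketch (not a real higher-order model checker, and the event labels are my own invention): a scheme is a set of recursive definitions generating an infinite tree of events, and model checking asks whether that tree satisfies a property. Here we can only approximate by inspecting a finite prefix of the lazily generated tree.

```haskell
-- Toy sketch of a recursion scheme: recursive definitions that
-- generate an infinite tree of events. A higher-order model checker
-- decides properties of the whole infinite tree; here we only
-- inspect a finite prefix. The labels ("open"/"read"/"close") are
-- hypothetical.
data Tree = Node String [Tree]

-- S -> open (F close): open a resource, then run F with a
-- continuation that closes it.
s :: Tree
s = Node "open" [f (Node "close" [])]

-- F k -> read (F k): reads forever; the continuation k (and hence
-- "close") is never reached -- exactly the kind of resource-usage
-- bug a checker could flag.
f :: Tree -> Tree
f k = Node "read" [f k]

-- Depth-bounded listing of labels, so the infinite tree can be
-- examined lazily.
prefixLabels :: Int -> Tree -> [String]
prefixLabels 0 _           = []
prefixLabels n (Node l cs) = l : concatMap (prefixLabels (n - 1)) cs
```

For example, `prefixLabels 3 s` yields `["open","read","read"]`: every trace begins with "open" (a safety property holds on this prefix), while "close" never appears.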
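A flavor of the Liquid Types annotation style mentioned in the second point, in the notation LiquidHaskell uses: the `{-@ ... @-}` refinements below are a sketch of what the checker verifies (or infers); plain GHC treats them as comments, so the code runs unchanged. The function names are illustrative, not from any particular paper.

```haskell
-- LiquidHaskell-style refinement sketch. The annotation says the
-- divisor must be non-zero; a liquid type checker verifies that all
-- callers respect this, and can often infer such refinements with
-- little or no annotation. Plain GHC simply ignores the comment.
{-@ safeDiv :: Int -> {v:Int | v /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv x y = x `div` y

-- Given a non-empty input list, the division below cannot fail: the
-- checker would discharge the obligation (len xs > 0 => len xs /= 0).
{-@ average :: {xs:[Int] | len xs > 0} -> Int @-}
average :: [Int] -> Int
average xs = safeDiv (sum xs) (length xs)
```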
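The GADTs mentioned in the third point let ordinary GHC Haskell types carry verification-relevant indices. A standard small example (length-indexed vectors; not taken from the interview) shows the type checker statically ruling out taking the head of an empty vector:

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Type-level naturals, used as length indices.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total: the index 'S n rules out the empty vector at compile time,
-- so no runtime check (and no possible crash) is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```

Attempting `vhead VNil` is rejected by GHC before the program ever runs, which is the sense in which rich types turn safety properties into static guarantees.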
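Finally, the higher-order contracts of the last point, in the Findler–Felleisen style used by Racket, can be sketched in a few lines of Haskell (the names and blame-message format here are my own; real systems track blame far more richly). The key idea: a function contract checks the argument, blaming the caller, and checks the result, blaming the function itself, with checking deferred until application.

```haskell
-- Minimal higher-order contract sketch (hypothetical names).
-- A contract takes a blame label and either returns the value
-- unchanged or aborts, blaming that label.
type Contract a = String -> a -> a

-- A flat contract checks a predicate immediately.
flat :: Show a => (a -> Bool) -> Contract a
flat p who x
  | p x       = x
  | otherwise = error ("contract violation on " ++ show x ++ ", blame " ++ who)

-- A function contract: the precondition blames the caller (negative
-- position); the postcondition blames the function itself (positive
-- position). Nothing is checked until the wrapped function is applied.
func :: Contract a -> Contract b -> Contract (a -> b)
func pre post who g =
  \x -> post who (g (pre ("caller of " ++ who) x))
```

For instance, `func (flat (>= 0)) (flat (>= 0)) "sqrt" sqrt` wraps `sqrt` so that a negative argument aborts with blame on the caller, while a bad result would blame "sqrt" itself.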
Luke: What are some of the “realistic implementations” of higher-order model checking that you refer to?
Suresh: The most scalable implementation to date is the tool Preface (Ramsay et al., POPL’14), which is capable of handling recursion schemes of several thousand rules (i.e., function definitions). Earlier work such as TRecS (Kobayashi, POPL’09) has good performance for recursion schemes of up to hundreds of rules.
Luke: What are your recommended readings on recent interesting work on liquid types and on polyvariant control-flow analyses?
Suresh:

Liquid Types

- Vazou, Rondon, Jhala, “Abstract Refinement Types”, European Symposium on Programming, 2013, pp. 209-228.
- Rondon, Kawaguchi, Jhala, “Low-Level Liquid Types”, ACM Symposium on Principles of Programming Languages (POPL), 2010, pp. 131-144.
- Rondon, Kawaguchi, Jhala, “Liquid Types”, ACM Conference on Programming Language Design and Implementation, 2008, pp. 159-169.
- Kawaguchi, Rondon, Bakst, Jhala, “Deterministic Parallelism via Liquid Effects”, ACM Conference on Programming Language Design and Implementation, 2012, pp. 45-54.
- Zhu, Jagannathan, “Compositional and Lightweight Dependent Type Inference for ML”, Conference on Verification, Model-Checking and Abstract Interpretation, 2013, pp. 295-314.
Polyvariant Control-Flow Analysis
- Midtgaard, “Control-Flow Analysis of Functional Programs”, ACM Computing Surveys, 2012, 44(3).
- Earl, Sergey, Might, Van Horn, “Introspective Pushdown Analysis of Higher-Order Programs”, International Conference on Functional Programming, 2012, pp. 177-188.
- Might, Smaragdakis, Van Horn, “Resolving and Exploiting the k-CFA Paradox: Illuminating Functional vs. Object-Oriented Program Analysis”, ACM Conference on Programming Language Design and Implementation, 2010, pp. 305-315.
- Vardoulakis, Shivers, “CFA2: A Context-Free Approach to Control-Flow Analysis”, Logical Methods in Computer Science, 2011, 7(2).
Luke: What are some key developments and solutions you hope to see in the next 5-10 years of research into higher-order verification techniques? Which breakthroughs could be most important, if they could be achieved, given our current understanding?
Suresh: Currently, much of the work on higher-order verification centers on foundations (the definition of expressive models for describing salient properties that arise in HO programs, as with current work on higher-order recursion schemes, dependent type systems, and control-flow analysis) or on pragmatics (reducing the type-annotation burden, as in Liquid Types). Going forward, I would expect to see a convergence of these techniques that enables applying these new, expressive foundational models at scale.
One important breakthrough would be the integration of these systems into realistic compilers, going beyond simple safety and property checking to inform new kinds of verifiable optimizations and program transformations. One can envision leveraging the rich specifications that HO verification techniques support as a way of realizing tangible code improvements and performance gains in verified compilers and runtime systems. Another important advance would be a deeper understanding of the potential synergistic interplay between higher-order verification techniques and other kinds of specification systems (e.g., session types). The issues here concern how we would express rich protocols that capture not only safety requirements, but liveness and other kinds of sophisticated modalities.
Luke: Thanks, Suresh!