Jonathan Millen on covert channel communication


Jonathan Millen started work at the MITRE Corporation in 1969, after graduation from Rensselaer Polytechnic Institute with a Ph.D. in Mathematics. He retired from MITRE in 2012 as a Senior Principal in the Information Security Division. From 1997 to 2004 he enjoyed an interlude as a Senior Computer Scientist in the SRI International Computer Science Laboratory. He has given short courses at RPI Hartford, University of Bologna Summer School, ETH Zurich, and Taiwan University of Science and Technology. He organized the IEEE Computer Security Foundations Symposium (initially a workshop) in 1988, and co-founded (with S. Jajodia) the Journal of Computer Security in 1992. He has held positions as General and Program Chair of the IEEE Security and Privacy Symposium, Chair of the IEEE Computer Society Technical Committee on Security and Privacy, and associate editor of the ACM Transactions on Information and System Security.

The theme of his computer security interests is the verification of formal specifications of security kernels and cryptographic protocols. At MITRE he supported the DoD Trusted Product Evaluation Program, and later worked on the application of Trusted Platform Modules. He wrote several papers on information flow as applied to covert channel detection and measurement. His 2001 paper (with V. Shmatikov) on the Constraint Solver for protocol analysis received the SIGSAC Test of Time award in 2011. He received the ACM SIGSAC Outstanding Innovation award in 2009.

Luke Muehlhauser: Since you were a relatively early researcher in the field of covert channel communication, I’d like to ask you about the field’s early days, which are usually said to have begun with Lampson (1973). Do you know when the first covert channel attack was uncovered “in the wild”? My impression is that Lampson identified the general problem a couple decades before it was noticed being exploited in the wild; is that right?


Jonathan Millen: We might never know when real covert channel attacks were first noticed, or when they first occurred. When information is stolen by covert channel, the original data is still in place, so the theft can go unnoticed. Even if an attack is discovered, the victims are as reluctant as the perpetrators to acknowledge it. This is certainly the case with classified information, since a known attack is often classified higher than the information it compromises. The only evidence I have of real attacks before 1999 is from Robert Morris (senior), a pioneer in UNIX security, and for a while the Chief Scientist of the National Computer Security Center, which was organizationally within NSA. He stated at a security workshop that there had been real attacks. He wouldn’t say anything more; it was probably difficult enough to get clearance for that much.


Luke: From your own perspective, what are some of the most interesting developments in covert channel communication since you wrote “20 Years of Covert Channel Modeling and Analysis” in 1999?


Jonathan: That’s a tough question, for several reasons. Like many research areas that have been around for a long time, covert channel research has splintered into several specialties. Some of the subtopics are noninterference models, language-based information flow control via type theory, steganography, and so on.

While specialization is necessary for deep progress, I wish there were more attention given to the pragmatic “so what” questions about covert channels. For example, information flow security is often defined in terms of noninterference, which can, unfortunately, be defined in several subtly different ways. All of them rely on some form of behavioral equivalence, a strong condition guaranteeing that an unauthorized observer cannot tell when a more privileged user is present. Even one bit, or a fraction of a bit, of information about the protected party’s data or actions is considered too much. Yet, what we really want to know is whether a recurrent channel can be set up, yielding a large or unbounded amount of information. Furthermore, information in the Shannon sense may or may not be useful. One paper that attracted my attention is “Quantifying information flow with beliefs”, by Clarkson, Myers, and Schneider, 2009, because it dealt with the accuracy of information.
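
(As a concrete aside, not drawn from the paper cited above: the “fraction of a bit” point can be illustrated with a short Python sketch that computes the Shannon capacity of a noisy one-bit channel, using the standard binary symmetric channel formula C = 1 - H(p). The error rates and the 1000-use figure below are arbitrary, chosen only for illustration.)

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy, in bits, of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(error_rate: float) -> float:
    """Capacity (bits per use) of a binary symmetric channel: C = 1 - H(p).

    Here it stands in for a noisy one-bit covert channel whose observer
    misreads each transmitted bit with probability `error_rate`.
    """
    return 1.0 - binary_entropy(error_rate)

# Even a very noisy channel leaks a fraction of a bit per use, and a
# recurrent channel accumulates leakage with every use.
for err in (0.0, 0.1, 0.25, 0.45):
    cap = bsc_capacity(err)
    print(f"error rate {err:.2f}: {cap:.3f} bits/use, "
          f"about {cap * 1000:.0f} bits after 1000 uses")
```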

Another pragmatic issue is the trustworthiness of the system functionality. In general, I am distrustful of security approaches that depend on access control software or the design of the language in which it is written. Even the firmware and hardware below the operating system kernel are suspect. In the last few years, my main research interest was in trusted computing, as supported by trusted platform modules, with their capability — available but largely unused in modern platforms — to cryptographically check the integrity of system software by a bootstrap sequence starting with the tamper-resistant TPM. And still, having done that, we know only that the system software is genuine, not that it is correct or free from covert channels. It is encouraging that work in software and hardware verification continues, but commonly used systems are still beyond its reach.
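
(For readers unfamiliar with the mechanism: the bootstrap integrity check described here can be sketched roughly as follows. This is an illustrative simplification in Python, not the actual TPM interface; each boot stage’s image is measured and folded into a register by hashing, so the final register value can be reproduced only if every stage in the chain is exactly the expected one.)

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """Fold a measurement into a register, in the spirit of a TPM
    "extend" operation: the register can only reach a given value by
    applying the same measurements in the same order."""
    return hashlib.sha256(register + measurement).digest()

def measured_boot(stage_images: list[bytes]) -> bytes:
    """Hash-chain a sequence of boot-stage images, starting from zeros."""
    register = bytes(32)  # register begins in a well-known state
    for image in stage_images:
        register = extend(register, hashlib.sha256(image).digest())
    return register

# Hypothetical boot chain: firmware -> bootloader -> kernel.
genuine = [b"firmware v1", b"bootloader v2", b"kernel v3"]
expected = measured_boot(genuine)

tampered = [b"firmware v1", b"bootloader v2 (modified)", b"kernel v3"]
print("genuine matches: ", measured_boot(genuine) == expected)   # True
print("tampered matches:", measured_boot(tampered) == expected)  # False
```

(As noted above, a matching value shows only that the measured software is the expected version, not that it is correct or free of covert channels.)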

Complexity is the enemy of security. The more you have to depend on, the less likely it is that you can understand it well enough to exclude vulnerabilities. A couple of corollaries: the more complex your security policy is, the less robust it is, because more complex software is needed for it. And the more specialized a system component is to support security, the more attractive it is as a target for an attack. I suppose that the optimistic view of this is that if we use a typically complex modern system, covert channels are the least of our worries!


Luke: From your many decades of experience in computer security, do you know of cases where someone was worried about a computer security or safety challenge that wasn’t imminent but maybe one or two decades away, and they decided to start doing research to prepare for that challenge anyway — e.g. perhaps because they expected the solution would require a decade or two of “serial” research and/or engineering work, with each piece building on the ones before it, and they wanted to be prepared to meet the challenge near when it arrived? Lampson’s early identification of the “confinement problem” looks to me like it might be one such example, but maybe I’m misreading the history there.


Jonathan: In computer security, what usually happens is that someone realizes that a vulnerability already exists, but it is not clear how long it will take for a malicious party to take advantage of it. Another factor delaying the onset of relevant research and development is that long-term efforts aimed at removing vulnerabilities are risky and their results may be inadequate or eclipsed by different problems in the future.

If the difficulty is social, such as getting standards updated, it is real, but the basic engineering knowledge is mostly available early on. For example, the TCP sequence number attack for session hijacking was pointed out by Morris in 1985 and exploited by Mitnick in 1995; it led to Bellovin’s standards-related recommendations in 1996, and only recently has the availability of encryption-based authentication and IPv6 brought a widespread solution within reach.
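
(Bellovin’s recommendation, RFC 1948, was roughly to offset each connection’s initial sequence number by a keyed hash of the connection’s endpoints, so that sequence numbers observed on one connection tell an off-path attacker nothing about another’s. The sketch below is a loose illustration of that idea: the field widths are simplified, and SHA-256 stands in for the hash, whereas RFC 1948 itself suggested MD5.)

```python
import hashlib
import time

SECRET = b"per-boot secret key"  # regenerated at every boot in a real system

def initial_sequence_number(src_ip: str, src_port: int,
                            dst_ip: str, dst_port: int) -> int:
    """RFC 1948-style ISN: a global timer plus a secret-keyed hash of the
    connection 4-tuple, truncated to TCP's 32-bit sequence space."""
    # M: a timer, in the spirit of the classic 4-microsecond ISN clock.
    m = int(time.time() * 250_000) & 0xFFFFFFFF
    # F: a secret-keyed hash of the connection identifiers.
    conn_id = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    f = int.from_bytes(hashlib.sha256(SECRET + conn_id).digest()[:4], "big")
    return (m + f) & 0xFFFFFFFF

# Different connections get unrelated offsets, so observing one
# connection's sequence numbers does not help an attacker predict another's.
print(hex(initial_sequence_number("10.0.0.1", 1025, "10.0.0.2", 80)))
print(hex(initial_sequence_number("10.0.0.9", 4444, "10.0.0.2", 80)))
```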

If the difficulty is in the engineering, such as the development of security kernels and mandatory access policies to defeat Trojan horses, or the installation and use of TPMs, there is an industry to be moved, one driven by the need for commercial viability of new products. And after all that effort, some vulnerabilities remain.

Computer scientists can help by developing powerful general-purpose tools and techniques with fundamental computer science goals — tools that are not problem-dependent. By this, I mean such things as verification tools and model checkers, as well as encryption algorithms and applications. Such tools can ease the engineering burden when new problems and design ideas emerge. Development of these tools deserves decades of research building on prior work, and those who are capable of conducting this activity successfully should be encouraged and supported.


Luke: You mention verification tools and model checkers. I was recently speaking to a significant figure in the computer security and safety community (I don’t have permission to give her name) about the common claim that formal methods can catch corner-case bugs that are unlikely to be caught by testing alone. She mentioned that if the formal methods community has ever subjected formal methods to careful experimentation to demonstrate that verification really does catch bugs that aren’t likely to be found by testing, she wasn’t aware of it. Are you aware of any such demonstrations? If not, how would you describe the value in formal methods for safety and security purposes?


Jonathan: It seems unfair to me to ask for careful experimentation to justify formal methods in comparison to testing. One reason for this, pointed out when formal methods were being recommended for security kernel verification, is that very few security kernels are actually developed, so any experimental results are not going to be statistically significant.

One experiment that was done at the very beginning of the history of formal specifications was by David Parnas, credited by some with inventing the notion of “formal nonprocedural specification.”1 In fact, many of us referred to such specifications at first as “Parnas specifications”, but he let it be known (through Peter Neumann) that he disapproved of that. Parnas’ experiment was a small classroom study designed to investigate not the efficacy of verification of formal specifications, but rather the plausible idea that specifications determine implementations. In the experiment, several groups were given the same specification to implement, and the results disproved that idea (or at least cast doubt on it), because different groups implemented their specified task in very different ways, despite the very detailed constraints on input-output behavior in the specification. (I heard the experiment described at a conference presentation by Parnas, but I can’t find the paper or the citation now.)

This result turned out to be useful to me a few years later when DoD sponsors argued that analysis of the formal specifications of security kernels should be classified higher than the programs themselves, even when the analysis proved the absence of access control violations and storage channels, because the specifications allowed readers to deduce features of the implementation that would tell them about timing and other channels. I pointed out that Parnas had given evidence that their fears were not justified.

One indicator of the value of formal methods, model checking in particular, is the fact that it has been used routinely in the design of CPUs, which have rather complicated internal strategies for managing memory and register caching. (One citation I found with a quick Google to confirm this is “Fifteen years of formal property verification in Intel”, by Limor Fix, of Intel Research Pittsburgh, published in a 2008 Springer-Verlag book, 25 Years of Model Checking.)


Luke: You also wrote that “Computer scientists can help by developing powerful general-purpose tools and techniques… [e.g.] verification tools and model checkers, as well as encryption algorithms and applications… Development of these tools deserves decades of research building on prior work, and those who are capable of conducting this activity successfully should be encouraged and supported.”

What are some specific examples of tools for software system safety or security that you wish were receiving more development talent and funding than is currently the case? E.g. probabilistic model-checkers, or verified libraries for program synthesis, or whatever you think is most urgent or underfunded.


Jonathan: My gut reaction is, all of the above. When it comes to specific tools, each of us has preferences based on what we needed, what we have used, and what is most accessible. When I was at SRI International, I used the PVS theorem prover, and later, SRI’s SAL environment, which hosts several flavors of model checking algorithms. Both of those have benefited from the highest levels of talent; the challenge is that the job is never finished, especially if it is a thriving, well-used system. Funding sources typically emphasize urgent applications or brand new ideas, as they should, but they tend to neglect the steady, unromantic work needed to keep a software suite up to date with respect to the evolution of hardware, operating systems, algorithmic advances, and so on. And tools that support formal methods do not have the same opportunities for income as Windows, Apple’s OS X, and other commercial systems, even with a strong user community.

Incidentally, although I have emphasized general-purpose tools, that does not mean that I am against tools designed for particular applications such as security. After all, I spent a fair amount of time on a protocol security analyzer (called the Constraint Solver). Specialized tools, however, benefit in brevity and readability from being built on top of an expressive language with powerful primitives — for my application, Prolog, specifically SWI-Prolog.

A word of caution, though: it’s one thing to build an analysis tool with a complex system under it, and another to build a supposedly secure system that way. A lesson learned from studying covert channels is that the whole system under the secure interface is a potential source of vulnerabilities. So how can we build secure systems? The main requirement, I think, is simple: don’t let the enemy program your computer (the computer holding your data). But how can we prevent that? That’s the hard part.


Luke: Thanks, Jonathan!


  1. Parnas (1972)