Greg Morrisett on Secure and Reliable Systems


Greg Morrisett is the Allen B. Cutting Professor of Computer Science at Harvard University. He received his B.S. in Mathematics and Computer Science from the University of Richmond in 1989, and his Ph.D. from Carnegie Mellon in 1995. In 1996, he took a position at Cornell University, and in the 2003-04 academic year, he took a sabbatical and visited the Microsoft European Research Laboratory. In 2004, he moved to Harvard, where he has served as Associate Dean for Computer Science and Engineering, and where he currently heads the Harvard Center for Research on Computation and Society.

Morrisett has received a number of awards for his research on programming languages, type systems, and software security, including a Presidential Early Career Award for Scientists and Engineers, an IBM Faculty Fellowship, an NSF Career Award, and an Alfred P. Sloan Fellowship.

He served as Chief Editor for the Journal of Functional Programming and as an associate editor for ACM Transactions on Programming Languages and Systems and Information Processing Letters. He currently serves on the editorial board for The Journal of the ACM and as co-editor-in-chief for the Research Highlights column of Communications of the ACM. In addition, Morrisett has served on the DARPA Information Science and Technology Study (ISAT) Group, the NSF Computer and Information Science and Engineering (CISE) Advisory Council, Microsoft Research’s Technical Advisory Board, and Microsoft’s Trustworthy Computing Academic Advisory Board.

Luke Muehlhauser: One of the interesting projects in which you’re involved is SAFE, a DARPA-funded project “focused on a clean slate design for resilient and secure systems.” What is the motivation for this project, and in particular for its “clean slate” approach?


Greg Morrisett: I think that from DARPA’s perspective, the primary force is that what we’ve been doing so far to secure systems isn’t working. I suspect that some high-profile problems with successful attacks from both nation states and other organizations have made them desperate to try new and different things.

At the same time, there was a growing recognition that legacy software (and potentially hardware) was a big part of the problem. If, for instance, all of our critical software were coded in type-safe languages, then a lot of low-level problems (e.g., buffer overruns) would be gone. So I think this was the thinking behind the whole CRASH program, and there are something like 20 different teams trying a range of ideas.

Our particular project (SAFE) took a very radical approach in wanting to re-think everything: the languages, the operating system, and even the hardware, with an emphasis on formal methods for trying to gain some assurance. The team has designed some very interesting hardware that (a) effectively enforces type-safety at the machine level (i.e., object-level integrity, pointers as capabilities, etc.), and (b) provides novel support for doing fine-grained, end-to-end information flow tracking. We’ve also designed a high-level, dynamically-typed language (Breeze) that has pervasive information flow tracking built in. This means that we can tell which principal actors may have influenced the construction of any value in the system. We’re now building compiler support for Breeze and systems-level software (e.g., scheduler, garbage collector, etc.) to bridge the high-level code and the hardware.

At the same time, we’ve been applying formal models and reasoning to prove key properties about models of the system. For example, we’ve had machine-checked proofs of security for the (core of the) language from the very beginning, and are currently building core models of the hardware and trying to establish similar properties.


Luke: Could you explain, preferably with examples, what you mean by “type-safety at the machine level” and “fine-grained end-to-end information flow tracking”?


Greg: So let’s first start with “type-safety”. In this context, I simply mean that we make a distinction between different values in memory, and the machine will only allow appropriate operations to be applied to appropriate values. For instance, we make a distinction between pointers to objects (of a given size) and integers, and the machine will only let you load/store relative to a pointer (and within the bounds of the object to which it points). It will trap if you try to load/store relative to an integer. Similarly, we make a distinction between references to code and data, so you’re only allowed to jump to or call references to code, and not data. These basic integrity properties block a whole range of basic attacks that rely upon violating abstractions (e.g., code injection, buffer overruns, etc.).
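
Here’s a deliberately tiny sketch (in Python, and purely for illustration — all the names are mine, and the real SAFE hardware is far richer and enforces this in silicon rather than in software) of what it means for the machine itself to check tags on loads, stores, and calls:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Word:
    tag: str       # "int", "ptr", or "code"
    value: object  # payload: a number, a (base, size) pair, or a code label

class MachineTrap(Exception):
    pass

class ToyMachine:
    def __init__(self):
        self.heap = {}  # base address -> list of Words (one allocated object)

    def alloc(self, size):
        base = len(self.heap)
        self.heap[base] = [Word("int", 0)] * size
        return Word("ptr", (base, size))

    def load(self, w, offset):
        # Loads are only legal through a pointer, and only within bounds.
        if w.tag != "ptr":
            raise MachineTrap("load through a non-pointer")
        base, size = w.value
        if not 0 <= offset < size:
            raise MachineTrap("out-of-bounds load")
        return self.heap[base][offset]

    def store(self, w, offset, v):
        # Stores follow the same discipline as loads.
        if w.tag != "ptr":
            raise MachineTrap("store through a non-pointer")
        base, size = w.value
        if not 0 <= offset < size:
            raise MachineTrap("out-of-bounds store")
        self.heap[base][offset] = v

    def call(self, w):
        # Control transfers are only legal to code references, never to data.
        if w.tag != "code":
            raise MachineTrap("jump/call to a non-code value")
        return f"executing {w.value}"

m = ToyMachine()
p = m.alloc(4)
m.store(p, 2, Word("int", 42))
print(m.load(p, 2))                # Word(tag='int', value=42)
try:
    m.load(Word("int", 12345), 0)  # forging a "pointer" from an integer traps
except MachineTrap as e:
    print("trap:", e)
```

The point is only that “pointer-ness” and “code-ness” are properties the machine checks on every operation, not conventions the compiler hopes everyone follows.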

With respect to “information flow control” (IFC): we want to enforce policies like “unauthorized principal actors, say a web site with whom I’m communicating, should never learn my private information, say my users’ social security number or password”. The hard part is enforcing “learn no information” as opposed to merely preventing the data from being passed along literally. If, for instance, I reverse your password and send it out, or take every other letter from your password, we want those to count as violations just as much as explicitly copying the data. (There are much more subtle ways to communicate information as well.)

Formally, the policy we aim for is that the observations of the “unauthorized principal actors” should be the same if I change the password to an arbitrary string. The technical term for this is “non-interference”.
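
To put that a bit more formally (the notation here is just for illustration, not anything from the SAFE project): if we write run(P, pub, s) for a run of program P on public inputs pub and secret s, and obs(·) for everything an unauthorized principal can observe, then non-interference says

$$\forall s_1, s_2.\;\; \mathit{obs}\bigl(\mathit{run}(P, \mathit{pub}, s_1)\bigr) \;=\; \mathit{obs}\bigl(\mathit{run}(P, \mathit{pub}, s_2)\bigr)$$

In other words, swapping the password for any other string leaves the attacker’s observations unchanged.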

The way we implement IFC is by labeling every piece of data with an information flow control policy. When you combine two pieces of data (say add two numbers), we have rules for computing a new policy to place on the newly generated value. For example, if you add a “secret” and a “public” number, then the result is conservatively classified as “secret”.
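
Here’s a toy sketch of that combining rule (again just illustrative Python, not Breeze; the labels in SAFE/Breeze track sets of principals and form a much richer lattice than the two-point one here):

```python
from dataclasses import dataclass

# A two-point label lattice: PUBLIC sits below SECRET.
PUBLIC, SECRET = 0, 1

def join(l1, l2):
    # The label on a derived value is the least upper bound of its inputs'
    # labels, so combining secret data with public data yields secret data.
    return max(l1, l2)

@dataclass(frozen=True)
class Labeled:
    value: int
    label: int

def add(x: Labeled, y: Labeled) -> Labeled:
    # Every operation propagates labels alongside the computed value.
    return Labeled(x.value + y.value, join(x.label, y.label))

salary = Labeled(100_000, SECRET)
bonus = Labeled(5_000, PUBLIC)
print(add(salary, bonus))  # Labeled(value=105000, label=1), i.e., SECRET
```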


Luke: In a blog post, I warned readers that no system is “provably secure” or “provably safe” in the sense of a 100% guarantee of security or safety, because e.g. a security/safety definition might be subtly wrong, or a particular security reduction might have an error in it. These things have happened before. I said that “Rather, these approaches are meant to provide more confidence than we could otherwise have, all else equal, that a given system is secure [or safe].”

Do you agree with that characterization? And, on the terminological issue: do you think terms like “provably secure” and “provably safe” are too misleading, or are they just another inevitable example of specialist terminology that is often, and perhaps unavoidably, interpreted incorrectly by non-specialists?


Greg: Yes, I completely agree with your blog post! All “proofs of correctness” for a system will be subject to modeling the environment, and specifying what we mean by correct. For real systems, modeling will never be sufficient, and formalizing the notion of correctness may be as complicated as reimplementing the system.

I do think that building explicit models of the environment, writing formal specifications, and constructing formal proofs have a lot of value, though. As you noted, this really does help to increase confidence. And it helps identify assumptions. Many times, an attacker will look at your assumptions and try to violate them to gain purchase. So identifying assumptions through modeling is a great exercise.

Going through the process of constructing a formal proof also leads to a lot of good architectural insight. For instance, when we were trying to construct a proof of correctness for Google’s Native Client checker, we realized it would be too hard with the tools we had at hand. So we came up with a simpler algorithm and architecture which was easier to prove correct. It also turned out to be faster! This is detailed in a technical paper: RockSalt: better, faster, stronger SFI for the x86.

Finally, all of this is a great push-back on creeping complexity in design. I think a lot of problems arise because software is so soft, and we write a lot of code without thinking deeply about what it should (not) be doing.

Back to your question as to whether we should use the term “provably secure”: probably not. I would like to use the term “certified code” because we are going through a very rigorous certification process. But unfortunately, the term “certified” has come to mean that we’ve followed some bureaucratic set of development processes that don’t actually provide much in the way of security or reliability. So I’m open to suggestions for better terms.


Luke: Harper (2000) quotes an old message by Ken Frith from the Safety Critical Mailing List:

The thought of having to apply formal proofs to intelligent systems leaves me cold. How do you provide satisfactory assurance for something that has the ability to change itself during a continuous learning process?

How do you think about the opportunities and limitations for applying formal methods to increasingly intelligent, autonomous systems?


Greg: I’ll be honest and say that I haven’t given it much thought. Perhaps the closest thing that I’ve worked on is trying to formalize information-theoretic security for cryptographic schemes and construct formal proofs in that setting. This is an area where I think the crypto/theory community has provided enough structure that we can hope to make progress, but even here, we have all sorts of shaky assumptions at the foundations.

I suspect that for intelligent and autonomous systems work (especially in the cyber-physical realm), the first real challenge is constructing appropriate models that are over-approximations of potential failure modes, and finding something that is tractable to work with, similar to the way that crypto tries to model adversaries.


Luke: In general, for systems that need to meet a high standard of security, to what degree do you think we can take existing systems that were mostly designed to be effective, and “bolt security onto them,” versus needing to build things from the ground up to be secure? Perhaps some specific examples inform your view?


Greg: I think re-architecting and re-coding things will almost always lead to a win in terms of security, when compared to bolt-on approaches. A good example is qmail, which was designed as a secure alternative to sendmail and, in comparison, had a good architecture that made it much easier to get most things right, including and especially configuration. As far as I know, there’s only been one bug in qmail (an integer overflow) that could potentially lead to a security problem, whereas there are many, many issues with sendmail (I’ve lost count).

But, in the end, security is at best a secondary consideration for almost all systems. And as much as I wish we could throw away all of the code and take the time to re-architect and re-code lots of things (especially using better language technology), it’s too expensive. For instance, Windows XP was over 50 million lines of code (I don’t know how big Win7 or Win8 are), so re-coding Windows just isn’t a realistic possibility.

What we need instead are tools and techniques that people can buy into now, and that help block problems in legacy systems. Ideally, these same tools should give you a “pay-as-you-go” kind of property: a little bit of effort buys you some assurance, and more effort buys you a lot more.

A good example is the kind of static analysis tools that we’re starting to see for legacy C/C++ code, such as Coverity, HP/Fortify, or Microsoft’s Prefast/Prefix/SLAM/etc. These are tools that automatically identify potential errors which in turn could lead to vulnerabilities. Today, if you run these tools on a random pile of C/C++ code, you’ll find that they give lots of false positives. But if you work to minimize those warnings, then you’ll find that the tools become much more effective. And, for new code, they work quite well as long as you pay attention to the warnings and work to get rid of them.


Luke: Thanks, Greg!