Roger R. Schell is a Professor of Engineering Practice at the University of Southern California Viterbi School of Engineering, and a member of the founding faculty for its Masters of Cyber Security degree program. He is internationally recognized for originating several key security design and evaluation techniques, and he holds patents in cryptography, authentication, and trusted workstations. For more than a decade he has been co-founder and an executive of Aesec Corporation, a start-up company providing verifiably secure platforms. Previously, Prof. Schell was the Corporate Security Architect for Novell, and co-founder and vice president of Gemini Computers, Inc., where he directed development of their highly secure (what NSA called “Class A1”) commercial product, the Gemini Multiprocessing Secure Operating System (GEMSOS). He was also the founding Deputy Director of NSA’s National Computer Security Center. He has been referred to as the “father” of the Trusted Computer System Evaluation Criteria (the “Orange Book”). Prof. Schell is a retired USAF Colonel. He received a Ph.D. in Computer Science from MIT, an M.S.E.E. from Washington State, and a B.S.E.E. from Montana State. NIST and NSA have recognized him with the National Computer System Security Award. In 2012 he was inducted into the inaugural class of the National Cyber Security Hall of Fame.
Luke Muehlhauser: You have several decades of experience in high assurance computer security, much of it in the military and government, and you discussed your experience at length in this interview. One thing I’m curious about is this: do you know of cases where someone was worried about a computer security or safety challenge that wasn’t imminent but maybe one or two decades away, and they decided to start doing research to prepare for that challenge anyway — e.g. perhaps because they expected the solution would require a decade or two of “serial” research and/or engineering work, with each piece building on the ones before it, and they wanted to be prepared to meet the challenge near when it arrived? Lampson’s early identification of the “confinement problem” — a decade or two before anyone detected such an attack in the wild, to my knowledge — looks to me like it might be one such example, but maybe I’m misreading the history there.
Roger R. Schell: First, to perhaps clarify the context of my responses, let me refine your introductory summary of my experience in high assurance computer security, which you characterize by saying “much of it in the military and government”. My responses may be better understood by recognizing that although I am currently a Professor on the faculty of the University of Southern California, I spent the previous 28 years in industry, in positions ranging from founder and executive of a few information technology startups to corporate security architect and senior development manager for one of the largest software companies. That was substantially more time than the 22 years I spent in the military before that.
That said, your question about the reading of history does take me back to my military experience. You asked about whether there was a case “where someone was worried about a computer security or safety challenge that wasn’t imminent but maybe one or two decades away, and they decided to start doing research to prepare for that challenge anyway”. From my perspective on that history the answer is a definite yes, as I noted in my 2001 paper on “Information security: science, pseudoscience, and flying pigs.” That paper referred to a major instance of that as follows:
The failure of penetrate and patch to secure ADP systems in the late sixties helped stimulate the Ware Report [in 1970], which represented a codification of the state of understanding, which primarily was a realization of how difficult the problem was. This was one of those points where understanding of concepts came together enough to allow a significant step forward.
The Ware Report [in 1970] clearly identified the problem, but left it unresolved. That led to the Anderson Panel [in 1972], which defined the reference monitor concepts and conceived a program for evaluating and developing kernels.
That same paper notes that the “confinement problem” that you cited from Butler Lampson was recognized years earlier and generally termed the “multilevel security” problem. Butler in 1973 made a significant contribution by providing a term and a bit of a taxonomy. But as noted in the above paper:
Development of early military systems concluded that some portions of the system require particularly strong security enforcement. Specifically, this enforcement was necessary to protect data whose loss would cause extremely grave damage to the nation. Systems that handled such data, and simultaneously included interfaces and users who were not authorized to access such data, came to be known as “multilevel”.
It is pretty clear that stating the need to protect such data from loss is essentially equivalent to stating the need to solve what was later, in 1973, referred to as the confinement problem. In fact, by that time the “program” defined by Anderson was well underway, and it was, as you put it, “expected the solution would require a decade or two of ‘serial’ research and/or engineering work”.
So, I agree with the substance of your conclusion that the confinement problem is one such example, but I would refine your reading of history to note that the decision “to start doing research to prepare for that challenge” came quite a while before the multilevel security challenge was termed the “confinement problem”. I think that refinement is consistent with the nice paper last year by Alex Crowell et al., entitled “The Confinement Problem: 40 Years Later”, which cites the 1973 report by D. E. Bell and L. J. LaPadula specifically directed at a mathematical model to address the multilevel security problem.
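The essence of that Bell and LaPadula model can be made concrete in a few lines. Below is a minimal illustrative sketch in C, with hypothetical types and levels of my own choosing rather than code from GEMSOS or any system discussed here; a real lattice also carries need-to-know categories, but plain levels suffice to show the idea. Note that it is the “no write down” rule that blocks exactly the leakage path the confinement problem names.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical clearance/classification levels, lowest to highest. */
typedef enum { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET } Level;

/* Simple security property ("no read up"): a subject may read an
   object only if the subject's level dominates the object's. */
static bool may_read(Level subject, Level object) {
    return subject >= object;
}

/* *-property ("no write down"): a subject may write an object only if
   the object's level dominates the subject's, so information cannot
   flow from a high level down to a lower one. */
static bool may_write(Level subject, Level object) {
    return object >= subject;
}

int main(void) {
    /* A SECRET subject may read down but may not write down. */
    printf("SECRET reads UNCLASSIFIED:  %s\n",
           may_read(SECRET, UNCLASSIFIED) ? "allowed" : "denied");
    printf("SECRET writes UNCLASSIFIED: %s\n",
           may_write(SECRET, UNCLASSIFIED) ? "allowed" : "denied");
    return 0;
}
```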
Luke: If someone wanted to find additional examples of people working for a decade or two “in advance” on a difficult computer security or safety challenge, who else would you recommend they ask, and where would you recommend they look for examples?
Roger: I would not expect to find a lot of examples of people working for a decade or two “in advance”, for a couple of reasons. Even the few cases of which I am aware are not particularly notable for having their significant results successfully brought to bear when the challenge arrived.
First, in U.S. business culture, rewards for executives seem to diminish significantly with the amount of time until investments produce a return. It is therefore quite difficult to get a business to make a significant and persistent commitment to having people work on a challenge a decade or two “in advance”.
Second, at the end of the Cold War the U.S. government seemed to suffer a significant loss of urgency about seriously addressing the problem of dealing with a witted adversary whose likely tool of choice is software subversion. The early efforts, such as the Anderson panel, were focused on high assurance security to address what has more recently been termed the advanced persistent threat (APT). This was (and is) a genuinely “difficult” security challenge. For a number of years now the government has given little attention to high assurance security.
In terms of past examples, the substantial Multics effort sustained over a number of years by Honeywell (initially General Electric) is one of the few. Decades in advance of widespread commercial need, Multics addressed the challenge of creating a “computing utility” for which security was a central value proposition. Many decades later essentially this vision has been given the new name of “cloud computing” – unfortunately without significant attention to security. This early commercial investment was not particularly well rewarded.
Intel provides a second example. The Multics innovations for security significantly influenced Intel to make investments in their x86 architecture in anticipation of demand for security. Their inclusion of Multics-like hardware segmentation and protection rings for security was not easy at a time when transistors on a chip were scarce. Again, Intel did not get a good return on this investment. As Professor Bill Caelli of Australia pointed out in his paper on trusted systems, the GEMSOS security kernel (for which I was the architect) was a rare example of an operating system actually using this powerful hardware support. This RTOS from a small business hardly constituted a major market win for the Intel investment in segmentation and protection rings.
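As a small concrete illustration of that hardware support (a sketch of my own, not Intel or GEMSOS code): on x86, the low two bits of the code segment selector encode the current privilege level, i.e. the protection ring, so even an ordinary user program can observe which ring it is executing in.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t cs;
    /* Read the CS segment selector; its low two bits are the Current
       Privilege Level: ring 0 is the most privileged (kernel), ring 3
       is user mode. Requires GCC/Clang on x86 or x86-64. */
    __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
    printf("running in ring %d\n", cs & 3);   /* typically prints 3 */
    return 0;
}
```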
In terms of other people to ask about this, I don’t know many. Dr. David Bell is the one who has rather systematically looked at the evolution of solutions to difficult computer security challenges, as reflected in the invited retrospective paper (plus its addendum) on his security model work that he presented at ACSAC in 2005. It is clear that he has rather carefully thought about these issues.
Luke: From your perspective, what are some of the most important contemporary avenues of research on high assurance security systems?
E.g. in software safety the experts will list things like formal verification, program synthesis, simplex architectures, verified libraries and compilers as in VST, progress in formal validation à la Cimatti et al. (2012), “clean-slate” high assurance projects like HACMS, systems-based approaches like Leveson (2012), and tools that make high assurance methods easier to apply. There’s some overlap between software safety and security, but what avenues of research would you name in a high assurance security context?
Roger: It seems to me that some of the most important contemporary avenues of research on high assurance security systems are those related to what over the years have been persistent “hard problems”. In my 2001 ACSAC invited essay on Information security, I listed half a dozen “Remaining Hard Problems”:
- Verifying the absence of trap doors in hardware.
- Verifying the absence of trap doors and other malicious software in development tools.
- Covert timing channels.
- Covert channels in end-to-end encryption systems.
- Formal methods for corresponding source code to a formal specification, and object code to source.
- Denial of service attacks.
It is not surprising, I suppose, that many of these were also among the high assurance security challenges identified more than 15 years earlier in the TCSEC (Orange Book) section on “Beyond Class (A1)”. I am disappointed in our profession and its sponsors, and have to say that from my perspective this is still a reasonable list of hard problems, and it remains an important set of contemporary avenues of research. Unfortunately, relative to the importance of high assurance security, very little well-focused research effort is being directed at them. One area from the list is receiving some attention, although often buried in larger, less significant projects: recent research in hardware verification, the item at the top of my list, is producing valuable results.
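To give a feel for why covert timing channels are on that list, consider a toy sketch (my own illustration in C, not drawn from any of the systems or papers discussed here): a “confined” program can leak bits to anyone who can observe its timing simply by modulating how long an operation takes, without ever writing to an overt channel. Closing such channels with high assurance, rather than merely demonstrating them, is the hard part.

```c
#include <stdio.h>
#include <time.h>

/* Sender side: leak one bit per interval by choosing a long or short
   delay (1 -> ~200 ms, 0 -> ~50 ms). */
static void send_bit(int bit) {
    struct timespec d = { 0, bit ? 200000000L : 50000000L };
    nanosleep(&d, NULL);
}

int main(void) {
    const int secret[] = { 1, 0, 1, 1 };   /* bits the sender leaks */
    for (int i = 0; i < 4; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        send_bit(secret[i]);               /* confined sender */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        /* Receiver side: recover the bit from elapsed time alone. */
        long ms = (long)(t1.tv_sec - t0.tv_sec) * 1000
                + (t1.tv_nsec - t0.tv_nsec) / 1000000;
        printf("observed %3ld ms -> decoded bit %d\n", ms, ms > 125);
    }
    return 0;
}
```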
In addition to the hard problems, there is a further area that I consider at least as important in terms of its potential for dramatic positive impact. This is what David Bell, in the pair of “Looking Back” papers I referenced earlier, called for: pursuit of enhanced high assurance “security in the form of crafting and sharing reference implementations of widely needed components.” David called for “Producing security archetypes or reference implementations of common networking components”. Such reference implementations are secure systems capabilities that technologists think they can figure out how to produce, but the devil is in the details, so only research that includes an implementation can confirm or deny that hypothesis. He provided his own list of proposed reference implementations. More recently, in various public presentations, I have proposed my own list, which includes the following:
- MLS network attached storage (NAS)
- High Assurance MLS Linux, Unix, *ix
- Guards, filters, and other cross domain solutions (CDSs)
- Networked Windows (Thin Client)
- Real-time exec (appliances)
- Critical infrastructure platform
- Identity mgt PKI (Quality Attribute)
- MLS handheld network devices (e.g., PDA)
- Confined financial apps (e.g., credit card)
In summary, my bottom line is that I believe the most important contemporary avenues of research on high assurance security systems fall into these two areas: (1) addressing the remaining hard problems for the future, and (2) research projects that include creating reference implementations, which provide ways to leverage the successful research results of the past for practical secure systems.
Luke: You write “I am disappointed in our profession and its sponsors…” In retrospect, what seem to be the major obstacles to faster progress on the hard problems of computer security?
Two possible explanations that come to mind are: (1) Researchers and funders prefer easier research projects, because they have a higher chance of “success” even though the social value may be lower. And/or perhaps (2) hard problems tend to require long-term research efforts, and current institutions are less well-suited to executing long-term efforts.
But these are just two of my guesses; I’d like to hear your perspective on what the major obstacles seem to be.
Roger: Although not particularly insightful by itself, I think it is fair to say at the outset that the major reason for the lack of faster progress on the hard problems of computer security is the lack of substantial resources being applied in an informed way to address them. That quickly leads to your question about the major obstacles to doing so. It seems to me there are two tightly related aspects. First, there must be funders who are willing and able to sponsor such work. Second, there must be motivated researchers interested in pursuing such research. Both are needed. Practically speaking, what researchers advocate and promote can significantly affect the funders, but researchers are not likely to continue pursuing paths where funders seem independently uninterested (or even hostile).
I have already noted that in our culture the rewards for executives seem to diminish significantly with the amount of time until investments produce a return. Although that makes it difficult to get a commitment to long-term efforts, this is not unique to computer security, and various sustained research efforts are pursued notwithstanding. Unfortunately, there is evidence that computer security continues to face some additional, fairly unique challenges in the availability of resources for addressing hard problems whose results could provide significantly higher assurance of security.
Among these challenges is the rather unusual characteristic of significant vested interests that may not really want high assurance security. I touched on some of this in my oral history interview you mentioned at the beginning of our exchange, and others have alluded to additional observations of this sort.
The comments that have been made related to potential financial vested interests include the following:
- Cyber security is a multi-billion dollar industry, with much of that revenue generated as a result of the rampant flaws in consumer products built from low-assurance cyber components. In my history interview I mentioned that the Black Forest Group, a consortium of international Fortune 50-type companies, gave up its pursuit of a high assurance Public Key Infrastructure (PKI) when “they concluded that the vested interests against high assurance just made it impractical”. That vested-interest environment can discourage commercial sponsorship for work on the hard problems.
- Beyond consumer products, I also noted a couple of anecdotal reports of the aerospace industry discouraging broadly applicable, reusable high assurance solutions, because such solutions threatened a significant revenue stream from repeatedly addressing security on an ad hoc basis in similar contexts.
- It seems the research community can have its own vested interest in having unsolved problems to work on. Several have commented on how what seems to be an almost systematic loss of corporate memory facilitates resources for projects that reinvent the wheel. That does little to encourage researchers to give attention to the hard problems. I noted in my history interview that “people in the research community fought strongly against having The Orange Book be a standard because it dampened the interest in research”.
- The 1997 IEEE history paper by MacKenzie and Pottinger reports the serious fragmentation of security research attention after the field moved away from a focused effort based on the Orange Book, under which “the path to achieving computer security appeared clear.” Faster progress on the hard problems of computer security is unlikely in the absence of relatively focused research attention.
In addition, there have been separate comments about challenges related to government policy issues:
- There has long been a tension between policy favoring security solutions whose technology is closed and restricted (e.g., classified, as in government cryptography) and policy favoring technology that is open and transparent (e.g., the Orange Book). This is discussed in some detail in the 1985 Harvard University paper on “Information Security” by George Jelen (an NSA employee), and the policy issues persist to this day. Significant pressure toward restricted results does not enhance the prospects for researching security solutions. The cancellation of major commercial work on the difficult problem of a highly secure virtual machine monitor (sorely needed today for cloud computing) is a real-world example from the past of the draconian effect of government restrictions on access to research results. Paul Karger’s 1991 IEEE paper reported that a major reason for that cancellation was that “U.S. State Department export controls on operating systems at the B3 and A1 levels are extremely onerous and would likely have interfered with many potential sales”.
- What has been called “the equities issue” reflects the fact that intelligence gathering can benefit from exploiting vulnerabilities, and thus can create a powerful vested interest in there NOT being major progress on the hard problems of computer security. Bruce Schneier, in his May 2008 blog, argues that this is a particular challenge when the same agency has major responsibility for both defensive solutions and exploiting vulnerabilities; e.g., he says “The equities issue has long been hotly debated inside the NSA.”
- Government policy can strongly favor dominant government control over security solutions rather than encourage commercial development, especially for high assurance, where it really matters. As mentioned by David Bell in the “Looking Back” papers I cited earlier, a powerful monopolist ploy is to promise high assurance government-endorsed solutions in the future in order to discourage applying near-term resources in support of commercial offerings. David reported that “Boeing’s Dan Schnackenberg believed that NSA through MISSI put all the Class A1 vendors out of business,” and I personally saw that kind of activity at that time, as well as more recently. There are other examples beyond MISSI, with names like SAT, NetTop, SELinux, MILS and HAP, and like MISSI not one of them ever actually delivered verified high assurance, e.g., a Class A1 evaluation. As David Bell noted, such monopolistic government policy discourages commercial investment in making progress, including on the hard problems.
- Resources are always scarce, and current policy for addressing cyber security strongly emphasizes massive surveillance aimed at finding evidence of exploitation of weak systems. The meager resources applied to high assurance defenses against a witted adversary (which are the focus of the hard problems) pale in comparison to the billions being spent on surveillance. As I said about surveillance in my July 2012 keynote article for ERCIM, “this misplaced reliance stifles introduction of proven and mature technology that can dramatically reduce the cyber risks to privacy… an excuse for overreaching surveillance to capture and disseminate identifiable information without a willing and knowing grant of access.”
In summary, I believe the primary reason we have not seen faster progress on the hard problems of computer security is the persistent decision not to apply significant resources. A major class of obstacles to applying resources is the strong vested interests that are somewhat unique to computer security: not only commercial financial vested interests but also government policy vested interests that for years seem to have successfully created major obstacles.
Luke: Thanks, Roger!