Dr. David A. Cook is Associate Professor of Computer Science at Stephen F. Austin State University, where he teaches Software Engineering, Modeling and Simulation, and Enterprise Security. Prior to this, he was Senior Research Scientist and Principal Member of the Technical Staff at AEgis Technologies, working as a Verification, Validation, and Accreditation agent supporting the Airborne Laser. Dr. Cook has over 40 years’ experience in software development and management. He was an associate professor and department research director at the USAF Academy, and deputy department head of the Software Professional Development Program at AFIT. He has been a consultant for the Software Technology Support Center at Hill AFB, UT, for 19 years.
Dr. Cook has a Ph.D. in Computer Science from Texas A&M University. He is a Team Chair for ABET, Past President of the Society for Computer Simulation International, and Chair of ACM SIGAda.
Luke Muehlhauser: In various articles and talks (e.g. Cook 2006), you’ve discussed the software verification, validation, and accreditation (VV&A) process. Though the general process is used widely, the VV&A term is often used when discussing projects governed by DoD 5000.61. Can you explain to whom DoD 5000.61 applies, and how it is used in practice?
David A. Cook: DOD 5000.61 applies to all Department of Defense activities involving modeling and simulation. For all practical purposes, it applies to all models and simulations that are used by the DOD. This implies that it also applies to all models and simulations created by civilian contractors that are used for DOD purposes.
The purpose of the directive, aside from specifying who the “accreditation authority” is (more on this later), is to require verification and validation for all models and simulations, and then also to require that each model and simulation be accredited for its intended use. This is the critical part, as verification and validation have almost universally been a part of software development within the DOD. Verification asks the question “Are we building the system in a quality manner?”, or “Are we building the system right?” Verification of a model (and the resulting execution of the model, which provides a simulation) goes a bit further, and asks the question “Do the model as built and the results of the simulation actually represent the conceptual design and specifications of the system?” The difference is that in a model and simulation, you have to show that your design and specifications of the system you envision are correctly translated into code, and that the data provided to the code also matches the specification.
Validation asks the question “Are we building a system that meets the users’ actual needs?”, or “Are we building the right system?” Again, the validation of a model and resulting simulation is a bit more complex than non-M&S validation. In modeling and simulation, validation has to show that the model and the simulation both accurately represent the “real world” from the perspective of the intended use.
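To make the distinction concrete, here is a minimal Python sketch of the two questions. Everything in it is invented for illustration: a toy drag-free projectile model stands in for “the system,” a check against the specification stands in for verification, and made-up “field data” stands in for validation against the real world.

```python
import math

def predicted_range(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Toy model under test: ideal, drag-free projectile range."""
    theta = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

# Verification: did we build the model right? The conceptual specification
# says range at 45 degrees is v^2 / g; check the code implements that.
assert abs(predicted_range(100, 45) - 100 ** 2 / 9.81) < 1e-6

# Validation: did we build the right model? Compare against (invented)
# measured field data for the intended use, within a stated tolerance.
field_data = [(100, 45, 950.0), (80, 30, 560.0)]  # (speed m/s, angle deg, measured range m)
for speed, angle, measured in field_data:
    error = abs(predicted_range(speed, angle) - measured) / measured
    print(f"v={speed} m/s, angle={angle} deg: relative error {error:.1%}")
```

Verification fails if the code disagrees with its specification; validation fails if the correctly built code still disagrees with real-world measurements for the intended use.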
These two activities are extremely difficult when you are building models and providing simulation results for notional systems that might not actually exist in the real world. For example, it would be difficult to provide V&V for a manned Mars mission because, in the real world, there is no manned Mars lander yet! Therefore, for notional systems, V&V might require estimation and guesswork. However, guesswork and estimation might be the best you can do!
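One way to make that guesswork systematic is to sample the uncertain parameters from expert-estimated ranges and report a spread of outcomes rather than a single number. The sketch below does this for a hypothetical lander propellant budget; the equation is the standard rocket equation, but every numeric range is an invented placeholder, not real mission data.

```python
import math
import random

def descent_fuel_kg(dry_mass_kg: float, isp_s: float, delta_v_m_s: float) -> float:
    """Tsiolkovsky rocket equation, solved for propellant mass."""
    return dry_mass_kg * (math.exp(delta_v_m_s / (isp_s * 9.81)) - 1)

random.seed(0)
samples = sorted(
    descent_fuel_kg(
        dry_mass_kg=random.uniform(8_000, 12_000),  # expert-estimated range (kg)
        isp_s=random.uniform(300, 360),             # engine performance guess (s)
        delta_v_m_s=random.uniform(600, 900),       # landing budget guess (m/s)
    )
    for _ in range(10_000)
)
# Report the spread, not a single point estimate, so the accreditation
# authority can see how much of the answer rests on guesswork.
print(f"median propellant estimate: {samples[5_000]:,.0f} kg")
print(f"90% interval: {samples[500]:,.0f} to {samples[9_500]:,.0f} kg")
```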
5000.61 further requires that there be an ultimate authority, the “accreditation authority,” that is willing to say “Based on the verification and validation performed on this model, I certify that it provides answers that are acceptable for its intended use.” Again, if you are building a notional system, this requires experts to say “These are guesses, but they are the best guesses available, and the system is as close a model to the real world as possible. We accredit this system to provide simulation results that are acceptable.” If, for example, an accredited simulation shows that a proposed new airplane would be able to carry 100,000 pounds of payload, but the resulting airplane, once built, can only carry 5,000 pounds, the accreditation authority would certainly bear some of the blame for the problem.
In practice, there are processes for providing VV&A. Military Standard 3022 provides a standard template for recording VV&A activities, and many DOD agencies have their own VV&A repositories where common model and simulation VV&A artifacts (and associated documentation) are kept.
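A repository entry might look something like the following sketch. MIL-STD-3022 defines the actual report templates; the field names here are simplified, hypothetical stand-ins, not the standard’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class VVARecord:
    """Illustrative record of VV&A artifacts for one model/simulation."""
    model_name: str
    version: str
    intended_use: str
    verification_evidence: list[str] = field(default_factory=list)
    validation_evidence: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    accreditation_authority: str = ""
    accredited: bool = False

# Hypothetical entry, loosely mirroring the laser example discussed below.
record = VVARecord(
    model_name="LaserPropagationModel",
    version="2.1",
    intended_use="Daytime upper-atmosphere beam propagation",
    verification_evidence=["unit tests against conceptual model", "code review"],
    validation_evidence=["comparison with daytime field measurements"],
    known_limitations=["not validated at night", "flat reflecting surfaces only"],
)
```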
There are literally hundreds of ways to verify and validate a model (and its associated simulation execution). The V&V “agents” (who have been tasked with performing V&V) provide a recommendation to the accreditation authority, listing the acceptable uses and (the critical part) the limits of the model and simulation. For example, a model and simulation might provide an accurate representation of the propagation of a laser beam in the upper atmosphere during daylight hours, but not be a valid simulation at night, due to temperature-related changes in atmospheric propagation. The same model and simulation might be a valid predictor of a laser bouncing off of a “flat surface,” but not off of uneven terrain.
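Those documented limits can even be enforced in code, so the simulation refuses to run outside the envelope the V&V agents signed off on. The sketch below is hypothetical; the function, the limits, and the placeholder “physics” are invented to mirror the laser example above.

```python
class OutsideAccreditedUse(Exception):
    """Raised when the simulation is invoked outside its accredited limits."""

ACCREDITED_LIMITS = {
    "time_of_day": ("daylight",),  # not validated at night
    "surface": ("flat",),          # not validated for uneven terrain
}

def propagate_beam(time_of_day: str, surface: str, power_kw: float) -> float:
    """Run the (placeholder) propagation model, guarding the validity domain."""
    if time_of_day not in ACCREDITED_LIMITS["time_of_day"]:
        raise OutsideAccreditedUse("model not validated for nighttime propagation")
    if surface not in ACCREDITED_LIMITS["surface"]:
        raise OutsideAccreditedUse("model not validated for uneven terrain")
    return 0.8 * power_kw  # placeholder physics: 80% of power delivered

print(propagate_beam("daylight", "flat", 100.0))  # inside the accredited envelope
try:
    propagate_beam("night", "flat", 100.0)
except OutsideAccreditedUse as err:
    print("rejected:", err)
```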