“All we care about are actual vulnerabilities, and whether they’ve been patched,” the Security Officer said. I hope she was mistaken, because her statement suggests her organization lacks (among other things) a clear picture of its risks.
Systems should deliver net value. The decision to authorize a system should therefore consider the risks the system poses, in addition to the benefits it delivers. That makes risk assessment an important part of an organization’s decision to authorize a system for use.
In the NIST Risk Management Framework (RMF), a coarse-grained risk assessment is effectively performed in the very first step, system categorization, because the categorization determines the security control baseline. A more explicit and more detailed risk assessment is performed later, in RMF task 5-3, immediately before deciding whether the risk is acceptable and the system can be authorized.
High-level view of NIST guidance on risk assessment. NIST SP 800-30 suggests describing how threat sources initiate threat events that exploit vulnerabilities.
Vulnerabilities are defined in SP 800-30 as:
“a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy.”
The above definition of vulnerability could easily be interpreted to cover (for example) the more than 75,000 vulnerabilities in the National Vulnerability Database (NVD). But would that be practical? Would you want to read and act on a document that discusses more than 75,000 things, and that is guaranteed to be out of date by the time you finish reading it?
So exactly which vulnerabilities should go into a risk assessment? Do we pick the worst of the worst from the NVD, and concentrate on those?
A far better idea is to use higher-level, more abstract vulnerabilities in your risk assessments, and to leave the concrete vulnerabilities of real components out of them. For example, use “buffer overflow” in general, instead of specific buffer overflows in specific components. That vulnerability category should then be covered by controls that ensure you are not susceptible to buffer overflows regardless of the component. I hope you don’t presume you are safe from as-yet-unknown buffer overflows just because you’ve patched all the applicable buffer overflows in the NVD! What’s really needed (in this example) are controls that ensure testing of every possible field each time a patch is applied, or validation of field lengths by a higher-level, more general filter before a field is processed further.
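To make the second kind of control concrete, here is a minimal sketch of such a general length-validation filter: every incoming field is checked against a declared maximum before any component-specific code touches it. The field names and limits are made up for illustration.

```python
# Hypothetical example: a generic filter that validates field lengths
# before any field is processed further. Names and limits are illustrative.
MAX_LENGTHS = {"username": 64, "email": 254, "comment": 4096}

def validate_fields(fields: dict) -> dict:
    """Reject any unknown field, and any field exceeding its declared maximum."""
    for name, value in fields.items():
        limit = MAX_LENGTHS.get(name)
        if limit is None:
            raise ValueError(f"unexpected field: {name}")
        if len(value) > limit:
            raise ValueError(f"field {name!r} exceeds {limit} characters")
    return fields
```

Because the filter sits above the individual components, it guards against overlong inputs whether or not a specific overflow in a specific component has been published and patched.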
Risk assessments should therefore discuss abstract vulnerability categories, unless a specific concrete vulnerability is so ubiquitous and severe that it deserves to be elevated from the tactical level to the strategic. Perhaps POODLE, Heartbleed, and Shellshock would warrant such treatment. I doubt it, though.
A candidate listing of vulnerability categories can be found in the Common Weakness Enumeration (CWE). While the 1,004 entries in the CWE are far more tractable than the 75,000+ NVD vulnerabilities, they are probably still too many. You might consider using only the leaves of the hierarchy, or only some of the upper nodes.
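If you went the leaves route, extracting them is simple once you have the CWE parent-to-child (“ParentOf”) relations as a mapping. A minimal sketch; the handful of IDs below are real CWE identifiers, but the tiny mapping is illustrative, not the full CWE graph:

```python
# Hypothetical example: find the leaf entries of a weakness hierarchy.
# parent id -> set of child ids (a tiny illustrative slice, not the real graph)
edges = {
    "CWE-118": {"CWE-119"},
    "CWE-119": {"CWE-120", "CWE-125"},
}

def leaves(edges: dict) -> set:
    """Return every node that appears in the hierarchy but has no children."""
    all_ids = set(edges) | {c for children in edges.values() for c in children}
    parents = set(edges)  # any node with children is not a leaf
    return all_ids - parents
```

Here `leaves(edges)` yields `CWE-120` and `CWE-125`, the most specific categories in this slice, which is roughly the granularity you might want for a risk assessment.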
I’m not saying that concrete vulnerabilities should always be ignored. Clearly, the NVD is very important: it needs to be monitored for flaws in the components you use, and vulnerable components need to be expeditiously patched or replaced (provided that doesn’t make the situation even worse). That maintenance is part of your day-to-day operations, not strategic planning. The excessive detail of concrete vulnerabilities is worse than useless in a risk assessment: it will drown you in an ocean of minutiae, and that ocean doesn’t belong in a planning document intended for human consumption.
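That day-to-day monitoring is mechanical enough to automate. A minimal sketch, assuming a simplified record format rather than the real NVD JSON schema: match each entry’s affected product and version against a component inventory, and flag the hits for patching.

```python
# Hypothetical example: flag vulnerability entries affecting components
# you actually run. The record format is a simplified placeholder,
# not the real NVD feed schema.
inventory = {("openssl", "1.0.1"), ("bash", "4.3")}  # (product, version) pairs

nvd_entries = [  # illustrative, simplified feed records
    {"id": "CVE-2014-0160", "product": "openssl", "version": "1.0.1"},
    {"id": "CVE-2014-6271", "product": "bash", "version": "4.3"},
    {"id": "CVE-2014-3566", "product": "openssl", "version": "3.0"},
]

def affected(entries: list, inventory: set) -> list:
    """Return the IDs of entries whose (product, version) is in the inventory."""
    return [e["id"] for e in entries
            if (e["product"], e["version"]) in inventory]
```

The output of such a script feeds the patching queue, not the risk assessment: it tells operations what to fix this week, while the assessment keeps talking about the abstract categories those fixes fall under.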