
We recently received three questions from a customer that I thought were worth sharing, along with the answers. If you’re not tracking the vulnerability management space on a daily basis, there are a few things that aren’t really intuitive.

How many new vulnerabilities are published each month?

To answer this question, you first have to pick a source for determining what’s published. There’s more than one database of published vulnerabilities out there, but I’ll use data from the US National Vulnerability Database (NVD), which is keyed off of the CVE IDs produced by MITRE. It’s a reasonable standard, though there are certainly others, like OSVDB and SecurityFocus.

[Chart: new CVEs published per year, NVD data]

What does this chart tell us? It tells us that every year there are thousands of new vulnerabilities published. It’s important to understand that this isn’t the same as the number of vulnerabilities that exist, nor is it the same as the number of vulnerabilities you have in your environment.

In fact, there are probably thousands more vulnerabilities in existence that haven’t been published. Those fall into the category of zero-days: conditions that are not publicly known.

Also, you probably don’t have all of these vulnerabilities in your environment, because you don’t have all of the affected software; more than a few of them have available patches, and you have applied at least some of those patches.

That’s an awful lot about what this chart doesn’t tell us. Again: every year there are thousands of newly published vulnerabilities.

Now, this is not broken down by month, as the original question asked. If they were distributed evenly, you’d have roughly 382 new CVEs published per month. The actual distribution for 2013 so far, from the NVD data, looks like this.

[Chart: new CVEs published per month in 2013, NVD data]

We might draw the reasonable conclusion that there are between 200 and 500 new vulnerabilities published each month.
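If you want to reproduce this kind of monthly tally yourself, the counting is simple once you have a publication date for each CVE. Here’s a minimal Python sketch; the nvd_2013.json file name and its "published" field are placeholders standing in for whichever NVD feed or export you actually download.

```python
import json
from collections import Counter
from datetime import datetime

def cves_per_month(records):
    """Tally CVE records by the month in which they were published."""
    counts = Counter()
    for rec in records:
        published = datetime.fromisoformat(rec["published"])
        counts[(published.year, published.month)] += 1
    return counts

# Hypothetical local export of NVD data: a JSON list of objects, each with an
# ISO-8601 "published" date. Real NVD feeds use different field names, so
# adjust the parsing to whatever you actually download.
with open("nvd_2013.json") as fh:
    records = json.load(fh)

for (year, month), count in sorted(cves_per_month(records).items()):
    print(f"{year}-{month:02d}: {count} new CVEs")
```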

Doesn’t this make comparing vulnerability counts month to month hard?

It absolutely does! But there’s an important assumption buried in this question: that there’s value in comparing vulnerability counts from one time period to the next. It makes sense, right? If your objective is to reduce vulnerabilities, then the metric by which you should measure success is vulnerability count.

Honestly, there’s nothing wrong with that statement; it’s a nice, tight coupling of objective and metric. The problem is that the objective is usually *not* to reduce vulnerabilities. A vulnerability, after all, is really just a proxy for some amount of risk; the objective is usually to reduce risk, and it is possible, if not likely, that you can reduce risk while increasing vulnerability count. You may have less vulnerability risk, but more vulnerabilities, next month.
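A quick example with made-up numbers shows how that can happen. Treating the sum of the scores as a rough stand-in for risk, the second month has more vulnerabilities but less risk:

```python
# Made-up scores, purely to illustrate count versus risk.
last_month = [10.0, 9.3, 7.2]            # three high-scoring vulnerabilities
this_month = [4.3, 3.5, 2.6, 2.1, 1.9]   # five low-scoring vulnerabilities

print(len(last_month), round(sum(last_month), 1))  # 3 vulnerabilities, 26.5 total score
print(len(this_month), round(sum(this_month), 1))  # 5 vulnerabilities, 14.4 total score
```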

There are two reasons for this reality. First, not all vulnerabilities present the same amount of risk. Let’s look at two examples:

CVE-2012-2897 MS12-075 TrueType Font Parsing Vulnerability

  • This vulnerability provides local privileged access to a host and has a CVSS Base score of 10 and a Tripwire score of 116.

CVE-2012-3515 RHSA-2012:1236: Xen Local Privilege Escalation Vulnerability

  • This vulnerability provides local privileged access to a host and has a CVSS Base score of 7.2 and a Tripwire score of 3.

Regardless of which scoring system you use, these two conditions do not present the same amount of risk, even though they both provide the same depth of access if exploited. Under either scoring system, the difference comes down to how each vulnerability is exploitable (you can read this for more information about vulnerability scores), but the point is that they are not the same.

Second, remediation removes risk, not just a count. When you remediate a vulnerability, you remove that vulnerability’s risk from your environment; you might remove multiple low-scoring vulnerabilities, or a single high-scoring vulnerability, and achieve the same risk reduction.
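As a sketch (again with made-up identifiers and scores), two very different remediation choices can remove the same amount of risk:

```python
# Hypothetical open vulnerabilities and scores, for illustration only.
open_vulns = {"vuln-A": 9.6, "vuln-B": 3.1, "vuln-C": 3.3, "vuln-D": 3.2, "vuln-E": 5.0}

fix_one_high = ["vuln-A"]                       # remediate a single high-scoring finding
fix_three_low = ["vuln-B", "vuln-C", "vuln-D"]  # remediate three low-scoring findings

print(round(sum(open_vulns[v] for v in fix_one_high), 1))   # 9.6 risk removed
print(round(sum(open_vulns[v] for v in fix_three_low), 1))  # 9.6 risk removed
```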

If we can’t compare vulnerability counts, how can we tell that we’re fixing things?

First, don’t compare vulnerability counts. It’s a bad metric and you should avoid it. Instead, track vulnerability risk and remediation activities. These are two sides of the same process, so tracking both allows you to manage the relationship between the two. Again, this assumes the objective of reducing vulnerability risk, but you may have other objectives to track.

In this case, however, the metrics you probably want to consider are a vulnerability risk score from your VM tool and a remediation performance metric, most likely generated from your workflow system. Vulnerability aging is a popular metric for tracking remediation activity, but be forewarned that it’s a secondary measurement; the environment may change in ways that affect vulnerability aging without any corresponding remediation activity.
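Here’s a minimal sketch of those two metrics side by side, assuming your VM tool can export each open finding with a risk score and a first-detected date (the data below is invented):

```python
from datetime import date

# Invented open findings: (finding id, risk score, date first detected).
open_findings = [
    ("finding-1", 9.3, date(2013, 6, 2)),
    ("finding-2", 5.0, date(2013, 8, 19)),
    ("finding-3", 2.1, date(2013, 9, 30)),
]

today = date(2013, 10, 15)

# Metric 1: total open vulnerability risk, straight from the scores.
total_risk = sum(score for _, score, _ in open_findings)

# Metric 2: average vulnerability age in days, a common (if secondary) proxy
# for remediation performance.
average_age = sum((today - first_seen).days
                  for _, _, first_seen in open_findings) / len(open_findings)

print(f"Open vulnerability risk: {total_risk:.1f}")
print(f"Average vulnerability age: {average_age:.0f} days")
```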

Ideally, as you drive remediation processes, you’ll see metrics that reflect that activity and a corresponding reduction in vulnerability risk. If you don’t, then you start investigating why and tweaking the process. It could be that you’re not taking action on the highest-risk vulnerabilities, or that you’re not keeping pace with the new vulnerability risk (not count!) being introduced.

By understanding the relationship between vulnerability counts, vulnerability risk and remediation activities, you can avoid some of the pitfalls of vulnerability mis-management and ensure that your organization not only effectively measures risk, but manages to reduce it over time as well.

 
