
Earlier this week, I had the honor of presenting on the topic of Continuous Diagnostics and Mitigation (CDM) alongside some very distinguished information security practitioners in the federal government. The CDM program didn’t spring up from nothing; it is more an evolutionary step forward than a brand-new entity.

A discussion of how technology is evolving to meet the requirements of CDM requires some historical context. For those of you outside the federal government, be aware that this brief history is a very sparse representation of the work that went into these efforts; the ‘cherry picking’ is deliberate, intended only to provide context for the ‘evolutionary steps’ described below.


FISMA, the Federal Information Security Management Act of 2002, is really the start. It’s the legislative seed that germinates into a variety of programs, requirements and, ultimately, budget. In a very loose sense, it’s followed by the development and implementation of iPost at the State Department. I think it’s safe to say that iPost was the first attempt to develop a truly multi-product security metrics framework. The development of SCAP (the Security Content Automation Protocol), and then subsequently Cyberscope, aimed to provide consistency.

First, SCAP targeted consistency of assessment by converting prose standards into machine-readable XML. While that was certainly important and useful, its success drove the need for a way to consolidate and track progress; enter Cyberscope.

With SCAP and Cyberscope in place, assessment frequency became the focus, moving from what started as three-year assessments, driven by the need for an Authority to Operate (ATO), to annual assessments, to monthly, and now towards a 72-hour cycle with the CDM requirements.

And that brings us to the present, with a picture of what CDM aims to accomplish through the process outlined by the Department of Homeland Security. To be clear, there’s $6B at play here. That’s billion with a B.

In a practical sense, there are two key changes with the move from continuous monitoring to CDM, and they drive three evolutionary steps for the technology in the market.

First, we have the 72-hour assessment cycle, and then we have the inclusion of steps 4 and 5 of that process. Until now, the focus for continuous monitoring has been the assessment activity itself. Agencies want to be ‘green’ with regard to the data they report into Cyberscope, and they’re measured in this capacity by the Office of Management and Budget.

CDM starts to drive not only towards more frequent assessments, but also towards taking action to improve security and reduce risk. The shift in requirements is important because not all compliant tools are equivalent.

Compliance ≠ Security

I like the metaphor of lawn mowing. The existing focus on assessment has created a compliance requirement around performing assessments. That, in turn, drives a need for tools that deliver on that requirement.

Imagine that you have a requirement to own a lawn mower. Being a fiscally prudent homeowner, you want the lowest cost tool to comply with that requirement. That might very well be a pair of nice, sharp scissors. If, however, that requirement shifts to actually mowing the lawn, you might decide that an investment in a more efficient lawnmower is warranted.

The lesson here is that compliance is not equivalent to security. You can maintain compliance (own a lawn mower) without actually improving security (mowing the lawn). In fact, if you know that the requirement is going to change, you might be able to make a more informed choice to start with.

That brings us to the three evolutionary steps for CDM technology:

  1. Assessment Frequency
  2. Data Reconciliation
  3. Risk Scoring and Prioritization

Assessment Frequency

Let’s start by imagining that you’re running scans at some interval. In those scans, you capture some number of assets in your environment, but the assets that appear and disappear between scans aren’t accounted for. You can apply this logic not only to assets, but also to other changes in the environment. The wider the gap between scans, the more you are potentially missing.
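
To put a rough number on that intuition, here’s a minimal, back-of-the-envelope simulation (my own illustration, not part of any CDM tooling) of how often a periodic scan ever sees a short-lived asset, such as a laptop that’s on the network for a working day. The intervals and asset lifetime are assumptions chosen purely for illustration.

```python
import random

def simulate_coverage(scan_interval_hours: float, asset_lifetime_hours: float, trials: int = 10_000) -> float:
    """Estimate how often a periodic scan ever observes a short-lived asset.

    Each trial places the asset's arrival at a random point within one scan
    interval; the asset is 'seen' only if it is still on the network when the
    next scan fires.
    """
    seen = 0
    for _ in range(trials):
        appears_at = random.uniform(0, scan_interval_hours)
        if appears_at + asset_lifetime_hours >= scan_interval_hours:
            seen += 1
    return seen / trials

# A transient asset (e.g. a laptop) on the network for roughly 8 hours:
for interval in (24, 72, 24 * 30):  # daily, 72-hour (CDM target), monthly
    print(f"scan every {interval:>3}h -> asset seen {simulate_coverage(interval, 8):.0%} of the time")
```

The exact percentages don’t matter; the direction does. The wider the scan interval, the smaller the chance a transient asset is ever observed at all.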

And the remedy is fairly straightforward: move to more continuous assessment. There’s a very reasonable conclusion that you ultimately want a real-time assessment option here, but for the time being, nearly continuous scanning will suffice. As you ramp up the scan frequency, you gather the data required, but you also produce a new problem: data volume. This leads us to the second evolutionary step: data reconciliation.

Data Reconciliation

The challenge with data reconciliation starts with a classic ‘pick two’ problem. You might be familiar with the ‘good, cheap, fast; pick two’ trope. In this case, it’s ‘current, complete, accurate.’

To take an action, you need to determine which host and vulnerability to fix. If you have scan data to pull from, this is the point at which you attempt to create a report that is current, complete and accurate. It’s also the point at which you’re likely to find frustration with current tools.

If you start with current data, the first response is to pull the results from the most recent scan. That option, however, will miss any assets or elements that weren’t captured in that one scan, i.e. it’s an incomplete data set. If you then aim to create a complete data set by aggregating results from multiple scans, you very quickly end up with inaccurate results due to overly simplified aggregation methods, where you might see duplicate assets or vulnerabilities that have already been patched.

You can correct for the inaccuracies by pulling the scan results from the past scans directly, but you’re sacrificing currency to do so. In the end, you could manually determine which scans contain which assets and manually aggregate the data to achieve all three, but that’s hardly a sustainable strategy.

To evolve past this ‘pick two’ problem, the tools must aggregate results with some intelligence, understanding that host presence and discovered findings can’t be treated the same way, and tracking hosts across multiple scans for consistency. If you manage to do this, manually or automatically, you end up with an accurate data set, but you’re still lacking the context needed to actually take action.
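
To make ‘aggregate with some intelligence’ a little more concrete, here’s a minimal sketch of the idea. It treats host presence and findings differently: a host seen in any scan stays in the picture, but only its most recent scan determines which vulnerabilities are still open. The ScanResult structure and reconcile function are assumptions invented for this example, not any product’s actual API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScanResult:
    scan_time: datetime
    host: str              # a stable host identifier, tracked across scans
    open_vulns: set[str]   # vulnerability IDs observed as open in this scan

def reconcile(scans: list[ScanResult]) -> dict[str, dict]:
    """Merge many scans into a view that is current, complete and accurate.

    Completeness: every host seen in *any* scan stays in the picture.
    Currency/accuracy: for each host, only its most recent scan decides which
    vulnerabilities are still open, so findings patched since an older scan
    are not carried forward and duplicate hosts collapse to one record.
    """
    latest: dict[str, ScanResult] = {}
    for scan in scans:
        prior = latest.get(scan.host)
        if prior is None or scan.scan_time > prior.scan_time:
            latest[scan.host] = scan
    return {
        host: {"last_seen": s.scan_time, "open_vulns": sorted(s.open_vulns)}
        for host, s in latest.items()
    }

# Host 'web01' was scanned twice; the newer scan shows CVE-2014-0160 remediated.
merged = reconcile([
    ScanResult(datetime(2014, 5, 1), "web01", {"CVE-2014-0160", "CVE-2013-2566"}),
    ScanResult(datetime(2014, 5, 4), "web01", {"CVE-2013-2566"}),
    ScanResult(datetime(2014, 5, 1), "db01", {"CVE-2012-1675"}),
])
print(merged)
```

The key design choice is that host presence accumulates across scans while finding state does not; a naive union of everything is exactly what produces duplicate assets and already-patched vulnerabilities in a report.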

Risk Scoring and Prioritization

CDM includes a substantial emphasis on hierarchical reporting of assessment and progress into a dashboard. In most cases, a CVSS (Common Vulnerability Scoring System) or CVSS-like metric is being used for these purposes, and that’s a good thing. To effectively compare across disparate groups, a standard, agreed-upon metric is a requirement.

However, asking questions about progress is different from asking questions about what to fix today and different metrics can serve distinct purposes. Any metric that contains limits or bounds creates the potential for a clustering problem.

You can illustrate the difference between progress metrics and tactical metrics by asking the question, “Which x is the most x?” where x is your upper bound. Which high is the most high? Which 10 is the most 10?

These types of bounded metrics are very, very useful for talking about risk outside of information security, but they don’t facilitate granular mitigation decisions. They are directionally accurate in aggregate, but not specifically distinct individually.

Finally, an important ingredient in making tactical risk mitigation decisions, i.e. what should I fix today, is including appropriate organizational context. Given a set of hosts with equal vulnerability risk, the choice of where to take action can be substantially more effective when business context is included.

What does context mean in this case? Consider a host that contains sensitive data vs. one that doesn’t. Consider an asset that provides a service to the public vs. one that’s purely internal. These considerations make a material difference not for compliance with process, but for actual risk reduction.
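
As a sketch of what folding in that context might look like, consider the hypothetical scoring function below. The weights for sensitive data and public exposure are made up for illustration; the point is that an unbounded, context-adjusted score lets you rank two hosts that look identical under a bounded metric like CVSS.

```python
def host_priority(cvss_scores: list[float], sensitive_data: bool, public_facing: bool) -> float:
    """Illustrative only: fold business context into an unbounded per-host score.

    Summing the findings (rather than taking the maximum) keeps the score
    unbounded, so two hosts that are both 'high' can still be ranked; the
    context multipliers are made-up weights an organization would tune itself.
    """
    base = sum(cvss_scores)        # more open findings -> more risk
    multiplier = 1.0
    if sensitive_data:
        multiplier *= 2.0          # hypothetical weight for sensitive data
    if public_facing:
        multiplier *= 1.5          # hypothetical weight for public exposure
    return base * multiplier

# Two hosts with identical findings, very different context:
findings = [9.8, 7.5, 7.5]
print(host_priority(findings, sensitive_data=True, public_facing=True))    # fix first
print(host_priority(findings, sensitive_data=False, public_facing=False))  # can wait
```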


We’ve seen tremendous progress in securing our government’s information technology infrastructure since FISMA was introduced in 2002. The threat environment continues to change, however, and so must the requirements for information security.

Through strong partnership between vendors, integrators and the government, we can adapt tools and techniques as effectively as possible. The three evolutionary steps above are underway now, but Phase 2 of CDM is on its way and will bring new challenges for vendors and government alike.
