Today’s post is all about Control 4 of the CSIS 20 Critical Security Controls – Continuous Vulnerability Assessment and Remediation (the last post pertained to Control 3; a list of Controls covered to date is provided near the bottom of the post). Here I’ll explore the (40) requirements I’ve parsed out of the control (I used the PDF version, but the online version is here) and offer my thoughts on what I’ve found [*].

Key Takeaways

  1. Operational Maturity. Perhaps it is because the vulnerability/patch cycle has been around for so long, but I found this Control to be somewhat different than the others I have examined so far. It’s different in that it focuses more on the time it takes to accomplish specific tasks than on the quantity of specific results. For example, this Control wants you to measure how quickly you’re applying available patches, not how many you’ve applied. Another example: this Control wants you to prioritize the application of patches based on vulnerability criticality, without concern for how many there might be. This Control, in other words, is all about the process of continuous vulnerability management. I see other controls leaning in this direction in the very near future – the efficiency of security processes is what’s most important, and they can always be improved over time with increasingly demanding standards/benchmarks.
  2. Interoperability. This Control is no different from any of the others in that it really is part of the overall framework’s intricate web.  The three most obvious points of integration are with the asset management, alerting, and ticketing systems. Somewhat less obvious, but no less important, are integration opportunities with LDAP for user roles and the relationship of vulnerability management with configuration management. These points of interoperability are not always explicitly mentioned, but are critically important to the security automation story we would like to tell in the future.
  3. Coverage.  One thing this Control does well is lean heavily on ensuring that you’ve covered your enterprise. At more than one point, the requirements explicitly state that integration with the asset inventory system is important. As you evaluate scanning tools, be sure to have a list of all software asset classes straight out of your asset inventory system. This will help you verify that a candidate tool provides adequate coverage of your enterprise.

Potential Areas Of Improvement

  • Provide more explanation. At times, the requirements are not obvious – even to security professionals. Consider what it must be like to read some of these requirements from the organizational, non-security perspective: you are asked to track or trend a particular metric because it provides some insight, but you are never told what that insight is. This Control, as with others in the framework, would do well to provide further explanation in such cases. If the reason for doing work is not clearly articulated, that work will not be supported by the organization.
  • Categorize requirements more appropriately. This might be somewhat of a nit, but I found a couple of requirements describing metrics that were not in the “metrics” section of the framework. This may simply be an oversight, but it’s still something that could be corrected. If I’m moving quickly or if I’m only interested in the prescribed metrics for a given control, I would miss those that are inappropriately categorized.
  • General housekeeping. There are a few things that I would change, but nothing critical. Some of the requirements should probably be reworded (one in particular talks about patches when I think it would be far better to talk about vulnerabilities), and others can be safely omitted.

Requirement Listing

  1. Description: Run automated vulnerability scanning tools against all systems on their networks on a weekly or more frequent basis using a SCAP-validated vulnerability scanner that looks for both code-based vulnerabilities (CVE) and configuration-based vulnerabilities (CCE).
    • Notes:  I’m going to guess that most enterprises do not have SCAP-validated scanners in their shop at this point. Not because SCAP isn’t any good – it is good – but because it has fallen short in terms of available content. The point here is that using SCAP-validated scanners should enable you to take vulnerability scanning content from multiple sources as it is released. That means you can react faster. That means you shrink the adversary’s window of opportunity. That’s a good thing.
  2. Description: Where feasible, vulnerability scanning should occur on a daily basis using an up-to-date vulnerability scanning tool.
    • Notes:  Even better would be to have real-time vulnerability scanning enabled where it is available. The fact of the matter is that a daily scan is probably good enough, but for critical systems, having real-time vulnerability detection enabled is just that much better. Look for tools that have this capability.
  3. Description: Any vulnerability identified should be remediated in a timely manner, with critical vulnerabilities fixed within 48 hours.
    • Notes:  Here is a point where the entire vulnerability management system comes into play. Fixing vulnerabilities, especially in the face of a CCB, is not likely to be fully automatic, but may be automated with specific human touch points. Look for tools that are capable of integrating easily (or even out of the box) with your ticketing and change management systems, and then automating the fix.
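As a rough illustration of that hand-off (not tied to any particular scanner or ticketing product), the sketch below pushes findings above a severity threshold to a ticketing API with a 48-hour due date for critical items. The endpoint, field names, and threshold are all hypothetical; substitute your scanner’s export format and your ticketing system’s API.

```python
# Hypothetical hand-off from scan results to a ticketing system.
import requests

TICKET_API = "https://ticketing.example.internal/api/tickets"  # hypothetical endpoint

def open_remediation_tickets(findings, severity_threshold=7.0):
    """Open one ticket per finding at or above the CVSS threshold."""
    for finding in findings:
        if finding["cvss"] < severity_threshold:
            continue
        payload = {
            "title": f"Remediate {finding['cve']} on {finding['host']}",
            "priority": "critical" if finding["cvss"] >= 9.0 else "high",
            "due_hours": 48,  # this Control asks for critical fixes within 48 hours
            "description": finding.get("summary", ""),
        }
        requests.post(TICKET_API, json=payload, timeout=10).raise_for_status()

open_remediation_tickets([
    {"cve": "CVE-2013-2251", "host": "web01", "cvss": 9.3,
     "summary": "Apache Struts DefaultActionMapper remote code execution"},
])
```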
  4. Description: Event logs should be correlated with information from vulnerability scans to fulfill two goals. First, personnel should verify that the activity of the regular vulnerability scanning tools themselves is logged. Second, personnel should be able to correlate attack detection events with earlier vulnerability scanning results to determine whether the given exploit was used against a target known to be vulnerable.
    • Notes:  This seems like an out-of-place requirement – it’s very focused on SIEM and audit logging. It’s not unreasonable, but it is yet another indication of the different ways these controls interact – it can be complicated.
  5. Description: Utilize a dedicated account for authenticated vulnerability scans.
    • Notes:  This is simply good practice. If you use a dedicated account, it’ll be easier to 1) lock it down and 2) correlate, in your logs, what is actually doing the vulnerability scanning. Again, this seems to be something that touches a system or process described in another control.
  6. Description: The scanning account should not be used for any other administrative activities and tied to specific IP addresses.
    • Notes:  This is, in part, the lockdown I noted for the previous requirement. I’m sure there are other things you can do to lock down the account, and this is where a good benchmark comes into play – take a look at Center for Internet Security or DISA sources for recommendations.
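For what it’s worth, here is a small verification sketch in the same spirit: flag any authentication by the dedicated scan account from an address that is not a known scanner. The account name, approved IP list, and sshd-style auth log location are all assumptions, not part of the Control, and a CIS or DISA benchmark remains the better source for full lockdown guidance.

```python
# Flag logins by the dedicated scan account from unapproved addresses.
# Account name, approved IPs, and log path/format are assumptions (typical Linux sshd log).
import re

SCAN_ACCOUNT = "svc-vulnscan"                    # hypothetical dedicated account
APPROVED_SCANNERS = {"10.0.5.10", "10.0.5.11"}   # hypothetical scanner addresses

LOGIN_PATTERN = re.compile(r"Accepted \S+ for (\S+) from (\S+)")

def unexpected_scan_logins(auth_log_path="/var/log/auth.log"):
    hits = []
    with open(auth_log_path) as log:
        for line in log:
            match = LOGIN_PATTERN.search(line)
            if match and match.group(1) == SCAN_ACCOUNT and match.group(2) not in APPROVED_SCANNERS:
                hits.append(line.strip())
    return hits
```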
  7. Description: Ensure only authorized employees have access to the vulnerability management user interface and that roles are applied to each user.
    • Notes:  I really enjoy reading this requirement. It’s one that explicitly recognizes that the tools used to enforce technical security controls are, themselves, subject to security controls. This is not always the case in other control frameworks, or even in other controls here – too often the fact is merely alluded to or left for the reader to infer. That said, recognize that you need to keep a list of authorized users for your vulnerability management system, and that list should be role-based. Here’s another point of interoperability that would be nice to see – LDAP integration might work here.
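A minimal sketch of what that LDAP integration might look like, assuming a directory group that holds authorized vulnerability management users and the Python ldap3 library; the group name, base DNs, and service credentials are hypothetical.

```python
# Pull the members of a hypothetical "vuln-scanner-admins" group so the console's
# user list can be reconciled against the directory.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.internal", get_info=ALL)
conn = Connection(server, user="cn=svc-ldap,ou=service,dc=example,dc=com",
                  password="change-me", auto_bind=True)

conn.search(
    search_base="ou=groups,dc=example,dc=com",
    search_filter="(cn=vuln-scanner-admins)",
    attributes=["member"],
)
authorized_users = set(conn.entries[0].member.values) if conn.entries else set()
print(sorted(authorized_users))
```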
  8. Description: Subscribe to vulnerability intelligence services in order to stay aware of emerging exposures.
    • Notes:  Typically, your vulnerability management solution will either include or offer further services for such a subscription. But, your vendor is not the only source of vulnerability information, and you should not necessarily rely on them exclusively. Depending upon your specific enterprise needs, it may be advantageous to source vulnerabilities from several locations to ensure maximum vulnerability coverage. A simple Google search for “vulnerability intelligence sources” or “vulnerability intelligence service” turns up plenty of options. The challenge, of course, is in ensuring that the vulnerability descriptions you receive are both human and machine readable, and that the machine readable format is something that your particular tool understands.
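A minimal sketch of polling a machine-readable feed and filtering it against the software in your asset inventory; the feed URL and JSON shape are hypothetical stand-ins for whatever your intelligence provider exposes.

```python
# Poll a (hypothetical) advisory feed and keep only entries that match products
# known to the asset inventory.
import requests

FEED_URL = "https://vulnfeed.example.com/api/v1/advisories"  # hypothetical

def new_relevant_advisories(inventory_products, since):
    advisories = requests.get(FEED_URL, params={"published_after": since}, timeout=30).json()
    return [a for a in advisories if a.get("product", "").lower() in inventory_products]

relevant = new_relevant_advisories({"openssl", "apache httpd"}, since="2013-06-01")
```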
  9. Description: Deploy automated patch management tools and software update tools for operating system and software/applications on all systems for which such tools are available and safe.
    • Notes:  This requirement is simply stating that you need to ensure coverage for all classes of software in your asset inventory. The tricky part that I see right away is in-house and/or custom applications/integrations. How are your patches for these types of systems going to be automated? I understand that the requirement is worded in a way that allows for some non-automated patching, but it seems that, over time, we (as an industry) ought to be striving for standardizing patch management to the point where in-house and/or custom applications can be included in automation.
  10. Description: “Patches should be applied to all systems, even systems that are properly air gapped.”
    • Notes:  To me, this requirement is unnecessary, but it might betray a prevalent perception in practice where “logic” would dictate that if a system is air gapped, then its vulnerabilities are not exploitable. This “logic,” of course, ignores the potential for inside jobs, or even the use of insiders as an unknowing vector.
  11. Description: Carefully monitor logs associated with any scanning activity and associated administrator accounts to ensure that all scanning activity and associated access via the privileged account is limited to the timeframes of legitimate scans.
    • Notes:  I’m not exactly sure why this audit logging requirement is placed in this control other than to be as explicit as possible with respect to ensuring that technical security controls are also subject to security controls. In my mind, this is a requirement better left to another control – that which is concerned with audit logging.
  12. Description: In addition to unauthenticated vulnerability scanning, organizations should ensure that all vulnerability scanning is performed in authenticated mode either with agents running locally on each end system to analyze the security configuration or with remote scanners that are given administrative rights on the system being tested.
    • Notes:  A lot of this requirement is fluff. It would be enough to simply state that both authenticated and unauthenticated vulnerability scanners should be leveraged by an enterprise. How that scanner gets the job done is not something that belongs in any control framework. This requirement is simply recognizing that some vulnerabilities will go undetected without authentication to the system.
  13. Description: Compare the results from back-to-back vulnerability scans to verify that vulnerabilities were addressed either by patching, implementing a compensating control, or documenting and accepting a reasonable business risk.
    • Notes:  This requirement is fairly straightforward, but it contains what I find to be an interesting wrinkle – compensating controls. The vulnerability scans themselves will not understand that you’ve compensated for a vulnerability in some other way (at least, not to my knowledge). You’re going to need to track this outside of your vulnerability management tool by way of exception, waiver, risk acceptance, or compensating control. Additionally, and perhaps more problematic, is the reliance on “risk.” This takes some level of assessment and a good understanding of how a particular software vulnerability may impact one or more business processes. Do you have that kind of granularity in your security program? Are you able to review a list of vulnerabilities on a given system and say, “yes, if this vulnerability is successfully exploited, then I’m going to be down for up to x number of days, which will cost the company y dollars in revenue and z dollars for recovery”?
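One way to keep that tracking outside the scanner is a small exception register keyed by host and CVE with a review date, so each scan’s findings can be split into new, accepted, and lapsed items (the lapsed ones feed the review called for in the next requirement). The field names below are illustrative, not drawn from any particular tool.

```python
# Illustrative exception/risk-acceptance register and triage of scan findings.
from datetime import date

exception_register = {
    ("web01", "CVE-2013-2251"): {"treatment": "compensating control (WAF rule)",
                                 "review_by": date(2013, 12, 1)},
}

def triage(findings, today=None):
    today = today or date.today()
    new, accepted, lapsed = [], [], []
    for f in findings:
        entry = exception_register.get((f["host"], f["cve"]))
        if entry is None:
            new.append(f)
        elif entry["review_by"] < today:
            lapsed.append(f)      # acceptance is past its review date; re-assess
        else:
            accepted.append(f)
    return new, accepted, lapsed
```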
  14. Description: Such acceptance of business risks for existing vulnerabilities should be periodically reviewed to determine if newer compensating controls or subsequent patches can address vulnerabilities that were previously accepted, or if conditions have changed increasing the risk.
    • Notes:  This requirement seems to be speaking of tracking vulnerability exceptions, mitigations, and acceptances. There is no guidance provided with respect to periodicity. It seems to me that once per patch cycle is adequate, if possible.
  15. Description: Vulnerability scanning tools should be tuned to compare services that are listening on each machine against a list of authorized services.
    • Notes:  This is a gray area in my opinion. Yes, we want to detect when unauthorized software is listening on an open port. But, where we seem to have been confined to the context of software vulnerabilities we are now expanding the context to include something that should be covered by configuration monitoring. That said, I don’t see much wrong with the idea of covering this particularly important base with more than one technical control.
  16. Description: The tools should be further tuned to identify changes over time on systems for both authorized and unauthorized services.
    • Notes:  Should a vulnerability management tool be held accountable for tracking changes to system configuration over time? This, too, seems like a gray area, where the requirement is a blend of configuration, change, and vulnerability management relying heavily on asset management. This particular requirement exemplifies where additional explanation would go a long way – how would this information be used? What does it characterize? Why is that characterization important? It’s almost as if control frameworks are written more for security professionals than business professionals, which I can understand, but with which I do not fully agree.
  17. Description: Measure the delay in patching new vulnerabilities and ensure that the delay is equal to or less than the benchmarks set forth by the organization.
    • Notes:  Now we’re getting to the operational aspect of things, which is right on track for where we need to go. What benchmarks has your organization chosen? Sometimes these will be forced upon you, a la PCI, and other times you’ll just pick something to use, like the CIS metrics. Either way, you need to have some standard in place – the benchmark – and procedures to follow, so that you can track this particular metric (which really ought to be in the metrics section).
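A minimal sketch of the metric itself, with per-severity benchmark values that are purely illustrative: compute the days from patch release to deployment and flag anything that exceeds the benchmark your organization has chosen.

```python
# Patch-delay metric: deployments that exceeded the organization's benchmark.
from datetime import date

BENCHMARK_DAYS = {"critical": 2, "high": 14, "medium": 30, "low": 90}  # illustrative

def patch_delay_exceptions(deployments):
    late = []
    for d in deployments:
        delay = (d["deployed_on"] - d["patch_released_on"]).days
        if delay > BENCHMARK_DAYS[d["severity"]]:
            late.append({**d, "delay_days": delay})
    return late

print(patch_delay_exceptions([
    {"host": "web01", "severity": "critical",
     "patch_released_on": date(2013, 7, 1), "deployed_on": date(2013, 7, 5)},
]))  # 4 days elapsed against a 2-day benchmark
```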
  18. Description: Alternative countermeasures should be considered if patches are not available.
    • Notes:  Great. Alternative countermeasures should be considered. This is, in effect, a non-requirement. If a patch for a given vulnerability is not available, then the risk presented by that vulnerability ought to be addressed in one of the standard ways – in fact, this has already been alluded to by requiring a vulnerability tracking process. If you’re tracking, then you’re considering countermeasures. I would remove this requirement.
  19. Description: Critical patches must be evaluated in a test environment before being pushed into production on enterprise systems.
    • Notes:  I’m going to assume that a “critical patch” is one addressing one or more “critical vulnerabilities.” This is an operational requirement that demands, if followed, that you have a test environment which mimics your production environment. This is fairly straightforward. Note that this does not mean you can’t have an unpatched environment; it just means that where you’re going to be patching critical systems, you’d be better off testing the patch beforehand.
  20. Description: If such patches break critical business applications on test machines, the organization must devise other mitigating controls that block exploitation on systems where the patch cannot be deployed because of its impact on business functionality.
    • Notes:  This is common sense from a security perspective, but may be unreasonable from a business perspective. This control is prescribing risk treatment by specifying “mitigating controls” to “block exploitation.” While this may be true in practice – who wouldn’t want to mitigate a vulnerability on a critical system? – it is still something that seems over-prescriptive for a control framework.
  21. Description: Address the most damaging vulnerabilities first.
    • Notes:  This sounds great. What does “damaging” mean in this context?
  22. Description: Prioritize the vulnerable assets based on both the technical and organization-specific business risks.
    • Notes:  This should, if you have an outstanding asset management program, be a fairly easy thing to do. In fact, it should be something that can be largely automated.
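A sketch of one such automation, assuming the asset inventory can supply a business-criticality weight per host; the weighting scheme itself is an assumption, not something the Control prescribes.

```python
# Order remediation work by technical score (CVSS) weighted by asset criticality.
asset_criticality = {"web01": 0.9, "dev-test03": 0.2}  # hypothetical, from the asset inventory

def prioritized(findings):
    return sorted(findings,
                  key=lambda f: f["cvss"] * asset_criticality.get(f["host"], 0.5),
                  reverse=True)

for f in prioritized([{"host": "dev-test03", "cve": "CVE-XXXX-0001", "cvss": 9.8},
                      {"host": "web01", "cve": "CVE-XXXX-0002", "cvss": 6.5}]):
    print(f["host"], f["cve"])
# web01 comes first despite the lower CVSS score because of its business criticality
```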
  23. Description: An industry-wide or corporate-wide vulnerability ranking may be inadequate to prioritize which specific assets to address first.
    • Notes:  This “requirement” needs to be reworded to something stronger. I believe what it’s trying to convey is: Be sure to assess each vulnerability in the context of your organization before prioritizing your assets. The way this requirement is presently worded makes it seem like an explanation for the real, presently phantom, requirement.
  24. Description: A phased rollout can be used to minimize the impact to the organization.
    • Notes:  This is an operational facet that, in my opinion, shouldn’t be in any one control. A phased rollout can apply to a variety of controls, and is something that can certainly help a security manager align with the organization more easily. This is important to convey, but I believe it’s buried in the wrong place and should be brought to the surface above the controls proper.
  25. Description: To help standardize the definitions of discovered vulnerabilities in multiple departments of an organization or even across organizations, it is preferable to use vulnerability scanning tools that measure security flaws and map them to vulnerabilities and issues categorized using one or more of the following industry-recognized vulnerability, configuration, and platform classification schemes and languages: CVE, CCE, OVAL, CPE, CVSS, and/or XCCDF.
    • Notes:  I’ve already recognized that “Procedures and Tools” are not necessarily strict requirements, but they are a source to consider. Aside from encouraging the use of standards, this particular requirement throws configurations in with software vulnerabilities, which underscores the tie between the two disciplines of configuration management and vulnerability management. This is further evidence substantiating the claim that SCM should include vulnerability management.
  26. Description: The frequency of scanning activities, however, should increase as the diversity of an organization’s systems increases to account for the varying patch cycles of each vendor.
    • Notes:  Exception reviews may be heavily impacted by this correct observation. If all of your vendors are on different patch cycles (ad hoc, weekly, monthly, quarterly), then your review process is best applied on those schedules, so make it as flexible and efficient as possible. You might consider requirements – or characteristics – of your review process before you go about defining it.
  27. Description: In addition to the scanning tools that check for vulnerabilities and misconfigurations across the network, various free and commercial tools can evaluate security settings and configurations of local machines on which they are installed. Such tools can provide fine-grained insight into unauthorized changes in configuration or the inadvertent introduction of security weaknesses by administrators.
    • Notes:  I am, again, confused as to why misconfigurations are included here. I really am coming to believe (though not necessarily with enough supporting argument) that the term “vulnerability” needs to be explicitly considered more broadly than it is today to include misconfigurations and even process, procedure, and benchmark flaws. The suggestion provided by this pseudo-requirement is to use free and/or commercial tools to evaluate security settings on local machines (I assume “on the box” or “on the terminal”). The assertion is that doing so can yield insights. What insight? Give an example or two. Also, given the previous recommendation of using both authenticated and unauthenticated scanners, it seems that this requirement is moot.
  28. Description: Effective organizations link their vulnerability scanners with problem-ticketing systems that automatically monitor and report progress on fixing problems, and that make unmitigated critical vulnerabilities visible to higher levels of management to ensure the problems are solved.
    • Notes:  The way this requirement is worded has some fairly far-reaching consequences. If you’re an effective organization, you’ve linked your vulnerability management system with your ticketing system. Ok, that makes perfect sense (but requires interoperability for which we have no standards today, so it’s going to be expensive). More concerning is the implication that ticketing systems should understand the criticality of the request and that the request pertains to a vulnerability. I think this is a reach. Criticality is one thing, but to expect a ticketing system to understand that it’s a vulnerability is another. Perhaps ticketing systems can be configured to do so, but I’m thinking in terms of some day getting to “out of the box” integration between security tools and those other operational tools upon which they are expected to rely.
  29. Description: The most effective vulnerability scanning tools compare the results of the current scan with previous scans to determine how the vulnerabilities in the environment have changed over time. Security personnel use these features to conduct vulnerability trending from month to month.
    • Notes:  This is a restatement of a previous requirement that asks for examination of back-to-back scans. Perhaps this is an attempt to provide some explanation for that previous requirement, but I would still want more explanation – especially around the trends. What are you trending? Age?
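To make the comparison concrete, here is a sketch of diffing two scan result sets keyed by host and CVE to see what was remediated, what is new, and what persists; the persisting set is the raw material for aging and month-to-month trend reports. The result format is illustrative.

```python
# Diff back-to-back scan results to support remediation verification and trending.
def diff_scans(previous, current):
    prev_keys = {(f["host"], f["cve"]) for f in previous}
    curr_keys = {(f["host"], f["cve"]) for f in current}
    return {
        "remediated": prev_keys - curr_keys,
        "new": curr_keys - prev_keys,
        "persisting": prev_keys & curr_keys,  # candidates for aging/trend reports
    }

print(diff_scans(
    previous=[{"host": "web01", "cve": "CVE-2013-2251"}],
    current=[{"host": "web01", "cve": "CVE-2013-2251"},
             {"host": "db02", "cve": "CVE-2013-1899"}],
))
```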
  30. Description: As vulnerabilities related to unpatched systems are discovered by scanning tools, security personnel should determine and document the amount of time that elapses between the public release of a patch for the system and the occurrence of the vulnerability scan.
    • Notes:  This seems like a metric to me, and not necessarily a bad one, but I think it’s misworded. I would prefer to see tracking of the time from public announcement of the vulnerability (which often coincides with a corresponding patch) to an applicable scan for that vulnerability. This metric can help determine how well you keep up with situational awareness – how long are you, on average, without awareness of critical vulnerabilities? (As one example.)
  31. Description: If this time window exceeds the organization’s benchmarks for deployment of the given patch’s criticality level, security personnel should note the delay and determine if a deviation was formally documented for the system and its patch. If not, the security team should work with management to improve the patching process.
    • Notes:  This does not follow cleanly from the previous requirement, though I understand the spirit of the request. The previous measurement is from public announcement to scan, not to application of an available patch. An additional wrinkle would be thrown into this requirement if the requirement above were applied to “vulnerability” and not confined to “patch.” The spirit of this requirement is really for the organization to have some standard for its time to patch and to hold itself accountable to that standard.
  32. Description: All patch checks should reconcile system patches with a list of patches each vendor has announced on its website.
    • Notes:  This seems very manual, and it requires the cooperation of software vendors in general. The biggest vendors might have vulnerability and patch information available on their sites, but not all will. I suppose that’s why this requirement is a “should” and not a “must.” Still, a different approach seems reasonable – perhaps checking a variety of sources would be an equivalent option.
  33. Description: All machines identified by the asset inventory system associated with Critical Control 1 must be scanned for vulnerabilities.
    • Notes:  I like the spirit of this metric, but it seems incomplete. I believe it means to say that you need to prove that you’re covering all of your assets, but it does not explicitly state that your assets include the software installed on those machines, even though a previous requirement recognizes that not all software can be covered by vulnerability management.
  34. Description: Additionally, if the vulnerability scanner identifies any devices not included in the asset inventory, it must alert or send e-mail to enterprise administrative personnel within 24 hours.
    • Notes:  This, again, requires that your vulnerability management system is tied in some way to the alerting system, and not necessarily in support of the vulnerability management process, but in support of asset management – specifically, in discovery (and likely discovering unauthorized assets). I wouldn’t look forward to this metric, however, until I had a solid asset management process in place.
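A minimal sketch of that reconciliation, assuming the scanner can export the hosts it touched and the inventory can export what it knows about; the SMTP host and addresses are hypothetical.

```python
# Reconcile scanner-discovered hosts against the asset inventory and e-mail
# administrators about anything unknown (per the 24-hour requirement).
import smtplib
from email.message import EmailMessage

def alert_unknown_hosts(scanned_hosts, inventory_hosts,
                        smtp_host="smtp.example.internal",   # hypothetical
                        to_addr="it-admins@example.com"):    # hypothetical
    unknown = set(scanned_hosts) - set(inventory_hosts)
    if not unknown:
        return
    msg = EmailMessage()
    msg["Subject"] = f"Vulnerability scan found {len(unknown)} host(s) not in the asset inventory"
    msg["From"] = "vulnscan@example.com"
    msg["To"] = to_addr
    msg.set_content("\n".join(sorted(unknown)))
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```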
  35. Description: The system must be able to alert or e-mail enterprise administrative personnel within one hour of weekly or daily automated vulnerability scans being completed
    • Notes:  This is a simple metric. Just be sure that your tools can integrate with an alerting system, or provide their own. Also, you’re going to want to ensure that the appropriate administrative personnel are notified.
  36. Description: If a scan cannot be completed successfully, the system must alert or send e-mail to administrative personnel within one hour indicating that the scan has not completed successfully.
    • Notes:  This is an especially important metric when it comes to critical vulnerabilities. If you can’t scan a system, there’s something wrong to begin with. If you can’t scan the system to determine whether you’re vulnerable to a zero-day with exploits in use in the wild, then you’ve got a real problem. I would like to see this window be less than one hour – ideally the alert would fire right after the failed scan attempt.
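One way to tighten that window is to wrap the scan invocation itself and raise the alert as soon as it exits abnormally. The scanner command line and the notify() helper below are hypothetical stand-ins for your own tooling.

```python
# Alert immediately on a failed scan rather than waiting for a scheduled check.
import subprocess

def notify(subject, body):
    # placeholder: route to your alerting or e-mail system of choice
    print(f"ALERT: {subject}\n{body}")

def run_scan(command=("/usr/local/bin/vulnscan", "--all-assets")):  # hypothetical CLI
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        notify("Vulnerability scan did not complete successfully", result.stderr[-2000:])
    return result.returncode
```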
  37. Description: Every 24 hours after that point, the system must alert or send e-mail about the status of uncompleted scans, until normal scanning resumes.
    • Notes:  This nag is useful, I think, as it is with the other controls. Of course, at this point, it’s getting old to reread the requirement, and this seems like something that could be pulled out of each of the controls and specified at a more abstract level.
  38. Description: Automated patch management tools must alert or send e-mail to administrative personnel within 24 hours of the successful installation of new patches.
    • Notes:  I’m not sure that I agree with this metric. It might be an option, but I would much prefer a system that lets me know when it deviates from the expected.
  39. Description: The evaluation team must verify that scanning tools have successfully completed their weekly or daily scans for the previous 30 cycles of scanning by reviewing archived alerts and reports to ensure that the scan was completed.
    • Notes:  None.
  40. Description: If a scan could not be completed in that timeframe, the evaluation team must verify that an alert or e-mail was generated indicating that the scan did not finish.
    • Notes:  None.

Other Controls Reviewed In This Series

Footnotes

[*] A method and format explanation can be found at the beginning of the post on Control 1.

Editor’s Note: This article was written by a former contributor to The State of Security who now resides with a non-profit group with an excellent reputation. We thank him for his opinions and perspective, and wish we could acknowledge him directly for his outstanding efforts on this series.
