In the first installment of this series, we provided a general overview of the concept of continuous security monitoring (CSM), and in the second article we explained a little more about CSM and how it can help your organization react faster and better to an ever-evolving threat landscape. In this article, we will examine the challenges of obtaining full visibility into your environment.
Since you can never “get ahead of the threat,” you need to react faster to what’s happening, which requires shortening the window of exposure with extensive security monitoring. We tipped our hats to both the PCI Council and the US government for requiring monitoring as a key aspect of their mandates.
The US government pushed it a step further by including ‘continuous’ in its definition. We love the term ‘continuous’, but it has caused a lot of confusion among folks responsible for monitoring.
As we are prone to do, it is time to wade through the hyperbole to define what we mean by Continuous Security Monitoring, and then identify some of the challenges you will face in moving towards this ideal.
We need not spend any time defining security monitoring — we have been writing about it for years. But we need to consider how continuous any monitoring really needs to be, given the current state of the art in attack tactics. Many solutions claim to offer “continuous monitoring”, but all too many simply scan or otherwise assess devices every couple of days — if that often.
Which would seem to be acceptable, given NIST’s official definition of Continuous Security Monitoring:
Information security continuous* monitoring (ISCM) is maintaining ongoing* awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.
*The terms “continuous” and “ongoing” in this context mean that security controls and organizational risks are assessed, analyzed and reported at a frequency sufficient to support risk-based security decisions as needed to adequately protect organization information. Data collection, no matter how frequent, is performed at discrete intervals. NIST 800-137 (PDF)
Wait, what? So to NIST ‘continuous’ doesn’t actually mean continuous, but instead “a frequency … needed to adequately protect organization information.”
Sorry, but no. We have heard all the excuses for why it is not practical to monitor everything continuously, including concerns about device resource consumption, excessive bandwidth usage, and an inability to deal with an avalanche of alerts. All those issues ring hollow because intermittent assessment leaves a window of exposure for attackers, and for critical devices you don’t have that luxury.
Our definition of continuous is more in line with the dictionary definition:
con·tin·u·ous: adjective \kən-ˈtin-yü-əs\ — marked by uninterrupted extension in space, time, or sequence.
The key word there is uninterrupted: always active. The constructionist definition of continuous security monitoring should be that the devices in question are monitored at all times — there is no window where attackers (or internal operations people) can make a change adversely impacting security posture without it being immediately detected.
But we are neither constructionist nor religious — we take a realistic and pragmatic approach, which means accepting that not every organization can or should monitor all devices at all times.
We incorporate asset criticality into our concept of CSM. Some devices have access to very important stuff. You know, the stuff that if leaked will result in blood (likely yours and your team’s) flowing through the halls. The stuff that just cannot be compromised. Those devices need to be monitored continuously.
And then there is everything else. In that “everything else” bucket land all the other devices you need to monitor and assess, but not as urgently or frequently. For the sake of efficiency, you will monitor these devices periodically, so long as you have other methods to detect and identify compromised devices, such as network analytics/anomaly detection and aggressive egress filtering.
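This tiered approach can be sketched in code. The sketch below is purely illustrative — the tier names, intervals, and `is_assessment_due` helper are our own assumptions, not part of any product or standard — but it shows the core idea: continuous-tier assets are always due for assessment, while everything else follows a periodic schedule.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical tiers and intervals -- tune these to your own risk tolerance.
SCAN_INTERVALS = {
    "critical": timedelta(seconds=0),  # continuous: always due for assessment
    "standard": timedelta(hours=24),   # periodic assessment
    "low":      timedelta(days=7),
}

@dataclass
class Asset:
    name: str
    tier: str
    last_assessed: datetime

def is_assessment_due(asset: Asset, now: datetime) -> bool:
    """Continuous-tier assets are always due; others follow their interval."""
    interval = SCAN_INTERVALS[asset.tier]
    return now - asset.last_assessed >= interval
```

The zero-length interval for the critical tier is what makes that tier effectively continuous: no matter when the asset was last assessed, it is immediately due again.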
The secret to success at CSM is in choosing your high-criticality assets well, so we will get into that later. Another critical success factor is discovering when new devices appear, classifying them quickly, and getting them into the monitoring system quickly. This requires strong process and technology to ensure you have visibility into all your networks, can aggregate the data you need, and have sufficient computational analysis horsepower.
Adapting the Network Security Operations process map we published a few years back, here is our Continuous Security Monitoring Process:
The process is broken down into three phases. In the Plan phase you define policies, classify assets, and continuously discover new assets in your environment. In the Monitor phase you pull data from devices and other sources, to aggregate and eventually analyze, in order to alert if a potential attack or other situation of concern becomes apparent.
You will monitor not only to detect attacks, but also to confirm changes, identify unauthorized changes, and substantiate compliance with organizational and regulatory standards (mandates). It is critical here to manage the signal-to-noise ratio by effectively determining what will be monitored and when alerts will fire — without careful tuning the system can stream alerts nonstop.
In the final phase you take action (determine what action to take, if any) by validating the alert and escalating as needed. As with all our process models, not all these activities will work or fit in your environment. We publish these maps to provide ideas about what you’ll need to do — they always require customization to your needs.
The Challenge of Full Visibility
As we mentioned above, the key challenge in CSM is classifying assets, but your ability to do so is directly related to the visibility of your environment. You cannot monitor or protect devices you don’t know about. So the key enabler for this entire CSM concept is an understanding of your network topology and the devices that connect to your networks.
By continuously analyzing your attack surface, the goal is to avoid an “oh crap” moment, when a bunch of unknown devices and/or applications show up — and you have no idea what they are, what they have access to, or whether they are steaming piles of malware.
There are a number of discovery techniques, including actively scanning your entire address space for devices and profiling what you find. That works well enough and is how most vulnerability management offerings handle discovery, so active discovery is one requirement.
But scanning a full address space can have a substantial network impact, and isn’t appropriate during peak traffic times. Be sure to search both your IPv4 and IPv6 address spaces. You don’t have IPv6, you say? You will need to confirm that — many devices have IPv6 turned on by default, broadcasting those addresses to potential attackers.
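To make the active-discovery idea concrete, here is a minimal sketch. The `discover` and `tcp_probe` functions are our own illustrative names, and a single TCP connect probe is a crude stand-in for the many techniques real scanners use — but because the probe is pluggable, the same sweep logic covers IPv4 and IPv6 ranges alike.

```python
import ipaddress
import socket
from typing import Callable, Iterable, List

def discover(networks: Iterable[str],
             probe: Callable[[str], bool]) -> List[str]:
    """Sweep each network range and return addresses that answer the probe.
    Works for both IPv4 and IPv6 networks via the ipaddress module."""
    found = []
    for net in networks:
        for host in ipaddress.ip_network(net).hosts():
            if probe(str(host)):
                found.append(str(host))
    return found

def tcp_probe(addr: str, port: int = 443, timeout: float = 0.5) -> bool:
    """A simple TCP connect probe; real scanners use far more techniques."""
    family = socket.AF_INET6 if ":" in addr else socket.AF_INET
    try:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((addr, port)) == 0
    except OSError:
        return False
```

In practice you would throttle the sweep and schedule it outside peak traffic windows, for exactly the network-impact reasons noted above.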
You should supplement active discovery with a passive discovery capability that monitors network traffic and, based on what it sees on the wire, identifies new devices, traffic to malicious sites, and unauthorized communications.
Sophisticated passive analysis can also profile devices and identify vulnerabilities, but passive monitoring’s primary goal is to find new unmanaged devices faster, which then triggers a full active scan on identification. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments, which block active discovery and vulnerability scanning.
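The passive-discovery workflow described above can be sketched as a small state machine. This is an illustrative skeleton under our own assumptions — a real tool would parse live packets from a tap or span port, not tuples — but it captures the essential behavior: flag any source address never seen before, which in turn would trigger a full active scan.

```python
from typing import Set, Tuple

class PassiveDiscovery:
    """Minimal sketch: watch (source IP, destination IP) pairs observed on
    the network and flag source addresses never seen before. Flagged
    addresses would then be handed to the active scanner for a full
    assessment."""

    def __init__(self, known: Set[str]):
        self.known = set(known)          # inventory of managed devices
        self.new_devices: Set[str] = set()

    def observe(self, flow: Tuple[str, str]) -> bool:
        """Return True when the flow's source is an unknown device."""
        src, _dst = flow
        if src not in self.known:
            self.known.add(src)
            self.new_devices.add(src)
            return True   # unknown device: schedule an active scan
        return False
```

Because this approach only listens, it can see devices on protected segments that would block an inbound active scan, which is exactly the gap passive discovery fills.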
It is also important to visualize your network topology — a drill-down map is worth a million words. Being able to isolate a device, understand where it fits in your topology, and drill down into previous assessments, dramatically accelerates the process of discovering the root cause of issues during the validation and escalation phases.
Complicating factors for discovery include cloud computing and mobility. With the lack of control and visibility over devices outside the cozy confines of your network perimeter, figuring out which devices have access to critical data stores is increasingly difficult.
Cloud computing provides the ability to spin up and take down instances at will without human involvement — perhaps outside your data center. This clearly impacts visibility, so your discovery processes need to be integrated with your cloud consoles to ensure you know about and can assess newly-minted instances, applications and services.
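One way to integrate discovery with a cloud console is a periodic reconciliation pass. The sketch below is hypothetical — `fetch_instances` stands in for whatever inventory call your cloud provider exposes (for example, an EC2 describe-instances query), and the bucket names are our own — but the diffing logic is the heart of keeping monitoring in sync with instances that spin up and down without human involvement.

```python
from typing import Callable, Dict, Set

def reconcile(fetch_instances: Callable[[], Set[str]],
              monitored: Set[str]) -> Dict[str, Set[str]]:
    """One pass of a reconciliation loop: pull the current instance IDs
    from the cloud console and report which need to be enrolled in
    monitoring and which monitored entries have been torn down."""
    live = fetch_instances()
    return {
        "enroll": live - monitored,   # new instances to classify and monitor
        "retire": monitored - live,   # instances gone since the last poll
    }
```

Run on a tight interval (or driven by cloud-provider event notifications where available), this keeps newly-minted instances from sitting outside your visibility for long.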
Similarly, intelligent mobile devices with access to critical enterprise data create easy targets for attackers probing your network. So mobile devices need to be assessed on connection using network security controls to ensure they have an adequate security posture and access only to authorized data.
In the next article in the CSM series, we will examine how identifying your critical assets and monitoring them continuously is a key success factor for your security program… Stay Tuned!
Editor’s Note: This post is part of a series of excerpts from the Continuous Security Monitoring whitepaper developed by Mike Rothman of Securosis, which was developed independently and objectively using the Securosis Totally Transparent Research process. The entire paper is available here.
About the Author: Securosis Analyst/President Mike Rothman’s bold perspectives and irreverent style are invaluable as companies determine effective strategies to grapple with the dynamic security threatscape. Mike specializes in the sexy aspects of security — such as protecting networks and endpoints, security management, and compliance. Mike is one of the most sought-after speakers and commentators in the security business, and brings a deep background in information security. After 20 years in and around security, he’s one of the guys who “knows where the bodies are buried” in the space. Mike published The Pragmatic CSO in 2007 to introduce technically oriented security professionals to the nuances of what is required to be a senior security professional. He can be reached at mrothman (at) securosis (dot) com.