A true zero day, such as the recent vulnerability in Apple's DYLD_PRINT_TO_FILE environment variable that an adware installer is reportedly exploiting in the wild, is called that because it comes without warning: by the time you know about it, you have already been compromised. Zero days are expensive; they are the domain of nation states and the most advanced APTs. Chances are, if you have anything to do with defending a business, you have other things to worry about. A four-year-old vulnerability with a Metasploit module. An AIX box you can't upgrade that's running a critical process. Maybe you don't even know what assets are in a particular office or IP block.
These are real issues. These are high risk, high probability, easy ways to get compromised that you know about, I know about, and chances are your attackers know about.
But in reality, these real issues are not so scary. We can inventory our assets, perform a couple of scans, correlate the data, find out if the vulnerability has an exploit or is being exploited on the internet, and decide if it’s worth fixing today. In other words, we can manage them. It’s hard, but it’s doable.
So why of all things are we talking about zero days today?
Because they're a symbol of the inherent assumptions and failures of our vulnerability management systems. They are Rumsfeldian unknown unknowns. In terms of business risk and business process, they are the impossible.
Imagine tomorrow I handed your best engineer a thumb drive with two mythical unicorns of JSON files on it.
The first unicorn is an entry for every CVE that could possibly come back from a scan. It comes equipped with 100% accurate data about exploit availability, how often and when the vulnerabilities are exploited, and whether each vulnerability is a target of interest for attackers.
Armed with the magical powers of this data, a good engineer could tell you exactly which of your vulnerabilities to fix, what the timelines should be based on what’s being attacked, and what’s okay to ignore. You’d fire off tickets to your dev teams, and, with some luck, in about ten years you’d be done.
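To make this concrete, here is a minimal sketch of what that triage might look like. The feed format, field names, and CVE entries below are hypothetical, invented for illustration; the point is only that with exploit intelligence in hand, ranking your findings becomes a simple sort:

```python
import json

# Hypothetical feed format: each entry carries the CVE id, whether a public
# exploit exists, and a count of exploitations observed in the wild.
feed = json.loads("""
[
  {"cve": "CVE-2014-0160", "exploit_available": true,  "exploitations_observed": 5000},
  {"cve": "CVE-2015-1234", "exploit_available": false, "exploitations_observed": 0},
  {"cve": "CVE-2012-0002", "exploit_available": true,  "exploitations_observed": 120}
]
""")

# CVEs our own scanner actually found (hypothetical results).
our_findings = {"CVE-2014-0160", "CVE-2015-1234"}

def priority(entry):
    # Actively exploited first, then exploit-available, then everything else.
    return (entry["exploitations_observed"] > 0,
            entry["exploit_available"],
            entry["exploitations_observed"])

# Keep only what affects us, ranked from most to least urgent.
worklist = sorted(
    (e for e in feed if e["cve"] in our_findings),
    key=priority,
    reverse=True,
)

for entry in worklist:
    print(entry["cve"])
```

The "what's okay to ignore" decision falls out of the same ordering: anything with no exploit and no observed exploitation sinks to the bottom of the worklist.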
But zero day vulnerabilities expose the assumption baked into our entire security process. We assume we know which vulnerabilities are out there, and when we don't, we wait for a CVE to be published, a scanner to pick it up, and/or our engineers to dig up information on their own.
That’s why the thumb drive has two JSON files on it. The second is a list of every single zero day in the world, everything that’s been discovered and could be used by Russia, the NSA, China, what have you. In fact, some folks already buy this feed.
And here I come to the critical point. You wouldn’t have a clue what to do with it.
What assets are affected? Can you correlate this to your vulnerability scan? There's no CVE signature for your scanner to check, even in an authenticated scan. How do you prioritize between a zero day and a known CVE with a CVSS score of 7?
There's a way to do it; it's just _hard_. More than hard, it requires you to change the speed of your security practice: if you're really good, you remediate at a rate close to or faster than the rate of vulnerability disclosure and discovery. Addressing risks from zero days requires your security practice to be _faster_ than the rate of disclosure. It requires you to do research faster than MITRE or NVD, and to correlate to assets based on imperfect data.
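What does "correlate to assets based on imperfect data" look like when there's no CVE signature to scan for? A rough sketch: match a zero-day report against your software inventory on product name and version strings. Everything here, the advisory, the inventory, and the crude lexical version comparison, is an invented illustration, not a production matcher:

```python
# Hypothetical zero-day report: no CVE, just a product and a version range.
zero_day = {"product": "openssl", "affected_versions": ("1.0.1", "1.0.1f")}

# Imperfect asset inventory: free-text "name version" strings per host.
inventory = [
    {"host": "web-01", "software": "OpenSSL 1.0.1e"},
    {"host": "db-01",  "software": "OpenSSL 1.0.2k"},
    {"host": "app-01", "software": "nginx 1.9.0"},
]

def parse(software):
    """Split a 'Name version' string into a normalized (name, version) pair."""
    name, _, version = software.partition(" ")
    return name.lower(), version

def affected(software, advisory):
    name, version = parse(software)
    lo, hi = advisory["affected_versions"]
    # Crude lexical version compare -- good enough for this sketch only.
    return name == advisory["product"] and lo <= version <= hi

at_risk = [h["host"] for h in inventory if affected(h["software"], zero_day)]
print(at_risk)
```

Real inventories are messier than this (inconsistent naming, missing versions, backported patches), which is exactly why this work is hard and why waiting for a scanner signature is the slow path.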
There are two reasons why it’s hard to deal with zero days, and these two reasons are also, in my opinion, the two most important metrics in vulnerability management.
The first is the speed of your security operations. The faster you are, the easier it is to stay ahead of attackers, the more room you have for error, and the greater the impact of each successful remediation. If you can take a vulnerability from disclosure to remediation quickly, you have a chance of staying ahead of your peers. If you can integrate zero days and move faster than the actual disclosure dates of vulnerabilities, you can stay ahead of actual threats.
The second is the centralization of decision making. If you're looking at one vulnerability scan and making decisions about it, then looking at a second scanner and making the same decisions again, while a third engineer looks at a threat intel feed, raises his hand and shouts "THIS TOO IS IMPORTANT," you are in disarray. Conversely, if all of your intel and vulnerability data is in one place, you can make and measure decisions about overall risk.
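The centralization point can be sketched in a few lines: pull findings from multiple scanners into one deduplicated view, enrich them with intel, and rank them in a single place. The scanner outputs, intel feed, and field names below are hypothetical examples, not any particular product's format:

```python
# Hypothetical findings from two scanners; note the duplicate on web-01.
scanner_a = [{"asset": "web-01", "cve": "CVE-2012-0002"}]
scanner_b = [{"asset": "db-01",  "cve": "CVE-2014-0160"},
             {"asset": "web-01", "cve": "CVE-2012-0002"}]

# Hypothetical threat intel feed, keyed by CVE.
intel = {"CVE-2014-0160": {"actively_exploited": True}}

# Deduplicate on (asset, cve) and enrich each finding with intel,
# so every decision is made against one merged view of the data.
merged = {}
for finding in scanner_a + scanner_b:
    key = (finding["asset"], finding["cve"])
    enrichment = intel.get(finding["cve"], {"actively_exploited": False})
    merged[key] = {**finding, **enrichment}

# One place to make, and later measure, the prioritization decision.
ranked = sorted(merged.values(),
                key=lambda f: f["actively_exploited"],
                reverse=True)
for f in ranked:
    print(f["asset"], f["cve"],
          "ACTIVE" if f["actively_exploited"] else "known")
```

With the duplicate collapsed and the intel attached, "THIS TOO IS IMPORTANT" becomes a sort key instead of a shout across the room.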
About the Author: Michael Roytman (@mroytman) is responsible for building out Kenna’s analytics functionality, and has been selected to speak at BSides, Metricon, SIRACon and more. His work at Kenna focuses on security metrics, risk measurement, and vulnerability management and his work has been published in USENIX. He formerly worked in fraud detection in the finance industry, and holds an M.S. in Operations Research from Georgia Tech. His home in Chicago contains a small fleet of broken-down drones.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.