In the business of security, one of the most common challenges is that it’s hard to know where you stand, let alone compare today’s position to yesterday’s. Just as we learn how to measure, manage and address one type of problem (say, malware), another appears (say, botnets). On top of that, we want to know that we’re improving the security posture of our teams, business units and companies over time, and to do that we need some kind of baseline. We’ve all been taught that what we measure is what we manage, so we start building up numbers. And this is where things can go horribly wrong.
In fact, the real questions we’re trying to answer should be the ones that continuously drive and evolve the methods we use internally to assign our chosen quantitative risk numbers:
- How secure am I?
- Am I better off this year than I was last year?
- Am I spending the right $$$?
- How do I compare to my peers?
We all know that it’s easier to “measure” goodness or badness when it’s expressed as a number, but like so many other things in security, the number is an attempt to put a quantitative label on a qualitative sense that is usually negotiated internally. So while we often talk about how Risk = Threat × Vulnerability × Expected Loss, and rely on things like CVSS to produce a vulnerability score so that this formula yields some sort of absolute number, most of the really important conversations are relegated to subtext in this equation.
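To make the subtext concrete, here is a minimal sketch of that textbook multiplicative model. All of the inputs are hypothetical: threat likelihood and vulnerability are treated as probabilities in [0, 1], and expected loss is in dollars — exactly the kind of numbers that look precise but hide negotiated, qualitative judgments.

```python
# Toy illustration of the textbook formula Risk = Threat * Vulnerability * Expected Loss.
# Every input below is a made-up, negotiated estimate, not an objective measurement.

def risk_score(threat_likelihood: float, vulnerability: float, expected_loss: float) -> float:
    """Return a dollar-risk figure under the naive multiplicative model."""
    return threat_likelihood * vulnerability * expected_loss

# e.g. a 30% chance of being targeted, a 50% chance the attack succeeds
# (say, loosely derived from a CVSS-style score), and a $200,000 loss if it does:
print(risk_score(0.30, 0.50, 200_000))  # 30000.0
```

The output looks authoritative, but each factor is where the “important conversations” actually live.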
A key point to remember is that metrics are like statistics: they’re only indicators, and they can be used wrongly or misleadingly.
Example: if I’m a software company trying to prioritize my limited resources to secure the most important things, of course I’m going to want some kind of scoring to separate the wheat from the chaff. Like every other company, I want good metrics; I want them to be cheap to gather, with a specific unit of measurement expressed as a number, so I can plug them into any organizational calculations I want to make. Hopefully, as we go on, the conversation about what I value as a company will drive more, and better, metrics.
Maybe I can use some nifty devices in IT to show that we see about 100 malware infections a month, so I can rate how likely it is that one of those will be successful. This fits my criteria: AV products track infection rates over time, and do so automatically. (For this and any other numbers in this article, I’m picking nice round numbers; don’t read into them!)

Maybe I’ll use annual loss expectancy (ALE), because I like that model better than FAIR, OCTAVE or others, and use those 100 malware infections I found earlier to calculate the probability of loss.
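A quick sketch of what that ALE calculation might look like, using the standard ALE = SLE × ARO decomposition. The success rate and per-incident cost below are invented for illustration; only the 100-infections-a-month figure comes from the scenario above.

```python
# Hypothetical ALE (Annual Loss Expectancy) calculation: ALE = SLE * ARO,
# where SLE (Single Loss Expectancy) is the dollar cost of one successful
# incident and ARO (Annualized Rate of Occurrence) is how many such
# incidents we expect per year. All constants are assumptions.

MONTHLY_INFECTIONS = 100   # from the AV telemetry in the scenario above
SUCCESS_RATE = 0.01        # assume 1% of infections lead to real loss
SLE = 5_000                # assumed cleanup + downtime cost per incident

aro = MONTHLY_INFECTIONS * 12 * SUCCESS_RATE   # expected loss events per year
ale = SLE * aro
print(f"ARO = {aro:.0f} events/year, ALE = ${ale:,.0f}")  # ARO = 12 events/year, ALE = $60,000
```

Note how every dollar of that output depends on the two assumed constants, which is exactly the coupling problem discussed next.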
However, now I’m measuring loss in dollars using ALE, and I’ve coupled it to one specific threat source, malware, which we all know is far from the sum of security. Not only am I at risk of using malware or AV coverage as a proxy for security as a whole, I have not necessarily captured less likely, but possibly more costly, events. Also, as a security software company, is malware or AV coverage symptomatic of the most important risks in my industry? No, but it’s still better than nothing.
Also, in all this risk management, there’s variability we can’t capture. To use a single example (there are many more), I said I work in the security software industry. The statistical probability of a hacktivist group breaching my organization and sharing all my source code is low, which means the odds are it won’t rank high in my list of quantified risks. But if it did happen, the impact would likely be huge. Worse than that, even if I could account for those one-off situations, there’s a non-negligible impact I might incur from the way my organization reacts to that one-off event. Do we have an incident plan in place? Do we execute on it well? Does the market agree that we handled the situation well? All of those could have a real impact (secondary loss) on the cost of the event, which it is unlikely I could ever have captured quantitatively.
Does that mean the whole exercise was futile? Absolutely not! The most important thing in measuring and talking about risk is to get started. Every single risk that you identify, discuss, build a common understanding of and quantify is a new piece of data, knowledge and approach that builds a stronger security framework internally. Maybe our hacktivist item didn’t rise up the list because of its low probability (what we could call an unsystematic risk), but if we updated our incident plan to accommodate that possibility, and if we have someone read up on the current targets of the hacktivist community, then we have actually improved our security posture, subtly but surely. That’s the heart of actually doing risk management: deliberately taking action to move the odds more in your favor.
The other danger in starting to measure is that there is so very much data to get lost in, much of which may or may not have any real impact on our risk tolerance as an organization. This is both an easy problem and a hard problem. If the purpose of the numbers we are gathering is simply to help answer the questions I posed at the top, then any numbers that don’t relate to decision making and supporting those questions are irrelevant. Or are they? Sometimes it is actually useful to run a different perspective over a set of numbers to validate your understanding of the original numbers, or even to test that a control is working as well as you think it is. (If all my antivirus scans say I’m 100% malware free, that’s great. If my AV is only on 30% of the time, that 100% has a lot less value given the additional context.) This brings up a fifth question I think is important for any metric you plan to measure: “So what?”
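The AV example above can be turned into a crude sanity check: weight the headline number by the coverage of the control that produced it. The function name and inputs are hypothetical, but the arithmetic is the “so what?” test in miniature.

```python
# Contextualizing a headline metric with the coverage of the control behind it.
# A "100% clean" scan result only vouches for the portion of the environment
# (or of the time) the scanner actually covered. All inputs are hypothetical.

def effective_assurance(reported_clean_rate: float, coverage: float) -> float:
    """Fraction of the environment we can actually vouch for as clean."""
    return reported_clean_rate * coverage

print(effective_assurance(1.00, 0.30))  # 0.3 -- "100% clean" only speaks for 30% of the fleet
```

It’s a deliberately simple second perspective, but it’s often enough to expose a metric that looks better than it is.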
No metric, either alone or in a herd, will be successful without a narrative: a story that tells what the numbers are, what the impact would be and why that particular audience should be invested in it. That narrative should be clear, simple, to the point, and ideally illustrated. So, what should you get out of this blog post on measuring? Measuring is good, measuring to answer your important questions is better, and recognizing that the act of measuring can itself serve as a form of security control that helps your organization improve over time is best of all.
If you have some stories to share about how metrics helped or harmed you, I’d love to hear them; drop me a line. And if you have some good resources that helped you avoid any pitfalls, I’m sure others would love to hear about them too!