
Security is a complex, often nuanced, topic. Today there is a lot of subjectivity in purely security-oriented discussions. Business people like non-squidgy, objective numbers. To make security investment decisions, security people have to sell their area to the business, which means speaking its language. As a consequence, security people often end up manufacturing objective-looking numbers that really reflect subjective perceptions.

A specific example of how this happens is in prioritizing vulnerabilities. Every company faces decisions about which vulnerabilities it is willing to spend money to address versus accept the risk of leaving unaddressed. The common standard for making that call is the Common Vulnerability Scoring System (CVSS). In fact, NIST publishes a calculator (tied to the National Vulnerability Database) to help users arrive at that final number. The outcome of applying CVSS is a single, presumably objective, score. Or is it?

If you dig into the actual inputs to the scoring system, there are areas where we rely on "expert opinion" to drive the score. By definition, expert opinion is subjective – it's opinion! A specific example is the organization-specific potential for loss (the Collateral Damage Potential modifier). Suppose there is a vulnerability that might allow remote access by unauthenticated users on server platforms. What is the corporate potential for loss? Well, we know it affects server platforms, and in most organizations the majority of systems are clients. So it won't score high on how many systems it impacts out of the overall organization. It has high exploitability, which could have high impact. But the key word in that sentence is "could".
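To make this concrete, here is a minimal sketch of the CVSS v2 environmental equation, using the modifier weights from the published specification. It shows how two experts, scoring the same vulnerability, can land on quite different final numbers purely from their subjective reads of Collateral Damage Potential and Target Distribution. The adjusted temporal score of 7.5 is an assumed input for the hypothetical remote-access vulnerability above, not a value from any real advisory.

```python
# CVSS v2 modifier weights, as published in the specification.
CDP = {"None": 0.0, "Low": 0.1, "Low-Medium": 0.3, "Medium-High": 0.4, "High": 0.5}
TD = {"None": 0.0, "Low": 0.25, "Medium": 0.75, "High": 1.0}


def environmental_score(adjusted_temporal: float, cdp: str, td: str) -> float:
    """CVSS v2: round((AT + (10 - AT) * CDP) * TD) to one decimal place."""
    score = (adjusted_temporal + (10 - adjusted_temporal) * CDP[cdp]) * TD[td]
    return round(score, 1)


# Same vulnerability, same adjusted temporal score (assumed 7.5), two opinions:
cautious = environmental_score(7.5, "Low", "Medium")       # expert A's judgment
alarmed = environmental_score(7.5, "Medium-High", "High")  # expert B's judgment
print(cautious, alarmed)  # 5.8 8.5
```

The spread between those two outputs comes entirely from the subjective modifier choices; the "objective" inputs were identical.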

All the modifier sections are about enabling the security person to walk through what "could" happen. Mentally, they are asking: what is the probability this could happen? If this vulnerability, with its high exploitability, were exploited, what is the range of possible impacts, from best case to worst? Which is most likely? Does the attack turn into a breach at some point? If it did happen, what is the range of loss, and what is most probable? All of these are subjective questions, and they drive the modifier scores that feed the overall score.

Historically, there have been three key pillars of security: Confidentiality, Integrity, and Availability. Those are reflected in the CVSS calculator. Yet even those are often assessed qualitatively. So at the end of the day, while the business uses the number to rank the priority of addressing vulnerabilities, those numbers were driven entirely by subjective expert opinion.
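You can see those qualitative pillars entering the arithmetic directly. Below is a sketch of the CVSS v2 base equation, with the Confidentiality, Integrity, and Availability ratings (None/Partial/Complete) mapped to the fixed weights from the specification – an analyst's qualitative judgment becomes a number the moment a rating is chosen.

```python
# CVSS v2 base-equation weights, as published in the specification.
IMPACT = {"None": 0.0, "Partial": 0.275, "Complete": 0.660}  # C, I, A ratings
ACCESS_VECTOR = {"Local": 0.395, "Adjacent": 0.646, "Network": 1.0}
ACCESS_COMPLEXITY = {"High": 0.35, "Medium": 0.61, "Low": 0.71}
AUTHENTICATION = {"Multiple": 0.45, "Single": 0.56, "None": 0.704}


def base_score(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    """CVSS v2 base score from qualitative metric ratings."""
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 1.176 if impact else 0.0  # score is zeroed when there is no impact
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)


# A network-reachable, low-complexity, no-auth vulnerability with
# partial impact to all three pillars:
print(base_score("Network", "Low", "None", "Partial", "Partial", "Partial"))  # 7.5
```

Note that choosing "Partial" versus "Complete" for each pillar is itself a judgment call, so the subjectivity is baked in well before any modifiers are applied.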

This is an opportunity for all of us. For the security industry as a whole, continued investment in finding ways to create objective measurements remains important. Inside organizations, the opportunity is to surface a valuable conversation. What set of assumptions underlies the expert opinion? Can we achieve organizational consensus on the viewpoints that drive that opinion? What does it mean when real-world impacts diverge from the plan? How can we treat that as a positive experience that helps us build better data models?

If you are having these more nuanced conversations, what tips and tricks can you share? How has it changed your approach to prioritizing and funding security? I'd love to hear, and I'm sure others would benefit as well.