
In recent weeks I have been struck again by the contradictions in our industry.

Two striking recent examples have one thing in common: risk. There is no consensus on the real level of cybersecurity risk we are living with, because data sets and ways to calculate risk vary from firm to firm.

Consider this hypothetical example by Jason Spaltro (Sony’s then executive director of information security) in a 2007 article:

A company relies on legacy systems to store and manage credit card transactions for its customers. The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss.”

Ari Schwartz, a privacy expert at the Center for Democracy and Technology, thought that short-sighted:
“The cost of notification is only a small part of the potential cost to a company. Damage to the corporate brand can be significant. And if the FTC rules that the company was in any way negligent, it could face multimillion-dollar fines.”
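To see Schwartz’s point in numbers, here is a back-of-the-envelope expected-loss comparison. Every figure below is invented purely to show the arithmetic; none comes from Sony or the original article:

```python
# Hypothetical figures only - illustrating why notification cost alone
# understates the exposure behind Spaltro's trade-off.

annual_breach_probability = 0.10   # assumed 10% chance of a breach per year
hardening_cost = 10_000_000        # one-off cost to fix the legacy systems

# Narrow view: breach cost is just the notification cost
notification_cost = 1_000_000
narrow_expected_annual_loss = annual_breach_probability * notification_cost

# Broader view: add (hypothetical) brand damage and regulatory fines
brand_damage = 50_000_000
regulatory_fines = 20_000_000
broad_expected_annual_loss = annual_breach_probability * (
    notification_cost + brand_damage + regulatory_fines
)

print(f"Narrow view:  expected loss ~${narrow_expected_annual_loss:,.0f}/yr")
print(f"Broader view: expected loss ~${broad_expected_annual_loss:,.0f}/yr")
print(f"Years for the ${hardening_cost:,.0f} control to pay back "
      f"under the broader view: {hardening_cost / broad_expected_annual_loss:.1f}")
```

On the narrow view the $10 million spend never pays back; include the wider impact and it pays back in well under two years. Which answer you get depends entirely on what you count as loss, and how confident you are in the probability.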

So how do you avoid the situation Sony currently finds itself in? How do you accurately assess potential loss, and the likelihood of that loss occurring, to make good security decisions? The holy grail: unpicking the web of data so you can insure against the risks that are uneconomic to mitigate and fix the rest.

Unfortunately, cybersecurity risk is still almost impossible to quantify reliably. Some vendors and consultants claim it is easy to achieve – their way. But there is one hard reality almost all businesses still live with:

Usually amber (and often immovable) cybersecurity bubbles on exec risk profiles that are of little practical use, because a guesstimate, plus or minus extra security, still equals a guesstimate.

This isn’t an attack on industry colleagues; it’s a simple statement of fact. You can’t accurately assess these risks with the limited data available.

That brings me to the reason for writing this. There is now light at the end of the tunnel. Someone has bitten the bullet and intelligently modeled rafts of risk data, finally providing a realistic and sustainable way to scale, cost and compare security risks.

But first:

How did we get into this mess?

1. Strategic Risk Management

The reports your board gets to see on risk are typically more or less informed guesstimates, sitting up in the ether with titles like “data loss”, “cyber attack” or “major IT outage” (probability on the x axis and monetary impact on the y), added to and amended during top-down risk assessment sessions. Approaches like FAIR produce a more accurate and realistically quantified high-level picture by carefully honing local risk data. Typical finger-in-the-air discussions produce… something.
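FAIR itself defines a much fuller taxonomy; the sketch below only captures the underlying idea – estimate loss event frequency and loss magnitude as ranges and simulate, rather than plotting one guessed dot on a matrix. The ranges and the uniform distributions are invented simplifications for illustration:

```python
# Minimal sketch of a range-based (FAIR-style) estimate versus a single
# guessed dot on a probability/impact matrix. Ranges are invented and the
# uniform distributions are a deliberate simplification.
import random

def simulate_annual_loss(freq_range, loss_range, trials=10_000):
    """Monte Carlo over calibrated ranges rather than one point estimate."""
    totals = []
    for _ in range(trials):
        events_per_year = random.uniform(*freq_range)   # loss event frequency
        loss_per_event = random.uniform(*loss_range)    # loss magnitude
        totals.append(events_per_year * loss_per_event)
    return sorted(totals)

# "Data loss" modelled as 0.1-0.5 events/year costing $200k-$2m each
losses = simulate_annual_loss((0.1, 0.5), (200_000, 2_000_000))
print(f"Median annual loss: ~${losses[len(losses) // 2]:,.0f}")
print(f"90th percentile:    ~${losses[int(len(losses) * 0.9)]:,.0f}")
# A distribution with percentiles gives a board something to act on;
# an amber bubble labelled "data loss" does not.
```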

That top-down view is twinned with a report on control compliance, the latest picture from audit, and your latest IT/security problem and incident summary. How do these things fit together to guide SMART actions? Short answer – they usually don’t.

2. Operational level risk management

On the ground floor, folk are beavering away. Potential and actual issues come in via the helpdesk, IT function, security function, data protection team, change assurance team, supplier assurance team, local risk team and audit folk, then get filtered upwards and consolidated in various ways. Is there a centralised, consistent way of categorising, assessing and recording these? Probably not.

3. Joining things together

Unless EVERYTHING is assessed in a standard way against a standard impact and probability scale, you almost certainly have a disjointed view of high versus low level risks. Few have risk benchmarks tested against real-world data on incident likelihood and impact. I’m willing to bet no-one has direct line of sight between new operational level security risks (missing patch, lost device, weak password) and how they affect exec risk profiles. Why? Because there is no linear connection.

All threat and vulnerability pairs have a large range of variables that can influence the level of risk they pose. The simplest way to illustrate that is by thinking about defense in depth. Take a weak password as an example:

  • Endpoint – weak logon password
  • Password expiry – frequent
  • Password used for single sign on to other applications/devices – no
  • User access to confidential data – limited
  • Physical access control – strong
  • Network security surrounding user database – strong

When you consider that kind of layered information in the context of all the risks you currently manage, it represents a Herculean task. It also leaves out the threat side of the equation. The result – you can’t show how risk mitigation activity at control level does or does not improve the top-level risk position.
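To make that concrete, here is a deliberately crude sketch of how those layers might combine for the weak-password example above. The multiplicative model and every factor value are assumptions, chosen only to show why the relationship isn’t linear:

```python
# Illustrative only: a toy model of how surrounding controls change what a
# single weak password is actually worth to an attacker. Every number here
# is an assumption, not calibrated data.

base_compromise_likelihood = 0.30   # chance the weak password is cracked or guessed

# Each surrounding control scales the chance that a cracked password
# actually turns into a material loss.
control_factors = {
    "frequent password expiry":                          0.7,
    "no single sign-on reuse":                           0.5,
    "limited access to confidential data":               0.4,
    "strong physical access control":                    0.9,
    "strong network security around the user database":  0.5,
}

likelihood_of_material_loss = base_compromise_likelihood
for factor in control_factors.values():
    likelihood_of_material_loss *= factor

print(f"Raw likelihood of compromise: {base_compromise_likelihood:.0%}")
print(f"Likelihood of material loss:  {likelihood_of_material_loss:.1%}")
# The same weak password behind weak surrounding controls scores far higher,
# which is why a flat "weak password = medium risk" mapping breaks down.
```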

No wonder many firms fall back on complying with minimum regulatory standards as their security benchmark. Something that will serve us less and less well in the rapidly changing cyber threat landscape.

4. The aggregation challenge

Aggregation of risks depends entirely on how, and how consistently, they are calculated at the lowest level. Low, Medium and High. Remote, Likely and Probable. Green, Amber and Red. Ad Hoc, Repeatable, Mature. None of these things scale. Even numerical models based on estimated Annual Loss Expectancy have a degree of subjectivity that often makes a nonsense of aggregated numbers. So all is lost, right? No. The approach I alluded to above offers light at the end of this tunnel.
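Before getting to it, a quick illustration of that subjectivity – two assessors, the same three risks, and inputs each could defend (all figures invented):

```python
# ALE = ARO (annualised rate of occurrence) x SLE (single loss expectancy).
# Two assessors, the same register, modestly different subjective inputs -
# and the aggregated totals diverge wildly. All figures invented.

risk_register = [
    # (risk,               assessor A (ARO, SLE),  assessor B (ARO, SLE))
    ("lost laptop",        (2.0, 50_000),          (4.0, 150_000)),
    ("web app breach",     (0.2, 500_000),         (0.5, 2_000_000)),
    ("insider data theft", (0.1, 1_000_000),       (0.3, 3_000_000)),
]

total_a = sum(aro * sle for _, (aro, sle), _ in risk_register)
total_b = sum(aro * sle for _, _, (aro, sle) in risk_register)

print(f"Assessor A total ALE: ${total_a:,.0f}")   # $300,000
print(f"Assessor B total ALE: ${total_b:,.0f}")   # $2,500,000
# Both totals look precise; aggregation just amplifies whatever subjectivity
# went into each line of the register.
```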

A new approach to risk

Assessing security-related risks is, at heart, no different from assessing any other insurance risk. Car insurers depend on massive stores of data on the type, magnitude and frequency of all the events that can contribute to a given incident, and the resulting costs. They build complex actuarial models mapping the interaction of all of these factors. When you input your personal characteristics, driving history and car details, out pops your estimated likelihood and cost of an accident and the premium you owe to insure yourself against it. This is what our industry needs.
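Translated into cyber terms, the shape of that calculation might look something like the sketch below. The base rates and rating factors are invented; in a real actuarial model they would be derived from large loss datasets:

```python
# A toy insurance-style rating calculation in cyber terms. The shape is what
# matters: firm characteristics adjust a modelled incident frequency and
# severity, and their product is the expected loss a premium (or a security
# budget) hangs off. All numbers are invented.

base_frequency = 0.05     # baseline incidents per firm per year
base_severity = 250_000   # baseline cost per incident

frequency_factors = {     # characteristics that change how often incidents occur
    "more than 1,000 employees": 1.8,
    "mature patching process":   0.6,
}
severity_factors = {      # characteristics that change how much incidents cost
    "holds payment card data":   2.5,
}

frequency = base_frequency
for f in frequency_factors.values():
    frequency *= f

severity = base_severity
for s in severity_factors.values():
    severity *= s

expected_annual_loss = frequency * severity
print(f"Modelled incidents/year: {frequency:.3f}")
print(f"Modelled cost/incident:  ${severity:,.0f}")
print(f"Expected annual loss:    ${expected_annual_loss:,.0f}")
```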

Cyber insurers can’t help you with this. They too are casting around for the inputs needed to model cyber risks. Even as inputs start to become available, they are unsure about the interactions between various risk factors and how to integrate those into the bigger risk picture.

To quote a recent Insurance Age article by Ida Axling:

“There seems to be a level of insecurity among brokers when it comes to cyber insurance. When I was speaking to people in the market, questions like “what does it actually mean?” and “to what extent are companies exposed to cyber risk?” came up.”

“It is a bit of a vicious circle. Insurers are not sure of what risks they need to cover, which in turn makes brokers unsure of what exactly they are selling.”

I know of only one firm working on this and having any substantial success. The results are fascinating. They are using advanced statistical analysis to model current threat and aggregated global incident data (statutory notification and improved incident investigation by regulators have made this possible).

Guess what – risks you think are headline priorities are not looking so big. There’s also a strong correlation emerging between sensitivity/quantity of data, employee numbers and financial impact of incidents. Not sexy, but sound and incredibly useful to finally put some shape around real cyber risks.

What does an accurately modeled cybersecurity risk future look like?

Not only does an actuarial model offer some clarity about what everyone needs to worry about most, it can also offer a version personalized to your firm:

  • Should you spend on a threat intelligence solution or on better internal data governance processes? Which, per dollar, has the most or the most immediate impact on the local level of cybersecurity risk?
  • What should you do with the outputs of your vulnerability management tool? Which highlighted issues are and are not a priority to fix, based on industry context and the potential cost of local exposure?
  • How likely is it that a device with sensitive data on it will be stolen? What, in an industry and local context, is the real risk and the costed justification for spend on mitigation? Even more specifically, what is that risk on a per-division or per-department basis?
  • You are drowning under regulatory security requirements and can’t assess everything. Which services or systems carry the most inherent risk and are, according to locally adjusted industry patterns, most targeted?
  • What is the cumulative impact of sensitive data theft? How do you put together and scale the financial, operational, reputational and regulatory fallout?
  • After purchasing your security solution or patching x number of vulnerabilities, by how much has your local risk reduced?

All of these questions can be answered with dramatically improved accuracy using this kind of deep and broad cybersecurity risk model.
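Taking the first question as an example, the decision might eventually look something like this. The option names, costs and modelled loss figures are purely hypothetical:

```python
# Hypothetical comparison of two spend options, assuming a model that can
# estimate the local expected annual loss before and after each change.

current_expected_annual_loss = 4_000_000   # modelled position before any change

options = [
    # (option,                        cost,     modelled expected annual loss after)
    ("threat intelligence platform",  600_000,  3_700_000),
    ("data governance improvements",  400_000,  3_100_000),
]

for name, cost, residual in options:
    reduction = current_expected_annual_loss - residual
    print(f"{name}: ${reduction:,.0f} reduction in expected annual loss, "
          f"{reduction / cost:.1f}x per dollar spent")
# On these invented numbers the data governance spend wins comfortably per
# dollar - the kind of comparison an amber bubble can never support.
```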

It’s incredibly exciting for risk professionals (as an experienced cybersecurity professional, I was beginning to doubt there was an answer to our risk management problems), but it is not exciting investors. Why? Because it doesn’t have the FUD-fueled marketing power of an animated cyber threat map or something labeled IoT. A travesty, because those tools, without sound risk assessment, provide little to no value-add.

I’m not suggesting people bin current risk tools. We will always need a means to tame the information gathered, but we shouldn’t settle for poor-quality data. Market-leading risk applications are mainly just databases with nice interfaces, built-in calculations and nifty reports. In other words, if garbage goes in, garbage will come out. It’s time to invest in reality and finally give your business a fighting chance to make – and keep making – cost-effective decisions about cybersecurity.

 

About the Author: Sarah Clarke is a security Governance, Risk and Compliance specialist with 14 years’ hands-on experience in IT and InfoSec. After a business degree she took a stop-gap IT helpdesk job and never looked back, gaining invaluable experience along the way in desktop engineering, network management, network security, compliance management, change and vendor security assurance, and enterprise security risk management.

She is passionate about bringing clarity and common sense to the industry. Both her award-nominated blog and her articles for trade magazines are popular because they do exactly that. Having moved on from a number of years working in financial services, she now owns the Infospectives security GRC consultancy (specializing in embedding risk into security audit and assurance activity), serves as a founding advisory board member for the GiveADay charity initiative, and is a regular contributor to and staunch supporter of The Analogies Project.

Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.

  • Khürt

    "I know of only one firm working on this and having any substantial success. The results are fascinating. "

    I would be fascinated to know what firm that is.

  • The name is conspicuously missing for IP protection reasons, but they are going to publish some details soon. I'll be sharing that when it happens. For the avoidance of any doubt, I have no monetary interest in the solution, just hope that this is what it looks like – a step change in the quality of data we can bring to the boardroom when discussing security risks and priorities for mitigation. – Sarah Clarke

  • Tom

    How about an update? When is this product coming to market?

  • Check out vivosecurity.com and what they're up to. I fully acknowledge I don't have universal knowledge of firms researching cybersecurity risk out there, but this is a solution driven pretty purely by real information, seriously advanced statistical modelling and a rational allowance for bias (in statistical modelling and cybersecurity terms). It is not a security firm starting from a set of product-validating assumptions.