The 2017 Ponemon Institute Cost of a Data Breach Study found that “the cost of a data breach is going down, but the size of a data breach is going up.” Additional key findings included the following:
- The average total cost of a data breach decreased from $4.00 to $3.62 million.
- The average cost for each lost or stolen record containing sensitive and confidential information also decreased from $158 in 2016 to $141. (The strong USD played a role in reducing the costs.)
- The average size of the data breaches investigated in the research increased 1.8 percent.
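For readers who like to see the numbers side by side, here is a quick sketch of the year-over-year changes implied by the figures above. The dollar figures come from the study as quoted; the percentage calculations are our own illustration:

```python
# Year-over-year changes from the 2017 Ponemon study figures quoted above.
total_2016, total_2017 = 4.00, 3.62          # average total breach cost, $ millions
per_record_2016, per_record_2017 = 158, 141  # average cost per lost/stolen record, $

# Percentage decreases, computed the usual way: (old - new) / old * 100
total_drop = (total_2016 - total_2017) / total_2016 * 100
per_record_drop = (per_record_2016 - per_record_2017) / per_record_2016 * 100

print(f"Average total cost fell about {total_drop:.1f}%")        # ~9.5%
print(f"Per-record cost fell about {per_record_drop:.1f}%")      # ~10.8%
```

Note the tension the study itself flags: per-incident and per-record costs fell, while the average breach size grew 1.8 percent.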
Okay, so what does all that mean? Good news? Bad news? Mixed news? Well, we think it is incomplete news, and here are two main reasons why:
- The usual limitations of these types of studies – Ponemon and IBM Security do absolutely amazing work, but all studies of this type have inherent limitations. Therefore, using them as a baseline for an industry, or even a country, is ill-advised.
- Studies like this are absolutely no good at predicting the future – More specifically, they tell us little about what could happen during “fat-tail” events (more on that below).
When we were approached to write on this issue, some of the largest cyber breaches came to mind: Anthem, Target, and Equifax, to name a few. And yes, we found that there are some indicators out there (like stock price or recovery and incident fees) that can give you a partial picture of what the actual “cost” of the breach was.
But all of these factors, to us, are just a portion of what the actual costs of a breach are. And here’s the kicker: depending on the situation, these costs could make up a large portion of your total costs, or they could end up being just a fraction.
So, if that is the case, is there another way we can measure the costs of a cyber breach?
One of the best authorities on “risk” is Nassim Nicholas Taleb. You may have heard the name from books like The Black Swan. We use “risk” in quotations because Taleb is quite critical of how risk management is practiced today. His exact words in the prologue of Antifragile are:
"Risk management as practiced is the study of an event taking place in the future, and only some economists and other lunatics can claim—against experience—to 'measure' the future incidence of these rare events, with suckers listening to them—against experience and the track record of such claims."
Before we jump in, a quick word on the meaning of “antifragile”: according to Taleb, it differs from resilience and robustness in that the “resilient resists shocks and stays the same; the antifragile gets better.”
We like the antifragile concept for two main reasons. First, when it comes to cybersecurity, what concerns people like us are these low-probability/high-impact events, sometimes called “fat-tail” events, that are difficult to account for and even harder to predict. Sure, we can say that a spear-phishing campaign could be catastrophic, but identifying which spear-phishing campaign will be the straw that breaks the camel’s back is a whole lot harder, if not impossible.
Second, we like the antifragile concept because it is not only about resisting the breach, but rather, it is also about learning from the breach attempt. We like that, and that’s where we would like all organizations to be when it comes to their cyber posture. (Note: we are giving you the super oversimplified version of the antifragile concept.)
So, if we want to become an “antifragile cyber organization,” where do our concerns lie? Actually, it is not so much with the technical capabilities. We see a lot of investment in the technical space, and there are organizations that are taking it a step further. By using AI, machine learning, and threat intelligence, these companies are doing exactly what antifragility suggests: getting better.
What worries us are the intangibles: human interaction with and dependence on machines, human decision-making (ranging from clicking the wrong link to not patching in time), the wholesale loss of intellectual property, and the massive and increasing expenditure on cybersecurity, which is untenable and unsustainable. In this space, we feel we are actually doing the opposite of getting better; instead, we are getting worse. We are becoming even more fragile.
That’s what makes “calculating” the cost of a cyber breach a near impossible task. It’s a future event impacted by so many variables that not only can we not give value to all these variables, but we almost certainly do not even know what all the variables are!
A network being taken down by failure versus it being taken down by terrorists or a rogue state may, in the most benign sense, have the same “operational” cost, but in terms of actual cost, it could be incredibly different.
To illustrate our point, here are just a few of the factors you have absolutely no control over:
- What else is in the news cycle that day?
- Is a social media mob going to come after you?
- How will the markets react?
- Are any people injured or killed as a result of the cyberattack?
All these unknowns play a role in determining the cost of your breach, and it would be ridiculous to suggest you can make a good estimation of future costs that result from some future event impacted by a series of unknown and immeasurable variables.
We need to shake this notion that we have some “scientific method” for calculating these costs. Why? Because people are bad at calculating risk (as you may have heard us say more than a few times), and that means “playing the odds” is not necessarily a good way to go about your business if you do not understand what goes into “making the odds.”
To recap: we think we are doing a decent job on the “tech side” of the cybersecurity issue; on the “human side” of the issue, not so much. We are still prone to social engineering attacks, and we are still making the same foolish mistakes, like not patching our systems in time (for example, failing to deploy critical patches within 72 hours). Until we start tackling those issues, we will continue to have some serious fragility in the system.
And the greater the fragility in this complex cyber system we live in, the more vulnerable we are to one of these fat-tail events that we won’t be able to see coming yet will try to explain away with “we should have known.”
About the Authors: Paul Ferrillo is counsel in Weil’s Litigation Department, where he focuses on complex securities and business litigation, and internal investigations. He also is part of Weil’s Cybersecurity, Data Privacy & Information Management practice, where he focuses primarily on cybersecurity corporate governance issues, and assists clients with governance, disclosure, and regulatory matters relating to their cybersecurity postures and the regulatory requirements which govern them.
George Platsis has worked in the US, Canada, Asia, and Europe, as a consultant and an educator and is a current member of the SDI Cyber Team. For over 15 years, he has worked with the private, public, and non-profit sectors to address their strategic, operational, and training needs, in the fields of: business development, risk/crisis management, and cultural relations. His current professional efforts focus on human factor vulnerabilities related to cybersecurity, information security, and data security by separating the network and information risk areas.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.