Risk management has been a hot topic in a number of recent posts, and I’d hate to leave you with the impression that it is somehow a panacea for all security programs and problems. To address that, here’s a post dedicated to a specific wart on the complexion of risk management. Many people are data-driven and want to know that their conclusions are provable all the way down the stack. Their complaint about existing risk management frameworks is that there isn’t much data backing the values those frameworks produce.
Other industries, such as insurance, can refine risk measurements against available data; the results are known as actuarial tables. If, over time, the predicted numbers don’t match the observed numbers year over year, a set of calculations is applied and the risk assessment is “auto-magically” adjusted. This body of measured, statistical data and its relationships (age correlated with gender, sickness, mortality, and cause of death) is what makes possible findings such as the prediction that the next generation is likely to live a shorter lifespan than previous generations. In turn, those tables drive other decisions, such as what it costs to get health insurance as you age, or car insurance if you are a 21-year-old male.
Because as a community we don’t share security information well, we can’t build good statistics about how often something is really true in the population. This means we can’t create the equivalent of an actuarial table: we can’t set a base rate. The only data we have on how often something shows up in the overall population comes from honeypots and/or submissions to vendors. Since we don’t know how often something really happens in the environment, everything we do rests on experts applying common sense. This affects the tools we use in security, such as Intrusion Detection Systems (IDS), as well as the processes by which we rank how risky any given finding is, which in turn is often the basis for our risk methods.
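To make the base-rate problem concrete, here is a small Bayes’ theorem calculation showing why an unknown base rate undermines an IDS. The detection and false-positive rates below are illustrative assumptions, not measured values from any real product.

```python
# A minimal sketch of why the missing base rate matters for an IDS.
# All numbers are illustrative assumptions, not measured values.

def posterior_prob_attack(base_rate: float, tpr: float, fpr: float) -> float:
    """P(attack | alert) via Bayes' theorem.

    base_rate: fraction of events that are actually attacks
    tpr: true-positive rate (probability the IDS alerts on an attack)
    fpr: false-positive rate (probability the IDS alerts on benign traffic)
    """
    p_alert = tpr * base_rate + fpr * (1 - base_rate)
    return (tpr * base_rate) / p_alert


# Suppose the IDS catches 99% of attacks with only a 1% false-positive rate.
tpr, fpr = 0.99, 0.01

# If only 1 event in 10,000 is actually an attack, the vast majority of
# alerts are still false alarms:
p = posterior_prob_attack(0.0001, tpr, fpr)
print(f"P(attack | alert) = {p:.2%}")  # under 1%
```

Without a real base rate from shared data, we can’t even run this calculation honestly; we’re left guessing the most important input.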
Does this mean that risk management is bunk? Absolutely not. It’s still the best method we have for deciding, in a prioritized fashion, which of our various needs will be funded. In addition, the process of evaluating whether people agree on the scores for any given item is foundational to building consensus around what an organization values and will respond to in its environment. Those conversations aren’t measurable, but their impact on the security posture of any organization is priceless.
Does it matter which risk management methodology you use? Sort of. There are different ways of expressing how you measure uncertainty, and that can have a huge impact on the team’s understanding of how “certain” the statistical analysis is. Calibrated probability training can improve experts’ ability to assess odds, which can then be applied to the risk methodology of your choice. Monte Carlo simulation is the gold standard among PMI PMPs because studies show that it consistently outperforms unaided expert estimates.
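For readers who haven’t seen one, a Monte Carlo risk estimate looks roughly like the sketch below. The distributions and parameters are made-up assumptions standing in for calibrated expert inputs; the structure (simulate many years, then read off a percentile) is the point.

```python
# Hedged sketch of Monte Carlo risk estimation. The incident probability
# and loss range are illustrative assumptions, not calibrated data.
import random

random.seed(42)  # fixed seed so the sketch is reproducible


def simulate_annual_loss(trials: int = 100_000) -> list:
    """Simulate many possible years and return total loss for each."""
    losses = []
    for _ in range(trials):
        # Assumed inputs: ~10% chance of an incident in any given month,
        # each incident costing a uniform $10k-$50k.
        incidents = sum(random.random() < 0.10 for _ in range(12))
        loss = sum(random.uniform(10_000, 50_000) for _ in range(incidents))
        losses.append(loss)
    return losses


losses = simulate_annual_loss()
losses.sort()
p90 = losses[int(0.9 * len(losses))]
print(f"90th percentile annual loss: ${p90:,.0f}")
```

Instead of a single point estimate, the output is a distribution, so you can report “90% of simulated years lose less than $X,” which communicates uncertainty far better than one number.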
Can anyone improve the status quo? Absolutely. Participation in any of the standards efforts that improve our information exchanges is vital. There is a LOT of activity in this space right now; here’s just a sampling: Incident Object Description Exchange Format (IODEF), Managed Incident Lightweight Exchange (MILE), Malware Metadata Exchange Format Working Group (MMDEF), Malware Attribute Enumeration and Characterization (MAEC), Cyber Observable eXpression (CybOX), Common Attack Pattern Enumeration and Characterization (CAPEC), and CVSS version 3.
As a data consumer, encourage your security vendors to provide full data in any report they offer. Two existing examples are the Tripwire “True Cost of Compliance” report and the Ponemon Institute’s 2011 Cost of a Data Breach Study. Another way is to participate in the Society of Information Risk Analysts (SIRA). If you’d like to meet people who think about this kind of stuff and love to travel, another option is to attend the first-ever SIRA conference.
If you have other ideas on how we can collectively solve this problem, I’d love to hear them. Feel free to send a tweet or leave a comment!