In a recent survey on the state of risk based security management, we asked leading information security and risk management professionals in the US and UK a simple question:
Is information security risk management an art or a science?
In answering this question, we asked them to consider “art” as being analysis and decision-making that is based on intuition, expertise and a holistic view of the organization, where “science” refers specifically to risk analysis and decision making based on objective, quantitative measures.
The results of the survey are shown here:
What is your informed opinion? Is information security risk management more of an art or is it purely a science?
As you might expect, the survey reveals that risk management is not considered purely art, and it’s not considered to be purely science either.
There are many models and key performance indicators available that allow us to apply scientific elements to infosec risk management, but those activities do not alone make it a science.
On the other hand, there will always be a need for risk management and security professionals to leverage their own experience and knowledge to make determinations that are based on their own background and particular circumstances – the “art” part of the equation.
So we decided to ask some risk management “heavy hitters” for their perspective on this topic. Here are some important take-aways from professionals who are constantly striving to improve the practice of risk management:
Infosec Risk Management is Neither an Art nor a Science
Risk management requires a balance of both art and science. Jack Jones, author and creator of the Factor Analysis of Information Risk (FAIR) framework and a CISO himself, points out that when we gather data and use models (the “science” part), we tend to make subjective interpretations based on our personal biases and perspectives (the “art” part).
A good way of representing how a balance is achieved is demonstrated in this analogy shared by Jones:
If you’re approaching an intersection in your car and the light turns yellow, do you choose to stop or to speed up to beat the red light? It depends, of course. In the instant that you see the light turn yellow, you take into consideration objective data such as your speed, how close you are to the approaching intersection, the road conditions, the condition of your vehicle, traffic, and whether there’s a police officer on the corner, etc. You then add your understanding of their relationships to one another (your mental model) to generate an understanding of the risk. Then you make a very personal and subjective choice as to whether to stop or accelerate.
In risk management, we get the best data we can, apply it against whatever model we’ve chosen to use for evaluating risk (e.g., FAIR), and then an executive or other decision-maker interprets the relevance of our analysis within the context of their organization’s priorities.
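As a purely illustrative sketch of the “science” half of that process, a FAIR-style analysis decomposes risk into loss event frequency (events per year) and loss magnitude (loss per event), then simulates a range of annual losses. The distribution choices and every parameter below are hypothetical placeholders for the sketch, not part of the FAIR specification:

```python
import random

def simulate_annual_loss(n_trials=10_000, seed=42):
    """Monte Carlo sketch of a FAIR-style analysis: annualized risk is
    loss event frequency (events/year) times loss magnitude ($/event).
    All distribution parameters are hypothetical placeholders."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # Loss event frequency estimate: triangular(low, high, mode) events/year
        lef = rng.triangular(0.1, 4.0, 0.5)
        # Loss magnitude estimate: triangular(low, high, mode) dollars/event
        lm = rng.triangular(10_000, 500_000, 50_000)
        losses.append(lef * lm)
    losses.sort()
    return {
        "mean": sum(losses) / n_trials,
        "p90": losses[int(0.9 * n_trials)],  # 90th-percentile annual loss
    }

result = simulate_annual_loss()
print(f"Expected annual loss: ${result['mean']:,.0f}")
print(f"90th percentile:      ${result['p90']:,.0f}")
```

The output of a sketch like this is a distribution of outcomes, not a verdict; the decision-maker still supplies the “art” of interpreting it against the organization’s priorities.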
Art in this debate represents the purely visceral approach of the experienced professional, intuitively sculpting truth and beauty where none may have existed. Science in this debate represents the unemotional and objective approach of an analytical mind: Getting something “down to a science” implies deep understanding and repeatability.
Nothing could be further from reality since those two extremes do not exist in the real world, noted Jay Jacobs, a senior data analyst on the Verizon RISK team and a co-author of the Verizon Data Breach Investigations Report.
Jacobs sees virtually no line between the art and the science. The main difference is that artists are infallible in the sense that art is an expression of the artist’s opinion, whereas scientists begin with the assumption that their opinion is fallible (especially when dealing with complexity), and so they seek feedback and evidence as the basis for improving their understanding of an event.
The scientific approach typically wins out precisely because we are dealing with complex systems, and because we need to reduce our level of uncertainty.
All science has a degree of art to it, and all art has a degree of science as well, says Ben Tomhave, a board member for the Society of Information Risk Analysis.
Tomhave believes there will always be an inherently artistic attribute to risk management.
For example, science often flourishes by adding an artistic flair even if we’re simply building stunning visualizations from a big pile of tasty data, and we should always ensure that we have the best data and analysis available for making the best decisions possible (“commercially reasonable and legally defensible” that is).
“I think my co-blogger and co-presenter Alex Hutton said it best, when he called risk a pseudo-science,” said David Mortman. “That is to say, we may be beginning something significant, but it’s not codified into a set of known systems that work.”
A copy of the presentation they delivered at BlackHat and BSides Las Vegas is available.
Bob Rudis, director of enterprise security and IT risk management for a global financial services organization, views infosec risk management as more “alchemy” than pure art or pure science.
The term alchemy was a nod to Hutton’s and Mortman’s position that the current state of infosec risk management is that of a “proto-science” [editor’s note: modified from ‘pseudo-science’ to ‘proto-science’ for accuracy].
Rudis acknowledges that the majority of information risk practices he comes across at best resemble mythical incantations recited over a cauldron of esoteric formulas and components, and at worst a divining-rod approach.
Infosec Risk Management Requires Mastery
In the book Outliers: The Story of Success, Malcolm Gladwell sets out to identify the shared traits of highly successful people. One common theme in the book is the “10,000-Hour Rule,” in which he asserts that in order to be great at what you do, you need to invest an enormous amount of time so that you can become “unconsciously competent.”
This notion is not foreign to Hutton, director of operations and technology risk for a financial institution, who encourages us to embrace the Japanese concept of “shokunin” which has a similar meaning as the word “craftsmanship.”
The Japanese apprentice is taught that shokunin means not only having technical skills, but that it also implies a particular attitude and social consciousness, and a “mastery” of one’s profession.
“So for me, I very much see an evidence-based perspective, quantitative modeling techniques, and the application of scientific method as the means to evolve and master my profession, as well as help me lead the optimal risk team,” Hutton said.
Show Me The People!
Donn Parker, who has written many books on computer security and whose thoughts on risk assessment are well known to those in the industry, believes that information security is really a trade and professional practice, and that ‘risky’ situations should be decided by management fiat, not by science.
He sees risk-based security as a way to introduce uncertainty and forecast future events with little or no valid known experience.
“It is unnecessary, wrong and dangerous to attempt to choose, justify and prioritize effective security solutions by forecasting the frequency and impact of adversity outcomes when dealing with unknown intelligent adversaries and unknown vulnerabilities in rapidly changing and complex IT,” Parker pointed out.
Parker’s contributions to this industry are many, and often he presents seemingly contrarian views. A few years ago I remember attending a very spirited discussion on risk management with David Mortman, Alex Hutton, Andy Ellis and Allison Miller where the panelists shared their views on what risk management practices are more effective for security professionals.
“Much of the security of an enterprise depends on well-known generally accepted controls and practices and a common body of knowledge,” said Parker.
Risk management draws from your personal experiences, and as Tripwire CTO Dwayne Melancon puts it, “It’s what you feel in your gut.”
Melancon sees risk management as a way to apply science to an emotional problem: How much risk can be tolerated in a given situation, and what can be done about the areas where the risk has become unacceptable?
However, it could be argued that many organizations do not possess a clear understanding of their own risk tolerance. Thus, although one can agree with Melancon’s statement that risk is based on what you “feel,” there needs to be more science in the process to allow individuals to assess situations with less personal bias and make decisions that are consistent with the organization’s overall appetite for risk.
Melancon also believes that risk management has some of the attributes of a science in that it enables us to explicitly declare the factors we will evaluate along with the measures for scoring those criteria. We can then come up with more objective answers that are consistent with what we feel in our gut, as well as enabling the comparative ranking of different risks that are based on similar factors.
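Melancon’s point about declaring factors and scoring measures up front can be sketched in a few lines. The factor names, weights, and example risks below are assumptions invented for illustration, not a standard scoring scheme:

```python
# Declare the factors and their weights explicitly, so every risk is
# ranked on the same shared, auditable basis. Weights sum to 1.0.
FACTORS = {
    "likelihood": 0.4,
    "impact": 0.4,
    "detectability": 0.2,  # harder to detect -> higher score
}

def score(risk: dict) -> float:
    """Weighted score, with each declared factor rated on a 1-5 scale."""
    for name, value in risk["scores"].items():
        assert name in FACTORS and 1 <= value <= 5, f"bad factor: {name}"
    return sum(FACTORS[f] * risk["scores"][f] for f in FACTORS)

# Hypothetical example risks, scored against the declared criteria.
risks = [
    {"name": "Unpatched web server",
     "scores": {"likelihood": 4, "impact": 5, "detectability": 2}},
    {"name": "Lost unencrypted laptop",
     "scores": {"likelihood": 3, "impact": 4, "detectability": 4}},
    {"name": "Insider data misuse",
     "scores": {"likelihood": 2, "impact": 5, "detectability": 5}},
]

for r in sorted(risks, key=score, reverse=True):
    print(f"{score(r):.1f}  {r['name']}")
```

Because the criteria are explicit, two analysts scoring the same risks can see exactly where their gut feelings diverge, which is the comparability Melancon describes.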
“Risk management, or more generally management, is fundamentally an art,” said Dan Arista, a researcher focusing on cybersecurity risk management.
That said, wherever we talk about risk, or where “future outcomes” matter, we want our predictions to be guided by evidence.
The best kind is ‘scientific’ evidence, which requires the ability to experiment, observe and measure phenomena, isolate constants and variables, which may need to be controlled.
However, we are plagued with a lack of known anchoring constants, difficulty in reproducing field conditions in controlled experiments, and the overwhelming scale and complexity of cyberspace.
The result is that risk analysts have had little success developing a cohesive set of theorems corresponding to the phenomena under examination. It’s difficult to make valid inferences about what will happen if a particular configuration is adopted. This imprecision in predictability frustrates efforts at making it a science.
“We still don’t have a ‘theory’ of cybersecurity,” said Arista. Even if we had good predictive models to employ, answering the overarching question of how much risk is ‘reasonable’ remains an open issue.
“When the down-side of risk can be externalized, the costs of security may not be economically rational. As long as we’re deploying IT, there will be some risk to take. All the focus has been on predictive models; the ‘normative’ models have been neglected,” said Arista.
The majority of what managers are burdened with is the exercise of discretion, particularly because there are still many trade-offs that elude quantification when strategizing how to defend against an ever-changing, sentient threat.
We’re Closer to Art and Need More Science
Jones insists that risk management has been severely skewed toward art, and that we need to focus more on science to bring back the balance, while Tomhave doesn’t necessarily believe that we’re closer to art so much as we function with an elementary understanding of data, statistics, and analysis.
In order to mature, we need to develop and evolve our understanding of statistics, statistical modeling, data sampling, data gathering, and more. All should be required courses in higher education.
Tomhave also points out that it’s important to figure out how to condense and fast track the science part of the equation (decision sciences, ops research and overall statistical methods) to provide a broader, more immediate benefit to people and industries.
Rudis shared that we need to continue to pursue greater visibility into IT management practices, and a greater commitment to aligning threats and vulnerabilities to actual business losses with repeatable processes.
Progress is being made toward infosec risk management becoming a science, but even the more mature risk disciplines retain more than a sliver of artisanship, and we can benefit from it.
Jacobs admits that risk management is still more of an art, and that we need more science in order to grow and mature as an industry. Science brings with it a level of rigor and feedback that is sorely missing in most infosec risk management programs.
The shift towards science doesn’t have to be difficult or complicated since it is a mindset above anything else.
More importantly, Jacobs believes we need to pursue risk analysis as a science so that the next generation of analysts won’t also be squinting one eye at complexity and pronouncing “medium risk”.
Although it’s unlikely that we’ll ever fully automate decisions that rely on data and analysis, Tomhave believes that we’re finally learning how to collect good quality data, learning how to apply statistical methods to analyzing it, and that we’re learning how to visually present the derived findings.
He believes we should seek to constantly improve science, but we should trust that we will also get better at making decisions through intuition and natural assimilation of the facts.
Risk is on its way to becoming a science, but it has a long way to go, according to Mortman. In order to get there, we need to find those systems that work, and this means building models, testing them, and then refining them. This will take literally thousands of iterations (and probably a lot more), but eventually we’ll get to a science.
Advice and Tips
Jacobs encourages us to try a simple exercise with our next risk analysis: Start with the conclusion and invert it (treat high risk as low), then think of things you could observe, measure or collect that would disprove the new statement (a null hypothesis).
This exercise will result in one of two outcomes: 1) Science happens, or 2) We can’t prove or disprove our initial statement with evidence, and we should wonder how much confidence we should have in our initial conclusion. If we are paying attention, the latter outcome should lead to many more questions.
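The spirit of Jacobs’ exercise can be sketched in code. Suppose, purely for illustration, that a “low risk” conclusion implies a loss-event rate of at most 0.5 per year; we can then ask how surprising the incidents we actually observed would be if that claim were true. Every number here is hypothetical:

```python
import math

def poisson_tail(k_observed: int, rate: float) -> float:
    """P(X >= k_observed) for X ~ Poisson(rate): the chance of seeing
    at least k_observed events if the claimed rate were true."""
    p_below = sum(math.exp(-rate) * rate**k / math.factorial(k)
                  for k in range(k_observed))
    return 1.0 - p_below

# Hypothetical scenario: the analysis said "low risk" (at most ~0.5
# loss events per year), but we then observed 3 events in a year.
p = poisson_tail(3, rate=0.5)
print(f"P(>=3 events | rate 0.5/yr) = {p:.4f}")
if p < 0.05:
    print("The evidence contradicts the 'low risk' call; revisit the analysis.")
```

When the observation would be this unlikely under the original conclusion, “science happens”: the evidence forces a revision. When no such observation can be defined at all, that itself is the warning sign Jacobs describes.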
In conclusion, different organizations practice infosec risk management in different ways depending on their level of maturity and expertise.
As Tim Erlin, Director of Product Management at Tripwire mentions, the important question to ask is which approach is more effective – and for that we need good metrics to actually measure success.
That is a great idea for another post… How do you measure if you’re being successful in your infosec risk management efforts? If you would like to contribute, comment below and ping me.
P.S. Have you met John Powers, supernatural CISO?