
In my last post (Risk Management and Bias: Is Human Nature In Your Way?), I discussed some of the parallels I found in a 1998 article from a medical journal. As promised, I now want to share some of the things that article taught me about communicating risk.

Skip the FUD

Traditionally, security has done a lot of fear-based selling.  I believe (hope?) those days are largely coming to an end.  The article (“Communication and Interpretation of Risk,” by J. Richard Eiser) cites study data that indicates:

“…messages that arouse high levels of fear are less effective in changing attitudes and behaviour than those that arouse only moderate fear.”

Amen to that. In business, the more we can deal with facts and measurable impact, the better. If you relate those facts and their impact to the things executives care about, they may generate their own fear, but (please) leave that up to them and don’t try to whip them into a frenzy.

Numbers and words matter, but how much they matter depends…

Some people like numbers, some like words and stories, others like graphs and charts – we know that, and it’s important to pick the right “medium” for your target audience. Whichever you choose, be aware of the baggage that comes with it.

“The research on cognitive heuristics suggests that people are error-prone in their statistical reasoning. Nonetheless, if statistical information is available, this seems to be the kind of information recipients prefer. A possible danger is that this preference may extend to cases where precise statistics are not available, and numerical estimates may be interpreted as more exact than they actually are. On the other hand, there are also dangers in relying on more ambiguous verbal expressions. An obvious one is that a risk that is considered ‘extremely unlikely’ by one clinician may not be so regarded by another and still less so by the patient even when the numerical probability is known and understood.”
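
As a rough illustration of that last point, here is a minimal Python sketch. The probability ranges assigned to each person are made-up assumptions for illustration, not figures from the paper; they simply show how the same verbal label can imply very different numbers to different people:

# Hypothetical interpretations of the phrase "extremely unlikely" --
# the ranges below are illustrative assumptions, not data from the study.
interpretations = {
    "clinician A": (0.0001, 0.001),  # about 1-in-10,000 to 1-in-1,000
    "clinician B": (0.001, 0.01),    # about 1-in-1,000 to 1-in-100
    "patient":     (0.01, 0.05),     # about 1-in-100 to 1-in-20
}

for who, (low, high) in interpretations.items():
    print(f"{who:>11}: 'extremely unlikely' means roughly {low:.2%} to {high:.2%}")

# The same two words can span more than two orders of magnitude in implied
# probability, which is exactly the ambiguity the quote warns about.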

When there is a possibility of “over-spinning” the amount of risk, I find it helps to combine techniques – present a distillation of the data, along with supporting data for context, but also add a story that answers the inevitable “So what?” question on everyone’s mind. The story should include an example of how the issue will affect the success of your organization.

Also, don’t underestimate people’s tendency to hear what they want to hear, and remember that the way you present numbers can influence the decision that gets made. The paper states:

“Research on decision framing stems from the principle, in prospect theory, that ‘prospects’ (alternative possibilities) are not evaluated as absolute end-states (i.e. final balance-sheets), as classic economic models would predict, but as changes from some actual or expected comparison standard. In other words, what matters for preferences is not whether an outcome will be ‘good’ or ‘bad’, but whether it will be ‘better’ or ‘worse’ than some standard. For risk communication, this means that the same information could be interpreted positively or negatively depending on the recipient’s prior expectations or some other implicit standard.”

Here is an example I once heard that illustrates this point:

  • “We have 10,000 lives at risk.  I have a solution, but it will result in the deaths of 500 people and everyone else will live.  Do you want to pursue it?”
  • “We have 10,000 lives at risk.  I have a solution that will save 95% of them, guaranteed.  Do you want to pursue it?”

Same outcome, but which one sounds more palatable?
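
To make the arithmetic explicit, here is a minimal Python sketch using the hypothetical numbers from the example above, showing that the two framings describe exactly the same outcome:

# Both framings below are computed from the same hypothetical numbers
# used in the example above.
total_at_risk = 10_000
deaths = 500                                   # framing A: "500 people will die"
survivors = total_at_risk - deaths             # 9,500 people live
survival_rate = survivors / total_at_risk      # 0.95

print(f"Framing A: {deaths} deaths; {survivors} of {total_at_risk} survive")
print(f"Framing B: {survival_rate:.0%} of the {total_at_risk} at risk are saved")

# Identical outcome either way -- only the comparison standard
# (lives lost vs. lives saved) changes.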

Risk is risk, and people are people

For me, the bottom line is that it pays to spend more time learning how people interpret risk, how they personalize it, and how they factor risk into their decisions.  There is a lot to learn from non-infosec fields, since human nature knows no single technical domain.

I also find it helps to do a “post-event evaluation” of how you communicated risk so you can improve. Find a productive way to interview or poll your audience to learn how your message came across, what really stuck, and how it influenced their decision. You can also use a “stop, start, continue” model to figure out what to change next time.

I hope this stirred some thinking – I’ll be sharing other models as I discover them.  If you’ve found your own, I’d love to know about them.

