
I recently came across a great Wall Street Journal article titled “Why We Lie,” by Dan Ariely of Duke University.  He has a forthcoming book that I am now eager to read: “The (Honest) Truth About Dishonesty: How We Lie to Everyone – Especially Ourselves.”  In the WSJ article, Mr. Ariely reminds us that honesty is not binary – people are not simply honest or dishonest.  Instead, there are outliers at the ends of the dishonesty spectrum: a small percentage of us are always dishonest (1%) and an equally small percentage are always honest (1%).  That leaves the rest of us (98%) in a vast grey area of what I’m interpreting as opportunistically dishonest.  For all intents and purposes, the suggestion is that nearly everyone has the capacity for dishonesty in the right context.

The studies Mr. Ariely and his colleagues have carried out are interesting.  I won’t recap the studies in their entirety.  Instead, I recommend you read the article. In one study, participants were given a mental exercise and were monetarily compensated for each correct result, which was intended to show not only when and how people might be dishonest, but whether compensation affects the decision to be dishonest.  A variant of that same study added the element of the participants knowing that someone in the group was cheating – a situation designed to determine whether that circumstance would affect dishonesty decisions.  Yet another variant of the study added an element of accountability where the participants clearly knew that the likelihood of being caught cheating was increased.  Here are some conclusions from that set of studies:

  • Most people cheat when given the opportunity, but only by a little.
  • More money does not lead to more cheating – a bigger return is not necessarily a motivator for dishonesty.
  • Seeing others cheat increases cheating overall.
  • A higher probability of getting caught is not an effective deterrent to dishonesty.

One more variant of the study was introduced, where the element of morality was added.  In one instance of this variant, all participants were reminded of the Ten Commandments before the exercise.  How many people cheated?  None.  In another instance of this variant, all participants were reminded of the school’s honor code (the subjects were college students).  How many people cheated?  None.  In yet another instance, a group of atheists were reminded of the Ten Commandments and even swore an oath on the Bible before the exercise.  How many people cheated?  None.  These results are remarkable, though they will likely be confirmation of suspected truths (Bruce Schneier, for example, has routinely suggested that our security focus is too narrow and that we should consider morality as a pressure point).

Throughout their experiments they lost hundreds of dollars to the “big cheaters” (those who would always cheat), but they lost thousands of dollars to those who cheat just a little, which suggests that it’s more important to pay attention to the small forms of dishonesty. These studies were conducted in a very general context, but I believe they can serve to inform your organization well.

Every day in our organizations people have opportunities to be dishonest – almost certainly with even more complicated motivating factors than college students. I believe the bottom line, though, is that dishonesty in the context of your organization entails individuals knowingly violating organizational policy, process or procedure. Sometimes a violation is well-intentioned (we have to meet a deadline), sometimes it’s subversive (this process is poorly designed and inefficient). In effect, all organizations must contend with the risk that their employees will have opportunities to be dishonest – or cheat in some way – throughout the day.

From the article:

Does the prospect of heavy fines or increased enforcement really make someone less likely to cheat on their taxes, to fill out a fraudulent insurance claim, to recommend a bum investment or to steal from his or her company? It may have a small effect on our behavior, but it is probably going to be of little consequence when it comes up against the brute psychological force of “I’m only fudging a little” or “Everyone does it” or “It’s for a greater good.”

From the perspective of risk management, what do we have at our disposal to affect the levels of “cheating”?  From the quote above, we know that fines and increased enforcement are generally ineffective, and from the studies we know that reminders of morality are effective.  The question, then, seems to be: How do we remind employees of their moral obligation to abide by organizational policies in an effective way?

In another study, they modified insurance forms by placing the attestation and signature block at the top of the form.  Recipients of this modified form (20,000 of them) would read the attestation (you know, the statement that everything you report on the form is true to the best of your knowledge and so on) and sign the form before filling it out.  While no exact conclusion can be drawn from the fact that the control group – who signed at the bottom, as usual – reported an average of about 2,400 fewer miles driven, the results do suggest that a moral reminder before form completion led to a decrease in dishonesty.

Then, perhaps part of the solution is to add morality features to the tools we use.  In fact, all else held equal, I would argue that the tools we use to carry out our daily responsibilities may be a critical part of the solution in two ways.  First, tools can present effective reminders to their users of their obligation to adhere to organizational policy.  Of course, we all know that such measures may simply become a click-through nuisance (what do most users do when a digital certificate warning pops up in their browser?), or, perhaps worse, something akin to an Orwellian state of constant moral reminders.  Still, I think that at the most critical points of opportunistic dishonesty a stark reminder of moral obligations could be effective.
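As a rough illustration of that first idea, a tool could gate its most sensitive operations behind an explicit acknowledgment of policy, so the reminder arrives at the moment of decision rather than as background noise.  The sketch below is a minimal, hypothetical Python example – the function and policy names are invented for illustration, not drawn from any real product:

```python
from functools import wraps

# Hypothetical policy text a tool might surface at the point of action.
POLICY_REMINDER = (
    "Reminder: by proceeding you affirm that this action complies "
    "with your organization's data-handling policy."
)

def requires_acknowledgment(action):
    """Wrap a sensitive action so the caller must acknowledge the policy first."""
    @wraps(action)
    def wrapper(*args, ack=False, **kwargs):
        if not ack:
            # Surface the moral reminder instead of silently proceeding.
            raise PermissionError(POLICY_REMINDER)
        return action(*args, **kwargs)
    return wrapper

@requires_acknowledgment
def export_customer_records(count):
    # Stand-in for a sensitive operation worth a stark reminder.
    return f"exported {count} records"

# export_customer_records(10)            -> raises PermissionError with the reminder
# export_customer_records(10, ack=True)  -> proceeds normally
```

The design choice here is that the acknowledgment is per-action and explicit, which is exactly what makes it vulnerable to becoming a click-through habit – hence the suggestion above to reserve it for the most critical points.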

Another way tools can help is by recognizing that they are instruments used in the context of a business process, and may therefore be able to reduce the opportunity for dishonesty in the first place.  This requires tool designers and architects to understand the processes in which their tools are used and to identify the points of human interaction within the business process – the points where opportunities for dishonesty exist.  Once those points are identified, options for segregation of duties can be implemented, such that administrators can configure the tool to at least force collusion in order to be dishonest.  Such tooling will, of course, be more or less successful depending on context.
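To sketch the second idea – forcing collusion through segregation of duties – consider a minimal, hypothetical approval workflow in which a request can never be approved by the person who submitted it.  The class and names below are illustrative assumptions, not any particular tool's API:

```python
class ApprovalError(Exception):
    """Raised when an approval would violate segregation of duties."""

class Request:
    """A hypothetical request that requires a second person to approve it."""

    def __init__(self, requester, description):
        self.requester = requester
        self.description = description
        self.approver = None

    def approve(self, approver):
        # The core control: self-approval is impossible, so a dishonest
        # outcome requires at least two people colluding.
        if approver == self.requester:
            raise ApprovalError("requester cannot approve their own request")
        self.approver = approver

    @property
    def approved(self):
        return self.approver is not None

req = Request("alice", "wire transfer")
# req.approve("alice") would raise ApprovalError
req.approve("bob")  # a second, distinct person must sign off
```

The point is not the specific mechanism but where it sits: the check lives inside the tool at an identified point of human interaction, so honesty is no longer left entirely to the individual.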

No matter what the solution is, it seems important to recognize three things:

  1. We all have the capacity to be dishonest (including organizations)
  2. When we are ourselves dishonest, we inspire others to be dishonest
  3. Expanding the risk management tools at our disposal to account for the morality factor should be part of the solution to mitigating dishonesty

In short, we would do well to recognize that the controls we use to manage risk should take into account “the human factor” (see Shawna’s summer reading list).  We should, therefore, seek to understand our own nuances more completely.  The studies Mr. Ariely and his colleagues have performed are valuable in that they shed more light on the wonderfully complex thing we call human nature.