
I’m a cancer survivor, and it strikes me that cancer and IT security breaches have something in common: early detection is crucial.

You see, 11 years ago, I caught my cancer (malignant melanoma) fairly early, it was treated quickly, and I’ve had no recurrence since then. This was possible because a) my wife noticed something suspicious and, b) I had it examined by a medical professional who knew what to look for.

And even with my (somewhat) early detection, the treatment was worse than I’d have liked – instead of removing only the cancerous spot, mine had spread a bit, so they had to take a 1-inch clear margin all the way around the cancer (that’s 2 inches – about 5cm – across). A couple of weeks earlier and that might not have been necessary. Now I know what to look for and am vigilant, so I hope to catch it early if I ever get it again. [you can find out more about detecting skin cancer at my personal blog]

With IT breaches, time is also of the essence. The longer you’re breached without realizing it, the more potential damage or loss your organization will experience.

Flying blind

If you look at recent studies, organizations are – as a rule – not very good at detecting breaches:

  • Breaches go undiscovered and uncontained for weeks or months in 75% of cases. — Verizon Business, 2009
  • Average time between a breach and its detection: 156 days [5.2 months] — HelpNet Security, Feb 2010
  • “…breaches targeting stored data averaged 686 days [of exposure]” — Trustwave, 2010

That’s a reeeallly long time, no matter which figure you cite!

The challenge is shortening the breach-to-discovery gap. And you want to be the one who discovers it – not like a company I know of that found out its customers’ information was for sale on a hacker site when the FBI showed up at the door to break the news. [and no, I won’t tell you who it is]

What’s the problem?

There are lots of issues that contribute to organizations having such a hard time knowing they’ve been breached. Here are some of them:

  • Too many cooks in the kitchen
    This is when lots of people have access to too many sensitive or critical systems. If you have this problem, you need to get a handle on it and focus on “least privilege” principles. If lots of people are adjusting systems, it’s hard to spot suspicious adjustments.
  • Fuzzy roles
    This is a particular problem in small IT shops or, as I like to call them, “multiple hat” shops where one person may cover lots of different job functions. This might mean that there is little in the way of oversight / checks & balances that might detect inappropriate or accidental actions by a trusted individual.

  • Unclear policies
    One of the biggest problems is when policies a) aren’t defined, and/or b) aren’t consistently communicated, and/or c) aren’t understood by the people who are supposed to follow them. Documentation, consistent communication, and education are critical.

  • Lack of controls
    It’s one thing to be able to tell people what rules they should follow; it’s quite another to have the means to verify they are not breaking those rules. You need controls (processes, technology, oversight, independent verification, audit trails, and things like that) to let you evaluate what’s actually happening in comparison to what was supposed to happen. Automated controls are better than manual ones – people don’t pay attention consistently enough. (There’s a minimal sketch of one such automated check right after this list.)

  • Lack of accountability
    This is one of the biggies. This can occur when there is no “tone at the top” that creates measurable consequences when someone breaks the rules. If nothing happens when someone violates a policy, what good is that policy?
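
As promised above, here’s a minimal sketch – in Python, for illustration only – of what an automated control can look like: a scheduled job that hashes the files in a few monitored directories and compares them to a stored baseline, so unexpected modifications surface as exceptions instead of waiting for someone to stumble across them. The directory and file names here are assumptions; in practice you’d use a purpose-built file integrity monitoring tool and feed the findings into your alerting pipeline.

    import hashlib
    import json
    from pathlib import Path

    # Assumed locations for this sketch – adjust for your environment.
    BASELINE_FILE = Path("baseline_hashes.json")
    MONITORED_DIRS = [Path("/etc"), Path("/usr/local/bin")]

    def hash_file(path: Path) -> str:
        """Return the SHA-256 digest of a file's contents."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def snapshot() -> dict:
        """Hash every regular file under the monitored directories."""
        hashes = {}
        for root in MONITORED_DIRS:
            for path in root.rglob("*"):
                if path.is_file():
                    try:
                        hashes[str(path)] = hash_file(path)
                    except OSError:
                        # Skip files we can't read (permissions, transient files).
                        continue
        return hashes

    def compare(baseline: dict, current: dict) -> list:
        """Report files added, removed, or modified since the baseline."""
        findings = []
        for path, digest in current.items():
            if path not in baseline:
                findings.append(("added", path))
            elif baseline[path] != digest:
                findings.append(("modified", path))
        for path in baseline:
            if path not in current:
                findings.append(("removed", path))
        return findings

    if __name__ == "__main__":
        current = snapshot()
        if BASELINE_FILE.exists():
            baseline = json.loads(BASELINE_FILE.read_text())
            for change, path in compare(baseline, current):
                print(f"CHANGE OF INTEREST: {change} {path}")
        # Record (or refresh) the baseline for the next run.
        BASELINE_FILE.write_text(json.dumps(current, indent=2))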

And I could add to this list for the next hour…

All of these issues are compounded by the overwhelming volume of information we’re tracking with security products, and the various disconnected “silos” of information and activity – the network team, the security team, the virtualization team, the firewall team, the ops team, the apps teams, the OS team, and so on.

It’s no wonder it takes most organizations multiple quarters to notice there’s a problem.

There is a way to fix this

The key is to get visibility into all the activities, events, and changes that relate to critical systems and assets; perform intelligent analysis of what’s happening (in relation to what’s expected or what’s specified by policy) so you can focus the organization on the exceptions; then apply automation so that this analysis of the haystacks of data happens automatically, all the time.

Ultimately, you want to focus the organization’s attention on “events of interest” that result in “changes of interest” – aka “suspicious activities that lead to suspicious results.”
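
To make “manage by exception” concrete, here’s another small sketch (Python, illustration only): each observed event is checked against a simple policy of authorized actors and approved change tickets, and only the exceptions reach a human. The user names, ticket IDs, and rules are hypothetical placeholders; in a real environment the events would come from your logs or monitoring tools and the policy from your change-management system – the point is that the comparison runs automatically, all the time.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical event records – in practice these would come from logs,
    # a SIEM, or a change-detection tool, not a hard-coded list.
    @dataclass
    class Event:
        user: str
        host: str
        action: str
        ticket: Optional[str]  # change-ticket reference, if any

    # Policy assumptions for this sketch: only these accounts may change
    # production hosts, and every change must reference an approved ticket.
    AUTHORIZED_USERS = {"svc_deploy", "ops_admin"}
    APPROVED_TICKETS = {"CHG-1042", "CHG-1057"}

    def is_event_of_interest(event: Event) -> bool:
        """Flag anything that falls outside the expected, authorized pattern."""
        if event.user not in AUTHORIZED_USERS:
            return True   # unexpected actor
        if event.ticket not in APPROVED_TICKETS:
            return True   # change with no approved ticket behind it
        return False

    events = [
        Event("svc_deploy", "web01", "package upgrade", "CHG-1042"),
        Event("jsmith", "db02", "modified /etc/passwd", None),
    ]

    # Manage by exception: only the suspicious items get surfaced.
    for event in events:
        if is_event_of_interest(event):
            print(f"EVENT OF INTEREST: {event.user} on {event.host}: {event.action}")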

This approach enables you to manage by exception – not by fire drill. If you want to get there, you can start by reading the IT Process Institute’s guide, “Security Visible Ops” – coauthored by my friend Gene Kim – or listen to Gene’s introductory webcast on this topic.