
I was recently watching an interview with the Target CEO about their data breach, in which he mentioned they are “accountable and responsible,” and are seeking to learn from their investigation and apply those lessons. He also mentioned that there was malware on their POS systems.

When you hear about data breaches in the news, the focus is typically on what happened beforehand – how the breach occurred, what the attackers took, and whose data was stolen. However, there is another side to data breaches that’s just as ugly (and just as real): how quickly can a company determine what parts of its infrastructure can be trusted after a breach?

Some of the most common questions I hear after a breach are:

  • “Which systems can we trust?”
  • “What was done to compromise our systems or data?”
  • “How quickly can I figure out where we stand?”

I’d like to take you through a “day in the life” of what it takes to get to the bottom of the situation and determine which parts of your infrastructure are trustworthy so you can begin the long journey to restoring trust in your business – not just your systems.

Step 1: Figure out what you have, whether it’s important, and what it’s worth to you

How do you know what you’ve lost until you know what you have? From a systems perspective, this means taking an inventory of the systems and applications in your environment and comparing what’s there against what should be there – and the more automated that inventory is, the better. After all, time is of the essence after a breach.

This process gets easier if you’ve done past inventories, since you can compare what you see now to what you’ve seen in the past and scrutinize the differences. Short of that, I’m afraid this will be a very manual process – you’ll have to review each system by hand, figure out if it should be there, look for systems that are missing, and begin organizing them in some way that divides them into manageable groups.
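If your inventories live in something exportable – a CMDB dump, scanner output, even spreadsheets – that comparison can be scripted. Here’s a minimal sketch in Python, assuming hypothetical CSV exports with “hostname” and “ip” columns (your own file names and column layout will differ):

```python
# Minimal sketch: compare a past inventory to a current scan and flag differences.
# The file names and column names are hypothetical placeholders.
import csv

def load_inventory(path):
    """Return a set of (hostname, ip) pairs from a CSV inventory export."""
    with open(path, newline="") as f:
        return {(row["hostname"], row["ip"]) for row in csv.DictReader(f)}

baseline = load_inventory("inventory_previous.csv")  # what you saw before
current = load_inventory("inventory_current.csv")    # what you see now

unexpected = current - baseline  # new systems -- scrutinize these first
missing = baseline - current     # systems that vanished -- also suspicious

for host, ip in sorted(unexpected):
    print(f"UNEXPECTED: {host} ({ip})")
for host, ip in sorted(missing):
    print(f"MISSING:    {host} ({ip})")
```

Anything in the “unexpected” bucket deserves scrutiny first – rogue or forgotten systems are exactly where attackers like to hide.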

Most often, using a vulnerability management product (such as Tripwire’s IP360) is the easiest way to do this, since it can discover and profile the systems in your environment.  But don’t forget about partners and third-party infrastructure involved in your operations – those will need to be accounted for, too (and let’s hope the terms of your outsourcing contracts don’t get in the way of getting what you need).

As part of this process, begin segmenting your systems and applications into logical groups to indicate how important they are to your business.  Some systems could be more important to you for a number of reasons, such as:

  • They store or process sensitive information such as personal health information, financial information, credit card numbers, and so forth;
  • They are subject to a lot of scrutiny by regulators or auditors; or
  • They are critical to the success of your business – think in terms of how they impact revenue, profit, reputation, customer loss, etc.

Figure out what your most valuable assets are so you can focus there first, then understand the relative value of the rest of your infrastructure so you can approach things in an orderly progression based on business impact and value.
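To make that triage concrete, here’s a hypothetical sketch that ranks systems into tiers using the criteria above – the attribute names are invented for illustration, so substitute whatever metadata your inventory actually carries:

```python
# Hypothetical sketch: rank systems into triage tiers using the criteria above.
# The attributes (handles_pci, handles_phi, regulated, revenue_critical) are
# invented labels -- use whatever metadata your own inventory carries.
def triage_tier(system):
    if system.get("handles_pci") or system.get("handles_phi"):
        return 1  # stores/processes sensitive data: examine first
    if system.get("regulated") or system.get("revenue_critical"):
        return 2  # heavy audit scrutiny or direct business impact
    return 3      # everything else, in orderly progression

systems = [
    {"name": "pos-store-042", "handles_pci": True},
    {"name": "hr-portal", "handles_phi": True},
    {"name": "erp-db", "revenue_critical": True},
    {"name": "dev-sandbox"},
]

for s in sorted(systems, key=triage_tier):
    print(f"tier {triage_tier(s)}: {s['name']}")
```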

Step 2: Define what “good” looks like and use that to find “bad” things

The next step is to build a “known good” reference point so you can evaluate what you have against it.  This is critical not only for determining which systems have been tampered with, but also for determining what was done to the systems that were compromised.

A reliable way to get this done is to build some fresh infrastructure by following your internal “cookbook” and standards, then use software (such as Tripwire Enterprise) to create a snapshot of that new infrastructure as a baseline for comparison.  From there, you can compare your production systems to the reference baselines to see what’s different (again, Tripwire Enterprise can do this).

In this process, automation is your friend.  Otherwise it’s like one of those “what are the differences between these two pictures?” games you played in the doctor’s office as a kid, but with a lot more pressure.
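Conceptually, the comparison is file-integrity checking: hash everything on the known-clean build, hash the same paths on the suspect system, and report the differences. Here’s a minimal sketch of that idea in Python – it is not how Tripwire Enterprise works internally, and a real product tracks far more than file contents (permissions, registry keys, installed packages), but it shows the mechanics:

```python
# Minimal sketch of the underlying idea: snapshot a known-clean system as a
# path -> SHA-256 map, then compare a suspect production image against it.
# The mount points below are hypothetical placeholders.
import hashlib
from pathlib import Path

def snapshot(root):
    """Map each file under root to the SHA-256 of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

baseline = snapshot("/mnt/clean-build")   # freshly built from your cookbook
production = snapshot("/mnt/prod-image")  # image of the system under suspicion

for path in sorted(set(baseline) | set(production)):
    if path not in baseline:
        print(f"ADDED:    {path}")   # file the clean build doesn't have
    elif path not in production:
        print(f"REMOVED:  {path}")
    elif baseline[path] != production[path]:
        print(f"MODIFIED: {path}")   # contents differ from the baseline
```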

This process will enable you to sort your systems and applications into two basic categories:

  • Systems and applications you can trust, because they match your “clean” baselines; and
  • Systems and applications you can’t trust, because they show obvious signs of tampering or enough suspicious differences that you want to analyze them further.

Step 3: Remove and replace suspect systems

Now that you know which systems can’t be trusted, take them out of the environment as quickly as possible – they are a cancer on the rest of your production environment, and will continue to harm your business.

Of course, you’ll need to replace them with systems you can trust as quickly as possible, but there are some issues with that:

  • If you just create another copy, what’s to stop it from being compromised in the same way?
  • If you provision new software over the top of your compromised systems, you’ll wipe away any evidence that could tell you what was done in the first place.

So, what’s the answer?  Unfortunately, you’ll have to keep those “bad” systems around for a while – just make sure they stay off your production network so you can contain the damage they could cause.

The good news is that you can analyze the tainted systems to determine what was done to compromise them in the first place, so you can find ways to “harden” the new systems you’re building and make them less vulnerable to the same attacks.  This may mean configuration changes, changes in security controls, or changes to your processes that limit the attackers’ freedom in the future.

For a while, this may feel like a game of “whack-a-mole” if the attackers begin to compromise your new systems, but you can get there with persistence and a bit of luck.

In spite of the frustration, this is a tried and true method, and I’ve personally been involved in applying it to help breach victims for over 10 years, so I know it works.

What’s the alternative?

This process sounds painful, doesn’t it?  You might be thinking, “Surely, there’s got to be an easier way…”  And you’re right – there is an easier way, but it doesn’t come for free.

Most of it comes down to planning ahead, restricting access to production infrastructure, controlling changes, and building a repeatable process to identify variance in your environment early.

For example, you’ll gain a lot of value by creating automated processes to create and update systems and applications in your environment.  Lately, initiatives like DevOps have become popular, as they provide a more agile way to build infrastructure than older, heavier processes like ITIL.

Continuous monitoring (also known as “Continuous Diagnostics and Mitigation”) is another huge leverage point, which is one of the reasons it’s being mandated for Federal agencies in the US.  By continuously detecting changes in your environment, evaluating them constantly against the “good” standards I mentioned earlier, and calling attention to them based on business impact, you can greatly reduce your window of exposure.  After all, if you detect suspicious variance early, it’s much easier to deal with it before it results in catastrophic loss.
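To make that loop concrete, here’s a toy sketch that re-snapshots a directory tree on an interval and flags drift from the approved baseline – the path and interval are placeholders, and real continuous monitoring covers far more than files:

```python
# Toy illustration of continuous change detection: re-hash a directory tree on
# an interval and flag any drift from the approved baseline. Real CDM tooling
# also watches logs, network traffic, and user activity; this is only the
# change-detection loop. The watched path and interval are placeholders.
import hashlib
import time
from pathlib import Path

POLL_SECONDS = 300  # placeholder polling interval
WATCHED = "/etc"    # placeholder directory to monitor

def snapshot(root):
    """Map each file under root to the SHA-256 of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

baseline = snapshot(WATCHED)  # the approved "good" state
while True:
    time.sleep(POLL_SECONDS)
    current = snapshot(WATCHED)
    drift = [p for p in set(baseline) | set(current)
             if baseline.get(p) != current.get(p)]
    for path in sorted(drift):
        print(f"ALERT: unexpected change to {path}")  # route to your log tool
    # changes you review and approve would be promoted into the baseline here
```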

Continuous Diagnostics and Mitigation will involve not only Tripwire Enterprise and Tripwire IP360, mentioned above, but also monitoring for suspicious network traffic patterns, strange user activity, and the like using a log intelligence solution such as Tripwire Log Center.

Hopefully, this gives you a glimpse into the massive amount of work that goes on behind the scenes when a data breach occurs, as well as some insight into what to do ahead of time to make life easier in the event of a data compromise.
