
Security incidents: you don’t get to plan when they occur. The call can come in the middle of the night, on a holiday, or on your day off. They are highly disruptive, high-paced, and often demand a high degree of visibility.

When management is starved for up-to-the-minute updates, any latency can be interpreted as incompetence. Every second counts. What helps most are “force multipliers” that accelerate response, affording response teams a measure of calm, disciplined execution.

We burned almost 80 hours in four calendar days trying to find the culprit. Sleep and meals were declared luxuries. Nights and weekends be damned!

As these things often go, the call came in an unscheduled manner and required immediate response, in bold-font italics, underscored, then emphasized IRL through phone calls and face-to-face reiterations.

At the tip of the spear, corporate legal and HR were adroitly channeling the Federal Trade Commission’s impatience, with a lot of “send and receive, send and receive, send and receive” requesting our up-to-the-minute status, displaying their eagerness to muster the needed evidence.

Our task: Identify and mitigate an insider actively manipulating our stock values by sharing highly-proprietary information.

The FTC believed our suspect(s) were capitalizing upon dips in our stock value to profit directly from a breach in confidence, the modern equivalent of stealing from the till while burning the store down.

I wasn’t even in IT. As far as that goes, I never really have been. But there I was, in the thick of a cyber investigation. I’d caught an insider a couple of years prior, resulting in a federal conviction, and by this point I participated on a voluntary basis in a cross-functional SIRT (Security Incident Response Team) designed to accelerate response should the situation occur again.

Unlike the prior major breach, the stars were aligned and things were moving at high velocity thanks to corporate alignment, as mentioned, from the mahogany desks on down into the trenches.

In this, I felt great comfort.  The prior incident dragged on for weeks and then months as the insider exploited response latencies on our part to steal (and sell) ever greater quantities of proprietary data.

But this time, quite literally, HR pulled me from my chair, placed me in a room, and gently “encouraged” me to “get very Rain Man” regarding the collection of vital telemetry and the identification of patterns within the reams of printed paper representing various forms of said telemetry.

It would be nice if “find and mitigate” were our only task, but those who know will tell you it’s not. Concurrently, we’re expected to do the following in response to an incident:

  • Characterize the breach
  • Immediately identify if any variety of the breach is still occurring, from this point forward
  • Given the breach characteristics, identify when the breach first occurred (historical analysis), on which assets, and how broadly
  • Outline a go-forward mitigation plan

I rock.  Not figuratively, but literally.  My wife rolls her eyes and shakes her head when I do it, but for whatever reason, I tend to rock when I work on these kinds of puzzles, and this one was a real doozy.

Within the context of all that I’ve described above, our task was pretty daunting. To place the incident in historical context, this occurred solidly in the mid-1990s: not a ton of high-resolution telemetry relative to what’s available today, and not a real strong set of automation tools.

Thus: Rain Man, with the reams of paper… When every second counted, and any latency could be interpreted as incompetence, what we needed, and badly, was an aligning methodology.

The first step, as mentioned before, was the collection of requisite telemetry. Building on the foundation laid after our last insider breach, I was armed with what we affectionately referred to as the “HR Hall Pass,” enabling me to broach any and all internal barriers to the collection of logs and events. Again, there was organizational alignment from mahogany to the trenches.

I didn’t exactly kick in doors, but within the context of this particular situation, our “requests” were about as polite as you’d imagine: beg for forgiveness during the after-actions, emphasis on action right now, punctuated with a curt “thank you,” because it was likely I’d be compelled to return and ask them to ratchet up log verbosity based upon our investigation.

Of course, what fueled our urgency wasn’t merely a helicopter HR and legal seeking immediate updates.  Then, as now, the risk of log and event loss was real, and nobody was eager to tell the FTC we weren’t able to provide an answer to their inquiry because “the logs rolled.”  Event loss can (and often does) lead to unemployment.

It’s important to note we weren’t after ALL telemetry. To be clear, I didn’t necessarily need logs from print servers. What I needed was telemetry from the assets and applications most likely tasked with storing the proprietary information now being shared on public investment forums.

In contemporary terms, this is referred to as Critical Security Controls (CSC) numbers 1 and 2 (formerly the SANS CSC 1 and 2). In this case, the data collection was filtered to the assets and applications, authorized or not, associated with the now-leaked proprietary data.
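For illustration only, here is a minimal, modernized sketch of that scoping step, assuming hypothetical asset and application names and a simple parsed-event format (none of these identifiers come from the actual incident):

```python
# Hypothetical illustration: scope telemetry collection to the CSC 1/2
# inventory of assets and applications tied to the leaked data.
# All names below are made up for the example.

in_scope_assets = {"fileserver-eng-01", "db-finance-02", "mail-relay-01"}
in_scope_apps = {"document_vault", "financial_reporting"}

def in_scope(event: dict) -> bool:
    """Keep only events from assets or applications in the defined scope."""
    return (event.get("host") in in_scope_assets
            or event.get("application") in in_scope_apps)

def filter_telemetry(events):
    """events: parsed log records, e.g. {"host": ..., "application": ...}."""
    return [e for e in events if in_scope(e)]
```

Back then, of course, this “filter” was a person with printouts and a highlighter; the point is only that the scoping decision comes before any pattern hunting.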

With the CSC 1 and 2 scope defined, and as we actively collected and reviewed this telemetry, I sought to surface a pattern by seeking something I refer to as “the magnet which aligns the metal filings.”

This is an admittedly ethereal concept, intended to convey the hypothesis that not all telemetry is created equal, and that the data representing the “magnet” in my allegory initiates an alignment which elevates patterns out of the mass of data.

Again, this was the mid-1990s, but by this point I’d caught and helped convict several people and, with the eye of a hunter, had learned to seek certain varieties of telemetry more than others. This is a heuristic, if you will, which has enabled me to catch others similar to the one we were seeking during this incident.

In this instance, I sought to identify:

  • What exact data was shared?
  • Where was this data stored?
  • Who had access to these data stores?
  • When were the data stores accessed, and by whom?
  • When was said data accessed?
  • Did the data stores possess security vulnerabilities enabling unauthorized access?

I refer to this as system and vulnerability state data, and again, this is the magnet which aligns the metal filings.
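To make that concrete, here is a small, purely hypothetical sketch of how the questions above become a join between access records and system/vulnerability state; every host, user, and field name below is an assumption for the example, not data from the case:

```python
# Hypothetical sketch: use system and vulnerability state as the "magnet"
# that pulls a pattern out of raw access telemetry.
from collections import defaultdict

# Where the leaked data lived, and whether the host carried known
# vulnerabilities that could enable unauthorized access.
data_stores = {
    "fileserver-eng-01": {"holds_leaked_data": True, "vulnerable": True},
    "db-finance-02":     {"holds_leaked_data": True, "vulnerable": False},
}

# Access records: who touched which store, and when.
access_log = [
    {"user": "jdoe",   "host": "fileserver-eng-01", "time": "1996-03-04T22:17:00"},
    {"user": "asmith", "host": "db-finance-02",     "time": "1996-03-05T09:02:00"},
]

def align(access_log, data_stores):
    """Group accesses by user, keeping only stores that held the leaked data."""
    by_user = defaultdict(list)
    for rec in access_log:
        state = data_stores.get(rec["host"])
        if state and state["holds_leaked_data"]:
            by_user[rec["user"]].append((rec["host"], rec["time"], state["vulnerable"]))
    return by_user

for user, touches in align(access_log, data_stores).items():
    print(user, touches)
```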

Without going into too much detail, the availability of said proprietary information was pinpointed to a small number of data stores, distributed in a variety of ways via several email threads with a broader distribution than originally desired, and most certainly discussed on and off agenda in the resulting meetings. In other words: we had a mess on our hands.

The good news was we were in possession of some of the “metal filings,” but the bad news was that, due to the porous nature of inter-office communication, the vectors were numerous, and many weren’t captured in telemetry.

Nonetheless, system and vulnerability state data, overlaid with the CSC 1 and 2 asset and application scope, enabled us to narrow our suspect pool down to the low dozens. Boom.

Certainly better than the corporate-wide scope of many thousands, but still…

Because said information was posted on a public web forum, I concurrently wove in an overlay filter of incoming and outgoing network activity culled from firewall and proxy servers, then correlated it with internal router logs and DHCP server activity.

A correlation between the following created a very compelling picture (a rough, modernized sketch follows this list):

  • CSC 1 and 2 assets and applications
  • The “magnet” of system and vulnerability state
  • Log and event data, including network activity
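Here is that sketch, entirely illustrative: the record formats, IPs, and hostnames are assumptions, and in 1990s reality this was pencil, paper, and highlighters. The idea is simply to tie an outbound request to the forum back to an internal machine via its DHCP lease window.

```python
# Hypothetical sketch: correlate outbound proxy hits against DHCP leases
# so that forum activity can be attributed to a specific internal machine.
from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

# Outbound requests to the forum where the leaked data appeared.
proxy_hits = [
    {"src_ip": "10.1.4.23", "dest": "public-forum.example", "time": "1996-03-04 22:21:10"},
]

# DHCP leases: which MAC held which IP, and during what window.
dhcp_leases = [
    {"ip": "10.1.4.23", "mac": "00:60:08:aa:bb:cc",
     "start": "1996-03-04 18:00:00", "end": "1996-03-05 02:00:00"},
]

def correlate(proxy_hits, dhcp_leases):
    """Return (hit, lease) pairs where the hit falls inside the lease window."""
    matches = []
    for hit in proxy_hits:
        t = parse(hit["time"])
        for lease in dhcp_leases:
            if (lease["ip"] == hit["src_ip"]
                    and parse(lease["start"]) <= t <= parse(lease["end"])):
                matches.append((hit, lease))
    return matches

print(correlate(proxy_hits, dhcp_leases))
```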

And just like that, our suspect pool shrank to fewer than ten individuals. At this point the tempo got very high. It required great discipline to maintain proper pacing, to ensure we captured and cataloged evidence which would help secure a conviction WHILE moving fast enough to stem the bleeding, so to speak.

Again, our task wasn’t merely to “find and mitigate,” we were also expected to:

  • Characterize the breach
  • Immediately identify if any variety of the breach is still occurring, from this point forward
  • Given the breach characteristics, identify when the breach first occurred (historical analysis), on which assets, and how broadly
  • Outline a go-forward mitigation plan

From the perspective of characterizing the breach (aka building the case), once we correlated key FOB event data with the DHCP captures of MAC addresses and aligned the date-time stamps, we were able to reduce the suspect pool to one.
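A simplified, hypothetical sketch of that final correlation; the badge records, MAC address, and timestamps are invented for illustration:

```python
# Hypothetical sketch: intersect badge (FOB) presence windows with the
# machine activity already attributed via DHCP/MAC, collapsing the pool.
from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

badge_events = [  # who was physically in the building, and when
    {"user": "jdoe",   "in": "1996-03-04 21:45:00", "out": "1996-03-04 23:10:00"},
    {"user": "asmith", "in": "1996-03-04 08:00:00", "out": "1996-03-04 17:05:00"},
]

suspect_activity = [  # activity already tied to the suspect machine's MAC
    {"mac": "00:60:08:aa:bb:cc", "time": "1996-03-04 22:21:10"},
]

def present(user_events, t):
    """True if the user was badged in at time t."""
    return any(parse(e["in"]) <= t <= parse(e["out"]) for e in user_events)

def narrow(badge_events, suspect_activity):
    """Return users badged in during every suspect activity window."""
    by_user = {}
    for e in badge_events:
        by_user.setdefault(e["user"], []).append(e)
    times = [parse(a["time"]) for a in suspect_activity]
    return [u for u, evs in by_user.items() if all(present(evs, t) for t in times)]

print(narrow(badge_events, suspect_activity))  # -> ['jdoe']
```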

A small bit of gumshoe work slid the final piece into place. We were able to order the suspect’s detainment and summarize what went wrong.

It’s a little outside the scope of this particular post to detail the post-mortem, but I should mention that the “make sure this doesn’t happen again” plan centered on policies and procedures we defined to help institutionalize security practices.

This is where CSC #3, #4, and #14 come in:

  • (CSC 3) Secure configurations of hardware and software
  • (CSC 4) Continuous vulnerability assessment and remediation
  • (CSC 14) Maintenance, monitoring, and analysis of audit logs

It turned out the “insider” was able to easily circumvent administrative checks and balances because key systems weren’t properly maintained (CSC 3), which introduced vulnerabilities (CSC 4), and because the related activity was logged in an incomplete manner (CSC 14).
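As a generic illustration of the CSC 3 side of that plan (and not Tripwire’s actual implementation), a secure-configuration baseline can be as simple as hashing critical files and flagging drift; the paths below are placeholders:

```python
# Generic CSC 3-style integrity baseline: hash critical files, flag drift.
import hashlib, json, os

CRITICAL_FILES = ["/etc/passwd", "/etc/ssh/sshd_config"]  # placeholder paths

def file_hash(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    """Record a hash for each critical file that exists."""
    return {p: file_hash(p) for p in paths if os.path.exists(p)}

def drift(baseline, current):
    """Files whose contents changed (or disappeared) since the baseline."""
    return [p for p, digest in baseline.items() if current.get(p) != digest]

if __name__ == "__main__":
    baseline = snapshot(CRITICAL_FILES)
    # ... time passes; changes may or may not have occurred ...
    print(json.dumps(drift(baseline, snapshot(CRITICAL_FILES)), indent=2))
```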

The techniques and correlation that had to take place in this case were performed primarily by hand. Had we had Tripwire Log Center, this entire process would have been nearly instantaneous, and more than likely would have detected suspicious patterns before the data was released on public forums, costing the company millions in lost stock value and a great deal of embarrassment.

On a personal note: I spent fully two years stubbornly trying to get into Tripwire, refusing to take no for an answer. I fought hard to become part of Tripwire specifically. Why? I am deeply passionate about what we do because I’ve experienced security incidents without the force multipliers Tripwire provides in its converged suite of security solutions, spanning 14 of the top 20 Critical Security Controls.

Boom. This is what you need.

 

Note: Register today for our upcoming webcast on February 12, 2014 at 10:00 AM PST/1:00 PM EST, How to Restore Trust After a Breach, where Tripwire’s Chief Technology Officer Dwayne Melancon will provide participants with an approach to restore trust in your critical systems after a data breach. 

Also: We invite you to a free trial/demo of Tripwire Log Center, a log intelligence solution that helps detect attacks and breaches early by correlating Tripwire’s foundational security controls.

 


Resources:

The Executive’s Guide to the Top 20 Critical Security Controls

Tripwire has compiled an e-book, titled The Executive’s Guide to the Top 20 Critical Security Controls: Key Takeaways and Improvement Opportunities, which is available for download [registration form required].

 

Definitive Guide to Attack Surface Analytics

Also: Pre-register today for a complimentary hardcopy or e-copy of the forthcoming Definitive Guide™ to Attack Surface Analytics. You will also gain access to exclusive, unpublished content as it becomes available.

 

Title image courtesy of ShutterStock