
Tripwire, as a company, made its bones on the ability to monitor and detect changes to file systems and other devices. For years, companies would purchase our software based upon that well-deserved reputation.

The reasons ran across the board: compliance – we were named in the original Visa CISP before it became PCI; operations – our ability to trace changes back when troubleshooting outages; and security – Gene Kim, one of the company’s founders, wrote the original Tripwire in response to the Morris worm.

But all of that is history.

In the intervening years, thousands of companies have bought and used File Integrity Monitoring (FIM) software for all of these purposes. Some had great success, while others just muddled through. So, what makes a good FIM deployment versus a muddling or bad one?

How change is handled and how it is escalated. In other words: process.

But lately I have been rethinking the change detection process. Change happens, and it happens a lot. Obviously, not all of these changes are made by attackers. Most modifications are made by your standard-issue system administrator going about their daily business.

As an auditor, I learned to dig into the process of change, and it often came down to a single question. I would point at a random server in the datacenter and ask:

“If someone made a change on that server, how would you know, and more importantly, what would you do about it?”

Good deployments of FIM software were well integrated into a change process: requests made, tickets approved, tests done, backup and remediation plans written, changes implemented and post-change debriefs held. The FIM software would detect the changes, map them back to an approved ticket, and the loop was closed.
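To make that concrete, here is a minimal sketch of that reconciliation step. It assumes a simple in-house model; the Change and Ticket types and the reconcile function are illustrative placeholders, not the API of Tripwire or any other FIM product.

    # Match detected changes back to approved change tickets by host and
    # change window; anything left over is the "unexplained" pile.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Change:
        host: str
        path: str
        detected_at: datetime

    @dataclass
    class Ticket:
        ticket_id: str
        host: str
        window_start: datetime
        window_end: datetime

    def reconcile(changes: list[Change], tickets: list[Ticket]):
        """Split detected changes into (matched to a ticket, unexplained)."""
        matched, unexplained = [], []
        for change in changes:
            ticket = next(
                (t for t in tickets
                 if t.host == change.host
                 and t.window_start <= change.detected_at <= t.window_end),
                None,
            )
            if ticket:
                matched.append((change, ticket.ticket_id))  # known, approved change
            else:
                unexplained.append(change)  # escalate: who made this?
        return matched, unexplained

In a good deployment, the unexplained pile is small and gets chased down; in a bad one, it is the whole report.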

Bad deployments had little in the way of process to map change-detecting software to. Changes were made in a drive-by fashion, where spray and pray was the order of the day.

When the FIM software detected these changes, there was no way of mapping them to any known event. The process boiled down to someone shouting over a cube wall, “Hey! Who made these changes?”

In many of these cases, the detecting software bore the brunt of the blame. FIM was too noisy. It was too difficult to use or tune. It took too much time to administer on a daily basis. But seldom would the fingers of blame point back towards the dysfunctional operations team or the woefully understaffed security group.

That self-reflection seldom happened… until the auditors came or a breach occurred. And then, every so often, I would see one of our former customers in the news after a massive breach and think, “We could have detected that.”

Yet one day, one of my good deployments came to me and said: “We are not sure we are getting enough value from our monitoring software. We just can’t do anything with this much change. But if you could help us with breach detection…”

This was the final catalyst that shifted my paradigm away from change detection and into a breach detection mindset. Who cares about detecting planned changes? Mapping them to tickets and process is fine, but it requires a massive amount of work and planning. What if you could ignore all of that and look only for anomalies… for breaches?

That was what we took into meetings with my customer, and the FIM software would not be alone in this endeavor. To deal properly with breach detection and advanced persistent threats, a number of players had to be brought to the table and made to work together.

At its highest conceptual level, FIM became the host-based sensor, detecting any and all changes to the file systems, with a focus on binaries, libraries and anything else that could execute malware. We would then ship this data off to their SIEM, which would correlate it against other network events and a dynamic list of known-bad hashes of malware and hacker tools. Anytime something matched, the bells would go off and people would jump, all of it in real time.
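As a rough illustration of that correlation step, the core logic is little more than hashing each changed file the sensor reports and checking it against the bad-hash feed. The feed format and the alerting hook here are assumptions, not any specific SIEM’s API.

    import hashlib

    def sha256_of(path: str) -> str:
        """Hash a changed file so it can be compared against the bad-hash feed."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    def correlate(changed_paths: list[str], known_bad: set[str]) -> list[str]:
        """Return changed files whose hashes match known malware or hacker tools."""
        return [p for p in changed_paths if sha256_of(p) in known_bad]

In the deployment described above, a hit from this check raised a real-time SIEM alert instead of landing in a daily change report.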

As for the rest of their changes? We still detect and record them, but we no longer alert on or process them. Reports are still generated for change metrics, troubleshooting and forensics, but the administrative workload has decreased significantly.

This shift from change detection to breach detection became the genesis of many of the initiatives you see coming out of Tripwire today.

We have partnered with several companies that offer cloud-based versions of what we did on premises for my customer. Now, the changes we detect can be sent to advanced persistent threat detection services for analysis.

Ultimately, for my customers, good change process or bad, who cares about known change? It is detecting breaches, and the changes malware makes, that we need to be concerned with.

Resources:

Check out Tripwire SecureScan™, a free, cloud-based vulnerability management service for up to 100 Internet Protocol (IP) addresses on internal networks. This new tool makes vulnerability management easily accessible to small and medium-sized businesses that may not have the resources for enterprise-grade security technology – and it detects the Shellshock and Heartbleed vulnerabilities.

The Executive’s Guide to the Top 20 Critical Security Controls

Tripwire has compiled an e-book, titled The Executive’s Guide to the Top 20 Critical Security Controls: Key Takeaways and Improvement Opportunities, which is available for download [registration form required].