
I just ran across this article by Brian Krebs talking about AV vendors beginning to move toward a “whitelist” or “known good” strategy due to, as Brian states, the amount of malware being produced outpacing the amount of good software being produced. Personally I find this kind of amusing (but not in a sarcastic way). I worked for a large AV vendor back in the ’90s, and we had developed exactly this sort of technology for a few clients to address the proliferation of Word macro viruses, but we never went on to fully productize it in the commercially available solutions. I was puzzled by that decision then and have remained so over the years since, hence why I have to chuckle a little at the idea finally seeing the light of day more than 10 years later.

Despite my amusement, though, having thought about this over the last several years while working with technology that lets you detect critical change well before anything gets blacklisted (or whitelisted), I see several problems with the idea ever really working in large decentralized environments. Who is going to manage it across the enterprise with all the inevitable technology silos? Do we really think software vendors and AV vendors can keep up with the constant (and increasing) stream of bad files and programs out there? According to a reference in Brian’s article, Bit9 has indexed 6.2 billion programs available online. That is impressive to be sure, but is it maintainable and sustainable? Not to mention that whitelisting does not take into consideration the configuration files that govern application operation and security; more often than not, damage and security breaches are the result of misconfiguration, not ‘bad’ files, IMHO. A quick sketch of what a whitelist check actually does follows below.
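To make that blind spot concrete, here is a minimal sketch (in Python) of what a hash-based whitelist lookup boils down to. The hard-coded hash set is a placeholder standing in for a vendor-supplied or internally curated index; the point is that the check answers “is this exact file on the list?” and nothing more. A perfectly legitimate, whitelisted application running with a dangerous configuration sails right through.

```python
import hashlib
from pathlib import Path

# Placeholder "known good" index. In a real deployment this would be a
# vendor feed or an internally curated catalog of millions of hashes,
# not a hard-coded set (the hash below is just the SHA-256 of an empty file).
KNOWN_GOOD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_whitelisted(path: Path) -> bool:
    """True only if the file's hash appears in the known-good index."""
    return sha256_of(path) in KNOWN_GOOD_HASHES


# The blind spot: a misconfigured but legitimate binary passes this check,
# and a brand-new legitimate build fails it until the index catches up --
# which is exactly the maintenance burden described above.
```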

Think about this now in the context of virtualized datacenters: what a nightmare it will be to keep the whitelists up to date for all the virtualized infrastructure that goes up and down constantly, the multiple iterations of software releases, the constant patches, and so on. Unlike physical datacenters, where servers tend to come online slowly, virtual datacenters make it far easier to quickly bring up multiple virtual servers to load balance applications, do pre-production testing, and the like, and all of those systems have to be protected while they are online too, right? Securing your virtualized environment means thinking about ways to ease this pain, not increase it. Controlling change and configuration at the hypervisor and virtualization management layers is one clear way to decrease some of that pain.

Any competent security group will have a defense-in-depth strategy, and one of those defenses had better be something that can detect critical file change *and* configuration change as it happens. That should be your early warning system, assuming you have the solution tuned to your environment and to the systems and applications that host your business-critical processes. Relying on malicious code detectors and perhaps “whitelists” alone would be very risky; they should be part of your defense in depth, but not a first-line strategy.
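For what it’s worth, here is a minimal sketch of the baseline-and-compare idea behind that kind of early warning, assuming a hypothetical list of monitored paths and a local JSON baseline file. A real deployment would secure and version the baseline, cover far more than file hashes (settings, permissions, hypervisor configuration), and report centrally, but the core loop is this simple: record known-good state, then alert on any drift from it.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical paths worth watching: critical binaries plus the
# configuration files that govern how they behave.
MONITORED_PATHS = [Path("/etc/ssh/sshd_config"), Path("/usr/sbin/sshd")]
BASELINE_FILE = Path("baseline.json")


def hash_file(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def snapshot() -> dict:
    """Hash every monitored path that currently exists."""
    return {str(p): hash_file(p) for p in MONITORED_PATHS if p.exists()}


def save_baseline() -> None:
    """Record the current state as the known-good baseline."""
    BASELINE_FILE.write_text(json.dumps(snapshot(), indent=2))


def detect_changes() -> list:
    """Return monitored paths that differ from, or are missing since, the baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    current = snapshot()
    changed = [p for p, digest in current.items() if baseline.get(p) != digest]
    missing = [p for p in baseline if p not in current]
    return changed + missing


if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        save_baseline()
    else:
        for path in detect_changes():
            print(f"ALERT: {path} differs from baseline")
```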