Last week, working with a prospect on a proof of concept, we got into a pretty interesting discussion on agentless versus agent-based technologies. The prospect's technical staff were evaluating us and a competing vendor to help tighten their security policy and improve their ability to detect changes in the environment for audit and control purposes.
As all competitors do, the other technology company involved had sold hard on their apparent strength: the fact that they use agentless technology instead of Tripwire's agent-based approach.
At the core of their argument were the following points:
Significantly faster to implement
Multiple agents on servers overload the system
Scales more easily to cover large numbers of assets
Interesting, but let's break it down a bit to find the truth.
Significantly faster to implement – Most decent IT departments now have a way to centrally distribute software. Agents install silently, without the need for a reboot, and can gather data significantly faster than an agentless technology. I've never had a customer complain about how long our agents take to deploy.
Multiple agents on servers overload the system – This is simply FUD. Tripwire has been designing agents since 1992 (when Gene Kim was in short pants), so our ability to do everything we need without adversely impacting the server rests on 16 years of experience. Our real-time agent consumes at most around 2% CPU, and our collection agent takes less than a minute to gather all the configurations for a CIS policy on a half-decent server. Much of the time the agent is dormant, simply listening for changes while using less than 1% CPU.
Scales more easily to cover large numbers of assets – This is laughable. Imagine scanning 1,000 machines over the network for server configurations to run against a policy like CIS. CIS on a Windows 2003 box runs around 170 tests; some check the same object, such as RSoP, but most examine individual files or registry keys. So each compliance scan hammers your network and servers for tens of thousands of configuration items. We take a different approach: we cache the last known good state on the agent and transfer only changes to that state up to our enterprise server. This drastically reduces network load and brings a huge added advantage: we can scan continuously rather than monthly or weekly. Using an agent, we can identify deviations from the compliant state in seconds rather than weeks, and that mitigates a whole boatload of risk. But configuration assessment is not the only piece; remember, the customer also wants to monitor for changes to configurations and binary files.
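To make the cache-and-diff idea concrete, here is a minimal sketch of how an agent might hash monitored files against a locally cached baseline and report only the deltas. This is purely illustrative, not Tripwire's actual implementation; the function names and the choice of SHA-256 are my own assumptions.

```python
import hashlib
import os


def file_hash(path):
    """Hash a file's contents so a change can be detected without shipping the file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def scan_changes(paths, baseline):
    """Compare monitored files against the cached last-known-good state.

    Returns only the deltas (added, modified, or deleted files), which is all
    the agent would need to send upstream; unchanged files generate no traffic.
    The local baseline cache is updated in place.
    """
    changes = {}
    for path in paths:
        current = file_hash(path) if os.path.exists(path) else None  # None = deleted
        if baseline.get(path) != current:
            changes[path] = current
            baseline[path] = current
    return changes
```

On a quiet server, repeated scans return an empty delta, which is why continuous monitoring stays cheap: the expensive full transfer happens once, when the baseline is first built.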
This led to a rather interesting discovery, because I just couldn't figure out how the competitor could hash thousands of files to tell you what had changed without an agent. So I asked the prospect what the competitor had told him about change audit, and apparently they install microcode every time a check is run and then delete it after the check has finished. Err, hang on: so they install code EVERY TIME. They know they need an agent to capture change data, but instead of leaving it installed and getting the advantages of a resident agent, they constantly install and delete code on the server. Even worse, to accomplish this they need to store a privileged account in their centralized management system. That's just crazy.
Using agents, provided they are written well and perform as expected, is a far better approach to configuration assessment and change audit. We don't unduly impact the network or the monitored device, we collect data more quickly and efficiently, we are hugely scalable, and we can be deployed just as quickly as agentless technology, with the added advantages of being more configurable, mitigating risk by scanning continuously, and not flooding the network with huge amounts of data.