According to CIS, “Organizations that do not scan for vulnerabilities and proactively address discovered flaws face a significant likelihood of having their computer systems compromised.” While vulnerability management (VM) isn’t new, I’ve seen it evolve a lot over my 22 years in the industry. Here are some big trends:
Assets are Diversifying. Fast.
The idea of an asset has changed and grown over the years. Back in the ‘90s, it was a PC or a server. Then came laptops and mobile devices, and now we have containers, thermostats, watches and more. These assets may not be running a full operating system, and they may all have different interfaces. So how do you look for vulnerabilities across such different assets? Traditionally, you had three options: install an agent, scan the device remotely and analyze the traffic responses, or use credentials to log into the device. While these are still valid techniques, they do not always work on IoT devices, so now we also look at API calls and management software queries to determine the state of assets.
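The mapping from asset type to assessment technique can be sketched in a few lines. This is a hypothetical illustration, not any product's actual logic; the field names (`full_os`, `has_agent_support`, `management_api`, and so on) are made up for the example.

```python
# Hypothetical sketch: choosing an assessment technique per asset class.
# All field and method names here are illustrative.

def pick_assessment_method(asset: dict) -> str:
    """Return the most suitable vulnerability-assessment technique."""
    if asset.get("full_os") and asset.get("has_agent_support"):
        return "agent"
    if asset.get("full_os") and asset.get("credentials_available"):
        return "credentialed_scan"
    if asset.get("management_api"):
        # IoT devices often expose their state only through an API or
        # management console rather than a login shell.
        return "api_query"
    return "unauthenticated_remote_scan"

server = {"full_os": True, "has_agent_support": True}
thermostat = {"management_api": "https://iot.example/api"}

print(pick_assessment_method(server))      # agent
print(pick_assessment_method(thermostat))  # api_query
```

The point is simply that one technique no longer fits every asset; a real solution would key off a far richer asset inventory than this toy dictionary.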
Agents Making a Comeback
Where do agents make the most sense, and where is agentless still the better method? In a sense, agents have come back around. Years ago, they required large amounts of disk space and even more memory. Many of us remember AV agents slowing systems to a crawl. Now they can sit on a device without eating up nearly as many resources. There are two places where they’re especially useful:
- Critical servers: You can set up agents on critical servers to do assessments in almost real-time so you can find out right away when any changes happen or any new vulnerabilities are introduced.
- Laptops: Since these aren’t guaranteed to be on the network during a scan, having an agent on these devices means you can scan them anytime. This is especially helpful with remote workforces.
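The two cases above amount to a simple deployment policy, sketched below. This is an illustrative example only; the asset fields are hypothetical.

```python
# Hypothetical policy sketch for where to deploy agents, following the two
# cases above: critical servers (near-real-time assessment) and laptops
# (intermittently on the network). Field names are illustrative.

def should_install_agent(asset: dict) -> bool:
    if asset.get("role") == "critical_server":
        return True  # near-real-time detection of changes and new vulnerabilities
    if asset.get("type") == "laptop":
        return True  # may be off the network when a scheduled scan runs
    return False

print(should_install_agent({"role": "critical_server"}))  # True
print(should_install_agent({"type": "desktop", "role": "workstation"}))  # False
```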
A Case for Credentials
In the early days of vulnerability management, the focus was largely on remote exploitation and remote scanning without credentials. Those unauthenticated remote scans are going away for the most part: they are still useful, but only for a small subset of issues, like protocol vulnerabilities. Scanning for anything else this way leads to too many false positives or merely “potential” vulnerabilities. Credentials open up a lot more possibilities. You can log on with credentials to check files, registry keys, RPM versions, etc. We also used to see a lot of remote checks that worked by actually exploiting the device, which is frowned upon these days because you don’t want to bring anything down. So checking in a way that’s safe for the device—that can be done in production—is important.
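At its core, a credentialed check like the RPM example above boils down to comparing an installed version (gathered, say, over SSH with `rpm -q`) against the version that fixes a flaw. A minimal sketch, with made-up package versions and assuming simple dotted numeric version strings (real RPM comparison also handles epochs and release tags):

```python
# Minimal sketch of an authenticated version check. Versions here are
# invented, and the parser assumes plain dotted numeric strings; real
# RPM version comparison is considerably more involved.

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed_in: str) -> bool:
    """True if the installed version predates the version that fixes the flaw."""
    return parse_version(installed) < parse_version(fixed_in)

print(is_vulnerable("2.4.1", "2.4.6"))   # True: needs patching
print(is_vulnerable("2.4.6", "2.4.6"))   # False: already fixed
print(is_vulnerable("2.10.0", "2.4.6"))  # False: newer than the fix
```

Note the last case: comparing version strings lexically would wrongly flag 2.10.0 as older than 2.4.6, which is why the sketch compares numeric tuples.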
Relief from False Positives
Luckily, authenticated and agent-based checks provide much more accurate results. You quickly lose credibility with your system admins if you hand them a long list of vulnerabilities that end up being false positives. False-false positives are another issue: something that looks like a false positive at first turns out to be a real vulnerability. The research team at Tripwire finds that over 90 percent of initial false positives are real vulnerabilities. It helps to have a Vulnerability Management solution that shows you details of the check that was run and the response from the host. It also helps to have access to a research team that can answer any questions. You can listen to the podcast here and learn more about Tripwire’s Vulnerability Management solutions here.
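Keeping the check details and the host's response alongside each finding is what makes false-positive triage possible. A sketch of such a result record, with entirely hypothetical field names and placeholder data:

```python
# Sketch of a finding that preserves the evidence needed to triage
# false positives: what check ran and what the host actually returned.
# All names and values below are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class CheckResult:
    vuln_id: str        # placeholder identifier, not a real advisory
    host: str
    check_detail: str   # e.g., the command run or registry key queried
    host_response: str  # raw evidence returned by the host
    vulnerable: bool

    def evidence(self) -> str:
        """One-line summary an admin can verify by hand."""
        return f"{self.check_detail} -> {self.host_response}"

r = CheckResult(
    vuln_id="EXAMPLE-0001",
    host="10.0.0.5",
    check_detail="rpm -q openssl",
    host_response="openssl-1.1.1k-1.el8",
    vulnerable=True,
)
print(r.evidence())  # rpm -q openssl -> openssl-1.1.1k-1.el8
```

With the raw evidence attached, an admin can rerun the same command themselves and confirm or dispute the finding instead of taking the scanner's word for it.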