Hi, I’m your friend and security researcher, Pete Herzog. You might know me from other public service announcements such as the widely anticipated, upcoming workshop Secrets of Security, and critic’s choice award winners: Teaching Your Teen to Hack Police Cars, and Help! My Monkey is Posting Pictures to Facebook!
But I’m here today to take a moment and talk to you about the pain of neglect, isolation, abuse, and infection, better known as “vulnerability management”. In many ways vulnerability management can be part of a healthy system and overall good security. But there are many important differences between vulnerability management and security that you should know about:
It’s Like Managing Tunnels In an Ant Nest
You can’t manage vulnerabilities in closed software any more than you can manage tunnel construction in an ant nest. That’s because there are the ones you know, the ones you don’t know, and the ones that somebody knows about but doesn’t want you to know about. That last kind are the vulnerabilities found by somebody who took the time to deconstruct, decompile, analyze, and emulate the software to find new attack surfaces that you didn’t even know were surfaces.
So because you can’t really get a handle on what vulnerabilities there are, you can’t intercept them all through patching, no matter how fast your time to patch is. And as long as developers release software and entire operating systems as functional for all values of “commercially marketable” (which means “not quite done yet”), with the intention that you are buying a snapshot of gradually corrected software rather than a finished application, you’ll always need to work with vulnerable software. But that doesn’t mean you have to buy into the vulnerabilities part.
At least not the part where you worry about security. Think of it this way: your wallet isn’t secure either, so you make it secure. You only upgrade your wallet when you want to. You can do the same with software. But, just like you can’t leave your vulnerable wallet around outside your direct control without the contents disappearing, you can’t do that with your applications either.
It’s How Flame Proof and Flame Resistant Are the Same If There Are No Flames
Managing vulnerabilities will not get you security. Especially since patched vulnerabilities are a subset of found vulnerabilities, which far too many assume adds up to having security.
You scan for vulnerabilities so you can patch those vulnerabilities and get closer to security. But you can’t. Because you’re only ever working with a subset. Just like dogs and cats are a subset of “house pets”, which is a subset of “domesticated animals”.
But if you wanted to have all the domesticated animals on your new ark, you couldn’t do it by only looking at house pets, as that would exclude goats, cows, horses, yetis, and many animals you maybe don’t know about or didn’t consider. So when scanning for vulnerabilities you can only, at best, find the vulnerabilities the scanner knows about. Then, after false positives and false negatives, there are far fewer vulnerabilities it can report.
So your vulnerability assessments can never cover the full superset; they exclude whatever you don’t know about or didn’t consider. The vulnerabilities you fix can at best equal the number of known vulnerabilities, and outside a perfect, error-free laboratory simulation they never will. And if your vulnerability management depends on patching those vulnerabilities, someone first has to make and release a fix (unless you’re capable of coding your own patches, but then you’re probably so busy you’re not reading this), so you will have even fewer that you can fix.
So the vulnerabilities for which a patch actually exists are an even smaller subset of the known vulnerabilities, and the ones you actually patch are smaller still. Therefore, vulnerability management can never reach a level that is the same as having security. Some say it will get you closer to security and make you less of a target than those around you, but that’s like saying there’s no need to be flame proof as long as there are others who are more flammable than you are.
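The shrinking-subset argument above can be sketched with sets. This is a toy model: the numbers are invented purely for illustration, and only the ordering of the sets matters.

```python
# Toy model of the subset chain: all > known > scannable > found > patchable.
all_vulns     = set(range(1000))           # everything that exists (unknowable in practice)
known_vulns   = set(range(600))            # the ones anyone has published
scanner_vulns = set(range(450))            # the ones your scanner has signatures for
found_vulns   = set(range(400)) - {7, 13}  # scan results, minus errors
patchable     = set(range(300)) - {7, 13}  # found vulnerabilities a vendor fix exists for

# Each layer is strictly contained in the one above it.
assert patchable <= found_vulns <= scanner_vulns <= known_vulns <= all_vulns

# Even a perfect patching program only ever covers the smallest set,
# and you can't actually measure the denominator.
coverage = len(patchable) / len(all_vulns)
print(f"best-case coverage: {coverage:.0%}")  # prints: best-case coverage: 30%
```

The point of the sketch is simply that every step of the scan-and-patch pipeline can only shrink the set you started with, never grow it back toward the unknowable total.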
It Can Feel a Lot Like Doing Dishes
Vulnerability management is an endless race that can’t be won. It’s more likely the developer will stop supporting the software long before it ever becomes completely vulnerability free. You will always be chasing down vulnerabilities, racing villains against the latest patch release, and putting out user-fanned fires. So unless you enjoy that, it will feel a lot like doing dishes at a fast food restaurant (meaning: a crappy, routine job where your co-workers are often demanding and impatient).
Or, if you get off on it as a heroic routine, then it can be as thrilling as being a fireman in Matches Town during the summer magnifying glass and mirror parade. Which is why wanting your security to be a process is a bad idea. Because you may want better security, but you’ll be too busy with problems to be making better security. And if that’s not your thing, then vulnerability management might be as repellent to you as the smell of burnt hair (which, incidentally, is an actual perfume in Matches Town).
So security as a process is busy busy busy. Like bees. Like beavers. Like motherhood. Because it’s not just a process. Singular. It’s not “the security process” like you do it and get good at it. It’s not like the adult film star process which pretty much gets you from film star to adult film star by doing just one thing on film.
No, there are many security processes, from patching to back-ups to disaster recovery to testing to incident response to system hardening to… you get the idea. And there’s always more to deal with because, you know, time is linear and makes sure things get old, brittle, disorderly, and worn. So if you go into vulnerability management, you do it because you like the pain of being really busy and writing reports to show how busy you are.
You Can Do A Little More To Do A Lot Less
So now you know that vulnerability management is just one part of the security process. But many don’t know that it’s not the only way of protecting software; it’s just the second-most tedious way.
Which means you can actually choose to do other things that have the same results but without the pain. The problem is that many HAVE TO do vulnerability management as per some regulation or policy.
This means they should just add these other things to their vulnerability management process to actually reduce the pain. (For those interested in trivia, the first most tedious way involves a lot of copy/pasting formatted text between spreadsheets while braiding short hair and standing in line at the DMV.)
How can adding more stuff to an already busy vulnerability management process make the job less busy and stressful? Sounds like a fad diet where eating more of something makes you lose weight, right? Well, it’s possible because of how vulnerability exploitation works.
To exploit a vulnerability you need to target that vulnerability and then you need to access that vulnerability. In movie cop speak, “That’s the motive and opportunity for a perp.” So in offline space what happens with vulnerable stuff? You protect it. You move it to a safe place to keep eyes and fingers off of it. Just like your wallet. This is called adding operational controls.
I know the word “controls” gets used a lot, and especially in the infosec space, controls are thought of as processes. In that case, “controls” makes people think of the things they should be doing to improve security, like security awareness trainings or vulnerability management or patch management. But what I’m talking about is operational controls: the things you apply, not the things you do, to assure a particular type of protection.
So back in the offline world, applying the privacy control would move your wallet out of sight and into your front pocket to reduce opportunity. Adding the authentication control would put the wallet somewhere that requires a key to open, like a safe. These same controls exist in cyberspace as well and can likewise be applied to control how something is accessed.
This matters because when you manage operational controls as part of vulnerability management, you can actually take yourself out of the rat race of patch vs. exploit. That’s huge! That’s a way to make yourself a lot less busy. By determining which operational controls are missing, poorly implemented, or not properly functioning, you can tell if something needs to be patched or not, and not just now but, like, ya know, forever. For real. (It’s OSSTMM research stuff in case you want to read more about it or come hear me explain it in Richmond, VA and get free candy.)
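The idea above can be sketched as a simple decision rule: before racing to patch, check which operational controls already stand between an attacker and the vulnerable thing. This is a hypothetical sketch; the control names are loosely inspired by the OSSTMM-style categories mentioned here, and the three-tier urgency scoring is entirely my own invention, not a prescribed method.

```python
# Controls we'd want covering an exposed, vulnerable service (invented list).
REQUIRED_CONTROLS = {"authentication", "access_filter", "privacy"}

def patch_urgency(controls_in_place: set) -> str:
    """Return how urgently a vulnerable service needs its patch,
    based on which operational controls actually protect it."""
    missing = REQUIRED_CONTROLS - controls_in_place
    if not missing:
        # Controls remove the opportunity, so patching becomes a choice.
        return "patch on your own schedule"
    if len(missing) < len(REQUIRED_CONTROLS):
        return "patch soon; partial controls"
    return "patch now; fully exposed"

print(patch_urgency({"authentication", "access_filter", "privacy"}))
# prints: patch on your own schedule
print(patch_urgency(set()))
# prints: patch now; fully exposed
```

The design point is that the question shifts from “is there a patch yet?” to “is there an opportunity to exploit?”, which is something you can manage on your own schedule.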
Filling a Hole Has Never Been So Dirty
We think vulnerability management is straightforward: there’s a hole and you fill it and the hole is gone. That’s a great and simple visual. Makes it seem so logical, right?
But that’s like saying space travel is just riding an explosion up until you’re in space. Or like saying baseball is just hitting a ball with a stick and running around a clearly marked rhomboid path until you get back to where you started from.
As you can see, these make for great and easy visuals, but there’s actually much more nuance and complexity behind these things that makes them a lot harder to execute properly than it seems. Well, maybe not baseball, that might actually be just like that… But vulnerability management is much harder, and it’s been cursed with having this really simple idea. I suspect there’s an evil advertising agency out there behind this. Or just a normal advertising agency. Same thing.
To be realistic, let’s look at a small office of 50 people, which means a huge variety of software with hundreds of thousands of interacting components, applications, and systems across multiple channels, from people to wireless to telephony, all designed to carry out specific actions without interruption. These all need to be checked against an ever-growing list of known vulnerabilities while they’re running, and they may be actively protected by security software that prevents anyone from checking whether they’re vulnerable.
Then you need to determine the ones that are really vulnerable and not just false positives, and then what? Patch, right? Remember the simple visual? You’ve got a hole so you fill it. But patching isn’t filling a hole. Or covering one. Or stitching one. It’s more like gene therapy. Which, incidentally, has the same simplistic-visual problem, so maybe that doesn’t help at all.
Anyway, you are actually adding something and changing something to correct the failure in an existing something. Sure, it’s not as dangerous as gene therapy but you are still changing what was there which means it is no longer the same thing it was before. Which means, like gene therapy, if something goes wrong then it can range from a hiccup to a meltdown. So you should be testing it first on non-critical systems but, there’s that time problem again, that thing you don’t really have, so should you skip it and just call it a risk decision?
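The “test it first on non-critical systems” idea above is often done as a ringed rollout: the patch moves through rings of increasingly critical systems and stops the moment anything breaks. A minimal sketch, assuming invented ring names and a caller-supplied health check; this is an illustration of the concept, not anyone’s actual deployment tooling.

```python
# Rings ordered from least to most critical (names are invented).
ROLLOUT_RINGS = ["lab", "non_critical", "business", "critical"]

def roll_out(patch: str, health_check) -> list:
    """Apply `patch` ring by ring, stopping at the first failed
    health check. Returns the rings the patch actually reached."""
    deployed = []
    for ring in ROLLOUT_RINGS:
        if not health_check(patch, ring):
            break  # hiccup caught early, meltdown averted
        deployed.append(ring)
    return deployed

# Example: the patch breaks something at the "business" ring,
# so it never touches the critical systems.
print(roll_out("KB-1234", lambda patch, ring: ring != "business"))
# prints: ['lab', 'non_critical']
```

The trade the author describes is exactly this: the rings buy safety, but each one costs time you may not have, which is where the “risk decision” temptation comes from.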
Which is why sometimes you don’t patch. Or you don’t test. Or you just leave the whole thing on auto-update and know that since so many other people do it that if something goes wrong it will affect many more companies than yours and you can just blame the company for sending a bad patch.
Which, incidentally, won’t get you back any losses but finger pointing feels good. Unless you own the company and then the damage will feel personal and finger pointing won’t relieve any pain and suffering.
But you can always work towards getting all your controls in place so you can patch when you want to and not because you have to before the thieves get the opportunity. Just throwing that out there.
Then again maybe you’re too busy for all that and hire a consultancy to do that for you. So maybe they will run the patches on auto-update because they also can’t do it all and they have even less skin in the game and a whole lot more places they point fingers, complete with canned presentations that explain with graphs who should be blamed.
Afterward they run a vulnerability scanner to find the weaknesses that the auto-update patching doesn’t handle, check passwords, and verify that all the systems are properly set for auto-updating. While that’s all set up, they work on adding new graphs to their “Who to Blame” presentation, which now includes a table on “If a hacker really wants in they will get in…”, which serves no purpose other than to point fingers away. Because it won’t get you more secure.
So really, a patch isn’t just filling a hole. It’s more complicated and time-consuming than that to do it right. It’s more time and resources than most have. And if you let it overwhelm you, then you need to rig the game to keep up. Or else you will lose. In the end, playing dirty is the only way most vulnerability managers can keep their heads above water. But let’s just call that a risk decision.
It’s Not Some Kind of Joke
Vulnerability management can be a pain. It can be a huge pain. But it is necessary. It just isn’t necessary the way the hype says it’s necessary. The hype way (scan, patch, repeat) is the way you lose eventually: could be sooner, could be later, or it could be a little loss each day that becomes a big loss in aggregate. So it’s a serious issue. You need to be clear that vulnerability management is not security. Worse, if you implement patches without proper testing, it may actually hurt your security. So this is not some kind of joke.
But this is:
A vulnerability scanner walks into a crowded bar. It orders a drink. The bartender says nothing. It orders again. But again, nothing. It starts poking the bartender hard on the chest but the bartender ignores it and instead serves the guy who shows up next to it. So it walks through the thick crowds and starts harassing the DJ, the cocktail waitress, the piano player, and the guy selling weed in the corner. But they all ignore it. So it leaves and goes back to the guy who sent it in. “What happened?” he asks. “Nothing,” it replies. “Place was dead.”
A vulnerability manager walks into a tiki bar and scans the room. “You not gonna ID me?” he asks the bartender. But before the bartender can answer, he goes to the jukebox. “You need to update these records.” Bartender says, “We don’t use that jukebox.” “Doesn’t matter,” he replies then points at the ceiling. “And these light bulbs need to be replaced.” Bartender tells him they work fine but they keep them off during the day. But he quickly replies, “Doesn’t matter. They’re weak. Someone could break them.” The bartender points at the open flames on the tiki torches around the room, “What about them? Fire hazard, right?” The vulnerability manager just shakes his head and says, “Doesn’t matter. That’s out of scope.”
And don’t forget to see me in Richmond, VA from June 4 – 6 at RVAsec: join my workshop there and see my presentation. It’s my only show this year planned in the USA!
About the Author: Pete is the co-founder and Managing Director of the security research organization ISECOM. Pete creates and develops advanced methodologies, guidelines, learning materials, tools, and certification exams in security, trust, and anti-fraud. His most well known works are the international security analysis and testing standard OSSTMM (Open Source Security Testing Methodology Manual), Hacker Highschool, the Bad People Project, the Home Security Guidelines, the Secure Programming Guidelines, the RAV Attack Surface metrics, and the all new Security Awareness Learning Tactics guidelines. Pete’s research is referenced in over a thousand academic and industry papers and books, applied internationally at all levels of business and government, and closely followed by NATO, NIST, FBI, NASA, NSA, all branches of armed forces, and the White House. He also provides security and trust analysis as a consultant for next generation R&D technologies and processes that have no formal means of testing yet. Additionally, Pete’s background includes epidemiology field team work and support for ATSDR and the CDC, and research in neuroscience, child and teen development, and psychology which he applies to the projects he’s involved in. He is an avid hacker, speaker, and teacher, having taught for years in Barcelona at La Salle University URL and ESADE MBA Business School.
Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.