Every year, many new infosec professionals join the workforce having been schooled in product configurations, best practices, and vendor whitepapers as to what security is. Everything they know about protection has been carefully cultivated through marketing relationships with colleges and training centers, with insights specially prepared and spoon-fed through sponsored news headlines.
What does it mean?
After the third part of our article series, "Is Your Security Hurting Your Security," came out, we received many questions and comments about what it all means. Questions like: What products do you need to protect something with a complicated attack surface like a mobile phone? What new tech exists to get us out of the patch rat race? How do you balance security if you don't have the money or resources? Are you a robot?
If you haven’t seen the articles, check here:
Part 1: Three Ways Your Security is Actually Hurting Your Security
Part 2: Unbalanced Security is Increasing Your Attack Surface
Part 3: Security Solutions that Fight for the Same Resources
I think this kind of marketing-based security has completely changed the landscape of what most people expect from security and, in this interconnected world, what we all will get for security, and not in a good way.
Now there are companies that are marketing responsibly and doing better than most to match their products to the actual security landscape. Yet, for the most part, you’ll see vendors who are manufacturing a particular landscape, painting an illusion, and then showing how their products make that imaginary landscape more secure.
Now you might think, who will fall for that? Or maybe you think, the cybersecurity landscape is so bad, who would need to make anything up? Well, that's marketing for you. It's called market differentiation, and it ensures a company can compete against other products of a similar nature while keeping its own customers buying more of the products they already have.
A good example of market differentiation in action is the use of segmented classifications. This lets a vendor sell more of the same type of products by segmenting the types of threats by what they are (classification) rather than what they do (operational end effect).
In the consumer space which many are familiar with, we see this:
“How do you prevent viruses?”
“With anti-virus software”
“How do you prevent spyware?”
“With anti-spyware software”
“How do you prevent hackers hiding files on my computer?”
“For that buy our antispywareware and set it to high heuristics. That will give you total protection. But hackers will get in if they really want to.”
In these marketing-fueled imaginary landscapes you may be shown a world where you need different software for different ways attackers can manipulate files on your computers. That is despite the fact that there are well documented and very finite ways a file can be changed. And there are well defined and specific integrity controls to counter file manipulation.
But in this marketer’s dream world you are prompted by one company to buy anti-virus, by another to buy anti-spyware, and so on, when what you really need is file integrity maintenance. Now, setting aside the logistics of home users implementing integrity controls and their inability or unwillingness to whitelist, a single piece of software could still cover all of those types of file manipulation, even if it still used a blacklist. And we know because such software exists! But that just spawns even newer dream landscapes so there’s a place for the next product.
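The file integrity maintenance mentioned above boils down to one simple mechanism, regardless of whether the manipulation gets marketed as a "virus," "spyware," or anything else. Here is a minimal sketch of that idea, assuming a simple hash-baseline approach (the function names are illustrative, not from any particular product):

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted hash for each file: the 'known good' state."""
    return {p: hash_file(p) for p in paths}

def check_integrity(baseline):
    """Re-hash each file and report anything that changed or vanished.
    It does not matter *what* changed the file; any manipulation
    (virus, spyware, hidden data) shows up the same way."""
    changed = []
    for path, known in baseline.items():
        if not os.path.exists(path) or hash_file(path) != known:
            changed.append(path)
    return changed
```

One routine covers every classification of file tampering at once, which is precisely why a product catalog segmented by threat name is a marketing construct rather than a technical necessity.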
I don’t think most security professionals who are aware of this even have a problem with using those same marketing tactics to sell security. After all, every industry has marketing tricks like this. But does every industry integrate its marketing stories into regulations and universities?
At medical school do they teach you to prescribe maximum strength headache medicine to middle-aged women who are juggling careers and parenthood as the commercials suggest? Or do they actually instruct dental students which gum to recommend for their patients who chew gum as the commercials suggest?
Well, they do that in the security industry.
The worst thing in all of this is that these made-up landscapes have gotten so much marketing hype that many people believe them to be real and true. Many, including security professionals, believe that the scenarios, products, and processes have become default requirements for every computer in nearly every situation.
That is how they wormed their way into regulatory standards (aka “product catalogs”) and are taught in universities. From there they became the basis, the foundation, of further infosec research, thereby becoming what information security is supposed to mean. The hype in infosec has become the meaning.
This has become industry-wide. Here are some examples:
The meaning of firewalls was originally to reduce the human resources (and errors) needed for hardening systems by creating a central choke point. This was sometimes a worthwhile trade-off, where having a single point of failure was preferable to the human errors (and resources) involved in hardening many systems correctly. But then came the hype to push each technological advance in the firewall.
Now we’re sold the firewall as a default necessity (and regulatory requirement), and you’re told it’s CRAZY not to have one, despite today’s roll-out and virtualization technology, which allows system hardening and mass changes from a single operations point while eliminating the single point of failure. Of course, one can say that brings a new attack-surface challenge of its own… to which another can say that challenge has already been solved with network segmentation and server-side, controlled authentication, and it doesn’t have to cost any more cash to add or change routing info.
Then there’s the Web Application Firewall, which originally took the idea of the firewall, combined it with the IDS, and applied it at a different layer to web apps. This worked because web technology had become so complicated, and spun in so many different directions by competing tech, that WAFs gave a feeling of control back to the admins who have to wrestle with web technologies and a regular storm of patches.
But through hype it became the solution for authenticating good traffic versus bad as it enters and leaves the website. Which it kind of does, despite the fact that it actually does so poorly and at the expense of increasing the network’s attack surface. Yet there it is, in neon day-glo, in LEGAL, MANDATORY, REGULATORY frameworks like HIPAA, SOX, PCI….
Another standard we’re stuck with due to the hype is antivirus, which went from a virus clean-up tool using signature blacklists to a real-time virus clean-up tool still using signature blacklists. To be fair, AV also does other things now: it locks down certain areas of the system by asking users if they’re sure they want to click that, reports how many bugs matched signatures, and uses heuristics to assume more things are viruses because a blacklisted signature is similar but not exact. That is why heuristics often leads to the occasional newly updated system file getting quarantined and thereby crashing the system.
But heuristics does detect more malware than blacklist signatures alone. And it does this all by just increasing the attack surface a little (a lot if you allow auto-updates) and throttling system speed and memory (the marketing way of saying “slowing it down”), because it competes for the same resources as what it’s protecting.
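The trade-off between exact signatures and heuristics described above can be shown in a toy sketch. This is not how any real AV engine works internally; the byte signatures and the similarity threshold are made up for illustration:

```python
# A hypothetical blacklist of known-bad byte sequences (made up for this example).
KNOWN_BAD = [b"\xde\xad\xbe\xef\x01\x02\x03\x04"]

def signature_match(data):
    """Exact blacklist: flags only byte sequences seen before.
    A one-byte variant of known malware slips right past."""
    return any(sig in data for sig in KNOWN_BAD)

def heuristic_match(data, threshold=0.75):
    """Fuzzy matching: flags anything sufficiently *similar* to a
    known signature. Catches variants the exact match misses, but a
    benign file that happens to resemble a signature (say, a freshly
    updated system file) gets flagged too."""
    for sig in KNOWN_BAD:
        for i in range(len(data) - len(sig) + 1):
            window = data[i:i + len(sig)]
            same = sum(a == b for a, b in zip(window, sig))
            if same / len(sig) >= threshold:
                return True
    return False
```

A sample with the exact signature trips both checks; a variant with one byte changed evades `signature_match` but is caught (at 7/8 similarity) by `heuristic_match`, and lowering the threshold widens the net at the cost of more false quarantines. That is the whole heuristics bargain in miniature.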
Then there’s network monitoring, where the struggle to turn network life into big data for threat analysis has succeeded immensely, despite inheriting the combined problems of the firewall’s single point of failure, AV’s “throttling,” and the WAF’s poor ability to authenticate good traffic versus bad. It does this while being about as efficient as one can be when asking humans to react in a timely manner to things moving at near the speed of light.
What most security implementers don’t think about is that infosec monitoring creates a new, out-of-band attack surface: the humans. So, just to repeat for clarity, with network monitoring you’re expecting humans to react in a timely manner to packets that travel near the speed of light. Even if it’s only to verify and react to alarms sussed out by algorithms (which, by the way, makes the human the tool), it still means you are purposely introducing human error into the mix. So the more you scale, the more you fail.
So what does it all mean? It means we need to stop addressing security in terms of “what works” and instead focus on knowing “how it works” for our infrastructures. We shouldn’t be getting security solutions because of their name but because of what they do and how they work.
We need to find where the interactions are across the network, the systems, and the people, to make sure they are separated or controlled. We need to remember that penetration testers can only tell us whether some of the security is working from a particular perspective. What you need is to know where all of the interactions are, because that is what shows you that your attack surface includes, say, the VPN authentication to the maintenance firm you hired. A pen test won’t show you that today, but it could, if companies asked pen testers to map their attack surface in their reports, as shown and required in the OSSTMM.
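Finding interactions sounds abstract, but at the network layer it starts with something very concrete: enumerating the points where something can connect at all. Here is a minimal sketch of probing one kind of interaction point, open TCP ports, on a host you are authorized to examine (the function name is illustrative; a real interaction map per the OSSTMM covers far more than ports):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Probe each TCP port on a host and return those that accept
    connections. Every open port is one interactive point in the
    attack surface, whether or not any product 'covers' it."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

The point is not the port scan itself; it is that a list of actual interaction points, extended to systems and people, tells you what must be separated or controlled, independent of any vendor's product category.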
We need to stop sharing our likely-already-out-of-date network topology maps with infosec consultants and pen testers, and instead hand them the latest business consulting reports on efficiency, vendors, and partnerships to help them grasp and secure the things that interact across channels in our organizations. Addressing security this way will get you much further than the current regulatory product catalog will, and for much less money.
As a bonus, this is the kind of thing you need to do to address BYOD, insider threats, phishing, social engineering, fraud, deception, and so many of the other problems that seem too difficult for companies even if they have money to burn.
The truth is that a whole generation of infosec professionals raised on a steady diet of vendor whitepapers and product hype will not save us from cybercrime. Company executives need to be smarter in what they ask of infosec personnel and penetration testers. They need to implement solutions based on what those solutions actually do to protect, and not on the hype. And they need to think of security in terms of the quality of controls over interactions, not the quantity of vulnerabilities discovered.
If it’s money that’s bending what the infosec industry offers now, then it’s going to be money that makes it right. So, executives, demand quality. And while you’re at it, support the OSSTMM, the open research manual for analyzing how security works. There’s a class coming to RVAsec in June 2014 that you will want to send your security and IT people to. It will be worth it!
About the Author: Pete Herzog is the co-founder of ISECOM and, as Managing Director, is directly involved in all ISECOM projects. In 2000, Pete created the OSSTMM for security testing and analysis. He is still the lead developer of the OSSTMM and also leads the organization into new research challenges like Smarter Safer Better, the Bad People Project, and the Home Security Methodology. Pete’s strong interest in the properties of trust and how they affect us and our lives has led to trust metrics and has brought ISECOM more deeply into Human Security. In addition to managing ISECOM, Pete taught the Masters for Security at La Salle University in Barcelona, which accredits the OPST and OPSA training courses, and Business Information Security in the MBA program at ESADE, which is the foundation of the OPSA. Beyond security, Pete is an avid Maker, Hacker, and reader.
Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.