
2013 was the year of the Snowden leaks. This year, the battle to dominate the headlines is being closely fought by the Target breach, the Heartbleed bug and eBay’s compromise followed by a massive password reset – and we can rest assured there will be more headlines hitting the wires in the second half of the year.

While Snowden’s case was about a security-cleared individual leaking confidential information, some of the 2014 incidents were about defects in software code being exploited to gain unauthorised access to IT assets and information.

In other words, the attackers exploited IT security vulnerabilities in software, possibly well-known ones easily detected by network vulnerability scanners, to take control of the assets and/or inject malware to help them do so.

After many years of working with vulnerability management solutions, I still wonder whether vulnerability management remains relevant, given how often old vulnerabilities continue to be exploited to gain unauthorised access. Is it the technology or the vulnerability management processes that are lacking? I would venture a bit of both.

First, vulnerability management (VM) should be part of an overall risk management process; otherwise it risks becoming irrelevant to the organisation. Scanning for vulnerabilities produces a report that, on its own, has no relevance to the overall organisation.

IT security risk should be assessed first, using a framework that is relevant to the organisation’s sector and that can be implemented and managed with the available skills. Frameworks that leverage VM include, for instance, the Council on Cybersecurity’s 20 Critical Security Controls and PCI DSS.

Second, all assets are created equal, but some quickly become more relevant than others. Before the security scanners hit the network, it is critical for organisations to define not only the scope of the scanning process, but also the value to the organisation of every asset in scope. Prioritising key assets according to organisational needs is a good first step to make a VM program relevant.

Third, I would like to add a new sin to the list of capital ones: knowing about a specific vulnerability in your IT infrastructure or organisation and doing nothing about it. This happens very often, in every sector. Megabytes of PDFs with vulnerability data are generated and often little is done with this information. There are many reasons for this process failure.

First, it is still very difficult to integrate IT Security processes with the ones managed by IT Infrastructure; for example, the team responsible for VM often has limited access to the remediation teams that own the patch management tool.

A typical solution is to link both parties using a ticketing system and a formal change control process. Building such a process takes time, but it is an elegant way to integrate VM, patch management and change control.

Finally, the list could go on and on, and blog posts have space constraints. However, over the years I have seen that both carrots and sticks are very effective in developing a successful VM program. One of the best carrots is linking MBOs or performance-based bonuses to reductions in vulnerability risk.

It works very well, although organisations tend to delay most of the remediation work until the MBOs are measured, e.g. at a quarterly audit. My favourite stick is to openly distribute a vulnerability risk leader board covering every organisational unit.
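A sketch of that leader board (unit names and aggregate risk scores are made up for illustration) is nothing more than a sort over per-unit totals:

```python
# Hypothetical sketch: a vulnerability risk "leader board" per
# organisational unit. Scores would come from aggregating the risk of
# open findings per unit; the values here are illustrative only.

unit_risk = {
    "Finance": 420.0,
    "Engineering": 185.5,
    "Marketing": 610.25,
}

# Lowest risk ranks first; whoever sits at the bottom feels the stick.
leader_board = sorted(unit_risk.items(), key=lambda kv: kv[1])
for rank, (unit, score) in enumerate(leader_board, start=1):
    print(f"{rank}. {unit}: {score}")
```

Publishing this openly, rather than burying it in a PDF, is what turns the scan data into a behavioural incentive.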

Who thinks VM is irrelevant when they are the one at the bottom of the pile?

