In the mid-nineties, I used to have a technology column that ran on the intranet of the bank where I started my career.
The first article introduced the concept of the DMZ and suggested using the third network interface of the servers hosting our half-a-dozen brand-new TIS Gauntlet firewalls. The idea itself was not particularly new at the time; what was fairly new was using firewalls to create internal DMZs that isolated critical network segments and databases.
The vulnerability scanners of the time were simple point-and-shoot tools, often based on freely available software from university file repositories. Usually, we would install the scanner on a Unix workstation or PC and manually scan specific networks, either locally or across the WAN. All in all, it was a simple, rudimentary process.
Fast-forward 17 years. Many large organisations now have thousands of firewalls and hundreds of DMZs, plus very large firewall policies and router ACLs to segment a fast-growing, complex environment.
Moving any data across the organisation’s network often requires traversing multiple security domains and chained DMZs. On top of this, most large organisations have deployed load balancers, WAN optimisation technologies and other acceleration and quality-of-service solutions that tear the data into pieces and reconstruct it many times over.
If you are trying to scan for vulnerabilities on any IP-enabled device across the internal network, this leads to an interesting and challenging question: are the scan packets interrogating the device reaching their destination intact enough to profile it successfully? And if not, does it matter?
The answers, based on what I have seen in the past, are “no” and “yes” respectively. Modern vulnerability scanners tend to do a good job of sending packets across the network in a non-intrusive manner to profile an IP-enabled device.
The scanners use a variety of techniques and protocols to achieve this task. However, even simple quality of service (QoS) techniques can interfere with the packets sent by the scanner, resulting in distorted streams of data packets.
For example, QoS solutions can potentially drop packets or slow them down. This often results in timeouts and misinterpretation of the data flows sent by the scanner. In a similar way, load balancers and WAN optimisation appliances can create nearly random interactions between the scanners and the target devices resulting in additional inaccuracies.
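To see why a dropped or delayed packet leads to misinterpretation, consider the simplest kind of probe a scanner performs: a TCP connect to a target port. The sketch below is illustrative, not any particular scanner's implementation; the point is that a probe which gets no reply at all is ambiguous, and QoS-induced packet loss is indistinguishable from a firewalled port or a dead host.

```python
import socket

def probe_port(host, port, timeout=2.0):
    """Classify a TCP port the way a simple connect scanner would.

    Note the ambiguity of the "no-response" result: a SYN or SYN/ACK
    silently dropped by a QoS policy looks exactly like a filtered
    port or an unreachable host -- the probe just times out.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"            # three-way handshake completed
    except ConnectionRefusedError:
        return "closed"          # an RST came back: host reachable, port closed
    except socket.timeout:
        return "no-response"     # filtered? dropped by QoS? host down? unknowable here
    finally:
        s.close()
```

A scanner that interprets every "no-response" as "filtered" will silently under-report services (and therefore vulnerabilities) whenever QoS drops its probes, which is exactly the inaccuracy described above.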
Very often, when inspecting a new vulnerability scan, it’s not obvious that the vulnerability data is inaccurate.
The issues caused by load balancers, WAN optimisation and IPS technologies described above can be identified and planned for when the organisation designs its vulnerability scanning strategy and the deployment of scanners across the network.
For example, scanners can be whitelisted in the WAN optimisation and network IPS appliances, as well as in the management console of the desktops’ host IPS (HIPS), to give the scanners an unrestricted view of each asset’s actual vulnerability risk.
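As an illustration, many network IPS engines support explicit pass rules that exempt specific sources from inspection. In Snort-style syntax it might look like the following, where the scanner address 192.0.2.10 and the sid are placeholders and the exact syntax varies by product and version:

```
pass ip 192.0.2.10 any -> any any (msg:"Vulnerability scanner - do not inspect"; sid:1000001; rev:1;)
```

The trade-off is deliberate: the scanner sees the asset as an attacker on the local segment would, at the cost of the IPS no longer moderating the scanner's traffic.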
This leaves QoS as one of the remaining hurdles to successfully scan across a complex network as it occasionally causes packet loss.
Troubleshooting QoS packet loss is challenging because the behaviour can appear quasi-random. This is compounded in hosted and outsourced scenarios, where reviewing and tuning the configuration of the QoS policies is not an option for the organisation.
There is no simple solution. However, a potential workaround is to deploy additional scanning appliances on the same local network as the target assets. This requires adequate planning to balance the accuracy of scans with the overall deployment and running costs.
In addition to these initial design challenges, vulnerability management processes take a few years to mature, and adjustments need to be made as network conditions change.
Title image courtesy of Shutterstock