
For as long as I can remember, there has been a driving goal amongst security product consumers to up-level data to some form of management dashboard. Ideally, this dashboard will tell everyone from the top to the bottom of the organisation exactly what they want to know, the way they want to know it.

Across GRC solutions, SQL-based dashboards, SIEMs, Big Data and a variety of other tools, this has so far proven difficult (at best) to achieve. However, I think we’re missing a bit of a trick with regard to how we should be using the data from these individual security solutions. Going upwards is not the only way of gaining value.

If we look at the SIEM market (and Big Data to an extent), the theory is to gather as much information as possible and then derive context around incidents from that sea of data. The unfortunate side effect is that getting to that correlated point is hard.

Correlation and immediacy can make difficult bedfellows as scale increases. The more data obtained, the more processing and complexity is required to get any value from the data itself. Big Data solutions allow for more elastic searching and fun stuff, but it tends to be less immediate than is required by, say, a SOC.

To add to this problem, most businesses prize availability of systems over almost anything else, so putting more load onto an endpoint just for logging runs contrary to what the business requires. Ultimately, there needs to be a balance in how we monitor. We can’t monitor everything all of the time; or rather, we can, but for relatively little extra value.

Let’s take a look at how we currently tend to monitor things. Typically, there are a variety of monitoring solutions on a system, each driven by some form of policy.

The applied settings are inherited through how assets are categorised on a management server somewhere. This could be by ‘operating system’ or ‘installed application’ or ‘network location’ or any number of other applicable terms.

These categories are often pretty static. However, over the lifetime of an asset there is a whole variety of variable attributes that need to be taken into account. These attributes could change at any frequency and this, in turn, could re-categorise the asset as something far more critical to the organisation.

For example, I have customers who have used vulnerability risk, asset value, application ID, CIA ratings and so on to categorise their assets and then track how those categories change over time.
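
To make that concrete, here is a minimal sketch of how changing attributes could drive re-categorisation. The attribute names and thresholds are purely illustrative assumptions, not drawn from any particular product:

```python
# Minimal sketch of dynamic asset categorisation. The attribute names
# (vuln_risk, asset_value, cia_rating) and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    vuln_risk: int    # aggregated vulnerability score for the asset
    asset_value: int  # business value rating
    cia_rating: int   # combined confidentiality/integrity/availability rating

def categorise(asset: Asset) -> str:
    """Re-categorise an asset as its attributes change over time."""
    if asset.vuln_risk >= 8 and asset.asset_value >= 7:
        return "critical"
    if asset.vuln_risk >= 5 or asset.cia_rating >= 7:
        return "high"
    return "standard"

# As scan results or business context change, the same asset can move
# between categories, which in turn should drive the monitoring policy.
print(categorise(Asset("db-01", vuln_risk=9, asset_value=8, cia_rating=9)))  # critical
```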

So, what we need to do is change how we monitor systems or, at least, change the way we think about monitoring systems. All of the individual monitoring systems hold data that the others may find useful.

To take another example, if I see a critical vulnerability on a critical server, I may want to immediately ramp up all of my other controls on that system: whitelisting, file integrity monitoring, audit settings, associated firewall log collection, automatically created correlation rules within my SIEM, or any number of other extra bits of fun that you typically wouldn’t overlay onto that system.
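
As a rough illustration (and nothing more), the logic might look something like the sketch below. The control names and the enable_control() helper are hypothetical placeholders for whichever product APIs you actually have in place:

```python
# Hedged sketch of "ramping up" controls when a critical vulnerability
# lands on a critical asset. Control names and enable_control() are
# hypothetical; in practice each call would hit the API of the relevant
# product (whitelisting, FIM, SIEM, firewall log collection).

EXTRA_CONTROLS = [
    "application_whitelisting",
    "file_integrity_monitoring",
    "verbose_audit_policy",
    "firewall_log_collection",
    "siem_correlation_rules",
]

def enable_control(hostname: str, control: str) -> None:
    # Placeholder: call the relevant product's API here.
    print(f"Enabling {control} on {hostname}")

def on_vulnerability_detected(hostname: str, severity: str, asset_category: str) -> None:
    """React to a new vulnerability finding by tightening monitoring."""
    if severity == "critical" and asset_category == "critical":
        for control in EXTRA_CONTROLS:
            enable_control(hostname, control)

on_vulnerability_detected("db-01", severity="critical", asset_category="critical")
```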

Perhaps I may want to dynamically change the priority of events based on similar attributes: if an asset rates as critical, then anything from my HIDS should be treated as high priority within my SIEM. This isn’t anything new, of course. Organisations I work with are already doing this in a number of ways.
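
Sketched out, that kind of rule is tiny. The event fields and priority labels below are made up, but the idea is the same: anything from a critical asset gets bumped before it reaches the SIEM.

```python
# Minimal sketch of re-prioritising HIDS events based on asset criticality.
# The event fields, priority labels and asset map are hypothetical.

ASSET_CATEGORY = {"db-01": "critical", "kiosk-17": "standard"}

def prioritise(event: dict) -> dict:
    category = ASSET_CATEGORY.get(event["host"], "standard")
    if event["source"] == "hids" and category == "critical":
        event["priority"] = "high"
    return event

print(prioritise({"host": "db-01", "source": "hids", "priority": "low"}))
# {'host': 'db-01', 'source': 'hids', 'priority': 'high'}
```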

For example, monitoring the subnets assigned to their PCI scope for new devices and then automatically layering File Integrity Monitoring onto those systems, delivering compliance with section 11.5 of PCI DSS. Or defining differing levels of compliance with a policy based on the level of vulnerability risk: if a system is highly vulnerable, then I expect its configuration to be watertight, no excuses.
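
For illustration only, both of those automations boil down to something like the following sketch. The subnet, helper functions and thresholds are assumptions, not a real implementation of any product’s integration:

```python
# Sketch of the two automations described above, with hypothetical helpers:
# (1) any new device seen on a PCI-scoped subnet gets FIM applied, and
# (2) the acceptable configuration-compliance threshold tightens as
#     vulnerability risk rises.

import ipaddress

PCI_SUBNETS = [ipaddress.ip_network("10.20.0.0/24")]  # example scope only

def on_new_device(ip: str) -> None:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in PCI_SUBNETS):
        print(f"Applying file integrity monitoring to {ip} (PCI DSS 11.5)")

def required_compliance(vuln_risk: int) -> float:
    # Highly vulnerable systems get no slack on configuration policy.
    return 1.0 if vuln_risk >= 8 else 0.9

on_new_device("10.20.0.42")
print(required_compliance(9))  # 1.0: watertight, no excuses
```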

Allowing products to talk to each other and have a direct impact on how those products perform provides a far more flexible way of achieving value with the constrained resources that security teams have available.

Finding out that an event took place six months after the exfiltration of data, because you’ve turned up monitoring everywhere and been flooded, is not a viable tactic (or, indeed, a strategy).

Opening up products and teams is notoriously difficult but, I believe, worth the effort to greatly improve the efficiency of the solutions in place and really drive that terrible word we all hate: ‘synergy’.

