Before I became a systems engineer a few years ago, I worked in the industry as a technical security manager for over 15 years, focusing on computer forensics, incident management and compliance.
During that time, I witnessed security vendors move from securing endpoints with a focus on anti-virus software to monitoring and analysing network traffic across corporate networks with intrusion detection systems, which later evolved into intrusion prevention systems.
Then, of course, there’s the more recent move to detonating objects carried in user traffic to detect malware or exploit attempts from drive-by download attacks.
However, in recent years, I have started to see a move back to monitoring and focusing on the endpoints, the targets themselves, and in my opinion for good reason!
To be clear, I’m not suggesting monitoring traffic across corporate networks is a bad thing, nor am I suggesting that we should stop monitoring networks. But, like most things, it has drawbacks. Network monitoring solutions largely rely on signatures to detect anomalies on the network, with the exception of a few security vendors who focus on zero-day attacks.
This is all well and good, but how about the target itself?
Does your average employee remain on the corporate network all the time, and thus always enjoy the protection of network monitoring?
Or do they take their laptop home with them, visit coffee shops, go online at airports, and access networks that are outside your remit to monitor?
Let me put this another way… Imagine there is a road in a residential area that is full of houses of high value. On this road are a number of CCTV and property security cameras, placed there to detect criminal behaviour.
A burglar enters the road. Would you be able to spot them? They’re not likely to be dressed up in a thief costume, carrying a bag marked SWAG on their back – that would draw too much attention and might trigger an alert to their presence.
No, this burglar is going to be smart. They will attempt to act normally, approach the houses with caution and avoid drawing attention to themselves. The burglar knows there will be countermeasures in place and will want to avoid detection at all costs.
Now let’s replay that analogy in the corporate environment. The last thing an attacker wants to do is go in all guns blazing and trigger alerts and have their access blocked. They will want to avoid detection and remain as stealthy as possible.
The attacker will use obfuscated exploits that are hard to detect, or zero-day attacks that bypass signature-based systems. They will take the path of least resistance, which may be as easy as scanning a target in a coffee shop or sending a phishing email.
The goal is to compromise the target, and they will have achieved it without being detected.
In recent years, I believe the focus has been moving back towards protecting the endpoint itself – the computer the attacker wants to gain access to; the computer that contains sensitive customer data; the computer that belongs to an employee with privileged access; the computer that belongs to a senior member of the organisation and may hold confidential company information.
Who is the target, though? It’s not necessarily a person. It could be a fileserver, a database server, an employee device such as a laptop or desktop, a Point-of-Sale terminal, and so on.
And what about the network devices, such as routers, switches and firewalls? With companies moving to virtualisation, how about the VM infrastructure? Could that be a target, as well?
How would we know a system has been compromised if the attacker wants to remain stealthy? Well, one way to detect their presence is to monitor the endpoint and identify what has changed.
Let’s step back to the burglar in the street. They will compromise a house by gaining access via the path of least resistance, perhaps breaking a window. If we were monitoring that house, the baseline would be a property with no compromise; as soon as a window is broken, an alarm is triggered, notifying the authorities.
The focus should be on the critical files that shouldn’t be modified or removed, and on driving a workflow when a change is detected. Traditionally, this process has been known as “FIM,” File Integrity Monitoring, but now we don’t just focus on files.
As mentioned before, the changes could be on our router configurations or unauthorised changes to our firewall rules; it could be unauthorised changes in our directory services, or it could be an unauthorised change in a database schema.
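The core idea behind FIM can be illustrated with a short sketch. This is my own minimal illustration, not Tripwire’s implementation: take a cryptographic baseline of the files you care about, rescan later, and classify the drift. All function names here are hypothetical.

```python
import hashlib
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def baseline(root: Path) -> dict[str, str]:
    """Snapshot every file under root as a path -> digest mapping."""
    return {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}


def compare(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    """Classify drift from the baseline: new, deleted and altered files."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "modified": sorted(p for p in before.keys() & after.keys()
                           if before[p] != after[p]),
    }
```

A real product adds much more (tamper-proof storage of the baseline, scheduled or real-time scans, rule-driven scoping), but any unexpected entry in `added`, `removed` or `modified` is the broken window that should trigger the alarm.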
When a change is detected, the impacted system needs to be brought back online in the shortest possible time, so not only do we need to be alerted to the change, but we also need to be told exactly what has been changed, and by whom.
The ‘whom’ is just as important as the ‘what’. It may not be a malicious employee, but an employee whose system has been compromised and whose account is being used by an attacker.
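Reporting exactly what changed can be as simple as diffing a stored baseline copy of a monitored object against its current state. The ‘whom’ typically comes from correlating the change with audit or login records, which this small standard-library sketch (hypothetical names, firewall rules invented for illustration) does not cover:

```python
import difflib


def describe_change(name: str, before: str, after: str) -> str:
    """Produce a line-by-line diff an analyst can read, showing
    exactly which lines of a monitored object were altered."""
    diff = difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile=f"{name} (baseline)", tofile=f"{name} (current)",
        lineterm="",
    )
    return "\n".join(diff)
```

Fed a baseline and current copy of, say, a firewall ruleset, the output highlights an unauthorised `+permit` line the moment it appears, rather than leaving an analyst to eyeball two configurations.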
When I joined the Tripwire team last year, I was pleased to see we had industry-leading enterprise software that does exactly what I feel is needed on the target.
Tripwire Enterprise will help protect the critical targets. It will help detect, contain, analyse and remediate. But where this enterprise tool differentiates itself from others is in its ability to focus on the critical areas that need to be monitored and to show you a side-by-side comparison of what changed in a file, who changed it, and when.
As a former computer forensics investigator, I know this information would be of great value – not only for investigating an employee, but for understanding how a system was compromised in the first place, and by what method.
As part of the analysis process, Tripwire will leverage threat intelligence sources to help identify malicious objects that have snuck onto the target, and will help you get back to a known good state through remediation: repair the endpoint, revert to a safe configuration and remove unauthorised objects.
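One common way this kind of analysis leverages threat intelligence is to compare the digests of objects found on the endpoint against a feed of known-bad hashes. A minimal sketch, assuming the feed is simply a set of SHA-256 strings; the function name and data shapes are illustrative, not Tripwire’s API:

```python
def flag_known_bad(inventory: dict[str, str], bad_hashes: set[str]) -> list[str]:
    """Return paths of files whose SHA-256 digest appears in a
    threat-intelligence set of known-malicious hashes."""
    return sorted(path for path, digest in inventory.items()
                  if digest.lower() in bad_hashes)
```

The `inventory` mapping would come from the same kind of endpoint scan that feeds integrity monitoring, so a single pass over the system can both detect drift and flag objects that intelligence sources already know to be malicious.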
In summary, I think we should be focusing more on the target. We should be responding to changes detected on systems, network devices, directory services, database content and schemas, as well as virtual environments.
Title image courtesy of ShutterStock