Last week, I sat in on a briefing by a guy who calls himself “Four,” who happens to be involved in intrusion detection for Facebook. He shared some interesting perspectives at the Black Hat conference in a discussion of “Intrusion Detection Along the Kill Chain.” The material Four presented is based on the work done by Eric M. Hutchins, Michael J. Cloppert, and Rohan M. Amin, Ph.D., of Lockheed Martin in their paper, “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains” – certainly not new, but I think Four used some good real-world examples to drive his points home (I liked it, at least).
If you’re like me, your first question might be “What do you mean by a ‘kill chain’ anyway?” The paper defines a kill chain as follows:
The phrase “kill chain” describes the structure of the intrusion, and the corresponding model guides analysis to inform actionable security intelligence.
Four described it as a way to group disparate security “events” into a context that centers around the attacker and/or the attack. In other words, rather than trying to look at network security events in isolation, then looking at host security events as a separate population of data, you integrate them by grouping them according to attack vectors.
While the concept sounds kind of familiar, some of the things Four proposes were interesting to me. For example, he asserts that there are no undesirable events in his approach (to be more specific, I think he meant more that you don’t exclude classes of events – you welcome them all, due to their potential to add context). Within the streams of events you collect, there will be some events you ignore because you can’t associate them with a kill chain, but his notion is that you still want those events to come into your filters because some of them will inevitably add value.
I am a bit worried by the unintended consequences of this recommendation – more on that later – but I found it very interesting to think about.
Find issues early
The intrusion kill chain breaks intrusions down into distinct phases, which are defined quite well in the Lockheed Martin paper:
- Reconnaissance – Research, identification and selection of targets, often represented as crawling Internet websites such as conference proceedings and mailing lists for email addresses, social relationships, or information on specific technologies.
- Weaponization – Coupling a remote access trojan with an exploit into a deliverable payload, typically by means of an automated tool (weaponizer). Increasingly, client application data files such as Adobe Portable Document Format (PDF) or Microsoft Office documents serve as the weaponized deliverable.
- Delivery – Transmission of the weapon to the targeted environment. The three most prevalent delivery vectors for weaponized payloads by APT actors, as observed by the Lockheed Martin Computer Incident Response Team (LM-CIRT) for the years 2004-2010, are email attachments, websites, and USB removable media.
- Exploitation – After the weapon is delivered to the victim host, exploitation triggers the intruders’ code. Most often, exploitation targets an application or operating system vulnerability, but it could also more simply exploit the users themselves or leverage an operating system feature that auto-executes code.
- Installation – Installation of a remote access trojan or backdoor on the victim system allows the adversary to maintain persistence inside the environment.
- Command and Control (C2) – Typically, compromised hosts must beacon outbound to an Internet controller server to establish a C2 channel. APT malware especially requires manual interaction rather than conducting activity automatically. Once the C2 channel is established, intruders have “hands on the keyboard” access inside the target environment.
- Actions on Objectives – Only now, after progressing through the first six phases, can intruders take actions to achieve their original objectives. Typically, this objective is data exfiltration, which involves collecting, encrypting and extracting information from the victim environment; violations of data integrity or availability are potential objectives as well. Alternatively, the intruders may only desire access to the initial victim box for use as a hop point to compromise additional systems and move laterally inside the network.
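To make the grouping idea from earlier concrete, here is a minimal Python sketch of collecting disparate security events into per-attacker kill-chain contexts. The event fields and the use of `src_ip` as the grouping key are my own illustrative assumptions – neither Four nor the paper prescribes a specific data model:

```python
# Sketch: group disparate security events into kill-chain contexts,
# keyed by a shared attack attribute (here, a hypothetical src_ip field),
# rather than inspecting each event in isolation.
from collections import defaultdict

KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

def group_by_attack(events):
    """Group events by src_ip, then order each group's events
    by kill-chain phase so the intrusion narrative reads in order."""
    groups = defaultdict(list)
    for event in events:
        groups[event["src_ip"]].append(event)
    for key in groups:
        groups[key].sort(key=lambda e: KILL_CHAIN.index(e["phase"]))
    return dict(groups)

events = [
    {"src_ip": "203.0.113.7", "phase": "delivery", "detail": "phishing email"},
    {"src_ip": "203.0.113.7", "phase": "reconnaissance", "detail": "web crawl"},
    {"src_ip": "198.51.100.2", "phase": "exploitation", "detail": "PDF exploit"},
]

campaigns = group_by_attack(events)
# The two 203.0.113.7 events now form one kill-chain context,
# ordered reconnaissance -> delivery.
```

In a real deployment the grouping key would more likely be a cluster of indicators (infrastructure, tooling, TTPs) than a single IP, but the shape of the analysis is the same.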
The goal is to use the “kill chain” to help you develop capabilities that allow you to identify attacks earlier in the kill chain, rather than waiting for late-stage attacks to become apparent. I like this, as it is very consistent with my views on focusing on early indicators vs. lagging indicators in breach detection. In other words, develop capabilities that help you identify intrusions while they are still in phases 1, 2, or 3 – and the lower the number, the better.
Of course, the challenge is to figure out where and how to look to find the early indicators. From an infrastructure monitoring perspective, he seemed to rely a lot on Snort to find suspicious network events. From a user perspective, Four recommends looking for things like phishing emails, antivirus infections, and things like that. I find these recommendations to be necessary but not sufficient.
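As a sense of what the Snort side of this looks like, here is a sketch of a rule that would flag a suspicious outbound beacon – the C2-phase indicator. The User-Agent string is a made-up indicator and the sid is just a value from the local rule range; a real rule would be driven by actual intelligence:

```
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS ( \
    msg:"Possible C2 beacon - suspicious User-Agent"; \
    flow:to_server,established; \
    content:"User-Agent|3a| EvilBeacon"; http_header; \
    classtype:trojan-activity; sid:1000001; rev:1;)
```

The point of the kill-chain framing is that a hit on a rule like this isn't just "an alert" – it places the host at the C2 phase, which tells you delivery, exploitation, and installation already happened and should be hunted for retroactively.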
My chief concern with Four’s talk is that people may have come away from the session with a false impression that they need to alert on a whole bunch of crazy events to be more effective. I don’t think that is the case and, in fact, this is the problem everyone has had with SIEMs for years – too many “informational but not actionable” events, which eventually cause you to turn the darned thing off.
I would like to see more focus on identifying key “marker events” that can help us focus our alerts and investigations in areas that are more likely to yield results. For example, look for suspicious system state and configuration changes (including suspicious new listening ports, new user accounts, and new services that suddenly appear on systems), as these will almost always be associated with nefarious and/or irresponsible behavior.
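The marker-event idea boils down to diffing system snapshots. Here is a minimal sketch, with the snapshot collection itself (netstat/ss output, account lists, service inventories) left abstract and the example data entirely made up:

```python
# Sketch: flag "marker events" by diffing a baseline system snapshot
# against a current one - new listening ports, new user accounts, and
# new services are strong candidates for alerting and investigation.

def new_markers(baseline, current):
    """Return items present in the current snapshot but absent from
    the baseline, per category."""
    return {
        category: sorted(set(current.get(category, [])) - set(items))
        for category, items in baseline.items()
    }

baseline = {
    "listening_ports": [22, 443],
    "users": ["root", "alice"],
    "services": ["sshd", "nginx"],
}
current = {
    "listening_ports": [22, 443, 4444],            # new listener
    "users": ["root", "alice", "backdoor"],        # new account
    "services": ["sshd", "nginx", "cryptominer"],  # new service
}

alerts = new_markers(baseline, current)
# alerts -> {"listening_ports": [4444], "users": ["backdoor"],
#            "services": ["cryptominer"]}
```

Changes like these map naturally onto the installation phase of the kill chain, which is exactly why they are higher-signal than the “informational but not actionable” noise that drowns most SIEM deployments.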
In any case, I encourage you to read the paper and share any insights you glean from it. Also, I’ve ordered a copy of the Black Hat Briefings recordings and will share any additional insights after I hear the talk again (the room was so crowded I had a hard time taking detailed notes).