I often write blog posts based on what crosses my inbox during a given week, and recently I saw just enough articles on who security should report to that I picked it as the topic du jour. (Much like Adam, I rarely seem to get to these topics early. This isn't a new debate: I can find articles from 2003 through 2009 covering the same ground, and I heard it discussed at RSA this year as well.) The value in revisiting it is that the themes aren't really about the organizational structure; if you peel the onion, they're about the priority security holds within the business and how to define and enable the desired responses, and there doesn't appear to be a silver-bullet answer.
I would posit that there is a set of underlying questions that all the recent breaches raise when contemplating what it means to be secure today. These questions shape the mission the security team is responsible for, who they must partner with, and ultimately to whom, and how, they report upward.
- What do I need to protect? (PII? Secret corporate recipe? Source code? People? Financial data?)
- How do I make sure security happens? (Tone from the top? HR and Tech?)
- What happens if I fail? (Lawsuits? Regulatory fines? Brand impact?)
- How will I know I’m safer a year from now than I am today?
Those questions just scratch the surface of the system that determines whether security will be successful in any given environment, or how it should be oriented. Thinking about what you need to protect leads to recognizing which regulatory requirements you may be subject to, how your data is structured, and where it is stored. Those things alone can lead to contradictory answers on reporting. Factoring in "how do I make sure it happens" adds plenty of often-unexpected complications, and that's before layering on best practices like separation of duties.
An example of that complication is tied to the theory that, since Information Technology owns the hardware and networks that contain the sensitive data, it should own the security of them. A statement at that level doesn't account for the fact that the missions can be contradictory (if IT is measured on the uptime of Windows machines that often require reboots to install patches, which wins?). In addition, it's an axiom of the security industry that humans are the weakest link, and most organizations would not be comfortable having Information Technology dictate how humans act, or trying to drive culture change across the organization.
When you wrap in the question about failure, the idea of incident response teams comes up, which can push organizations both toward and away from their Information Technology departments. Familiarity with the systems, the networks, and (hopefully) the deltas between "normal" and "abnormal" behavior are selling points for IT. However, many smaller organizations don't have the resources for dedicated business continuity/forensics or legal/communications/PR functions as part of a fully dedicated response team, and if something did go wrong, there is an element of "who watches the watcher" that can push the role away from IT.
Lastly, the question about measuring and managing security can throw a wrench in the works. Like the "who watches the watcher" concern that comes from incident response, there is an organizational tendency to want some kind of audit capacity. With all these factors competing to shape the structure, it makes perfect sense that there are a lot of approaches to it. As with many things, the best way to decide for your company is to define a mission the stakeholders can agree on, then work with them on how best to support that goal inside your organization, with a plan to re-evaluate and potentially adjust depending on what is, or is not, working. What do you think? What organizational structures work really well, or not so well, for you?