As some of our readers know, Tripwire has been co-leading a concerted effort to migrate Security Automation specifications into the Internet Engineering Task Force (IETF). In this post, I’ll tie that effort to what your business needs – Operational Risk Management (ORM). To put this another way, it’s not about the standards, but about what the standards enable. So, we’ll start at the top with ORM and work our way down to where the rubber meets the road.
The first thing I want to do here is set the stage by defining the problem domain – this is what ties our standardization efforts directly to the needs of your organization. (By the way, most of this post draws from the Use Case presentation I gave last week during the Security Automation and Continuous Monitoring Birds of a Feather meeting at IETF 85 in Atlanta.) At the heart of the argument is this:
All organizations have at least one common goal – to minimize loss.
The concept of Operational Risk can provide focus, and the Basel committee defines operational risk quite well: “The risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems, or from external events.”
Then, to manage operational risk (to “do” ORM), you seek to minimize operational loss.
Look at it this way. No one wants to lose something, whether it be a tangible thing, access to a service, a limb, a loved one, or, in the case of most organizational entities, money. Organizations manage inflows and outflows of money to operate at some revenue level supporting their mission over time (the usual goal is in perpetuity). These inflows and outflows are managed with policies, processes, and procedures designed to minimize loss (this is the basis for my assertion above).
Of course, ORM is a large umbrella with many subcomponents. Some organizations manage risk to their supply chain, most manage financial risk, and some manage very specific risks to their organization (fuel price, for example). Organizations leveraging information technology will likely manage risk to information (Information Risk Management, or IRM). IRM is really about minimizing loss as that loss relates to managing information and information technology. Most often, organizations will use one (sometimes several) of many Control Frameworks, which specify Controls to put in place to minimize risk to information and information technology. Figure 1 depicts the relationship between ORM, IRM, Control Frameworks, and Controls.
Riding along each level of our risk management stack are policies, processes, and procedures. In effect, these comprise the systems emplaced by an organization to minimize loss, and this is where vendors (such as Tripwire) and standards organizations (such as the IETF) find many of their requirements. Figure 2 emphasizes the importance of policy, process, and procedure.
Control Frameworks and Controls are important as they’re getting down to where the rubber meets the road. In fact, our primary area of concern on a day-to-day basis is ensuring that the Controls demanded by Control Frameworks are being met, so our focus is as highlighted in Figure 3.
Note that our focus does cover Control Frameworks, though perhaps not completely. Most Control Frameworks operate on a feedback loop in the spirit of Observe, Orient, Decide, Act (OODA), which many will recognize in its close cousin, Plan, Do, Check, Act (PDCA). This provides us another way to define our actual problem domain (to differentiate it from the larger, all-encompassing domain of ORM). One way to depict this loop is as having four steps: Plan and Organize, Deliver System Security, Monitor and Evaluate, and Improve and Adapt (this, by the way, is based on a survey of several popular Control Frameworks, as shown in Figure 4).
The proposed SACM working group will (I am an optimist in this case) focus primarily on the area highlighted above, which represents continuous monitoring – automated, continuous checking and acting. The more continuously your organization checks and acts, the more effectively it will manage operational risk, and to provide better continuous checking and acting, standards are truly needed – there are simply too many tools required to manage all the Controls.
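To make “continuous checking and acting” concrete, here is a minimal illustrative sketch of such a loop. This is not any actual SACM interface or Tripwire product behavior; the policy items, collector, and remediation hook are all hypothetical stand-ins:

```python
# Hypothetical policy: the expected configuration posture for one control.
POLICY = {"ssh_root_login": "disabled", "password_min_length": 12}

def collect_posture():
    """Stand-in for an endpoint collector; a real tool would query the host."""
    return {"ssh_root_login": "enabled", "password_min_length": 12}

def check(posture, policy):
    """Compare observed posture against policy; return the failing items."""
    return {k: v for k, v in policy.items() if posture.get(k) != v}

def act(failures):
    """Stand-in for the 'act' step: alert on (or remediate) each failure."""
    for item, expected in failures.items():
        print(f"ALERT: {item} does not match expected value {expected!r}")

def monitor_once():
    """One pass of the check-and-act cycle."""
    failures = check(collect_posture(), POLICY)
    if failures:
        act(failures)
    return failures

# A real monitor would repeat this on an interval (or on change events)
# rather than running a single pass.
```

The point is not the code itself but the shape of the loop: the more often (and the more automatically) `check` and `act` run, the shorter the window between drift and correction.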
If you have the time, take a look at the 20 Critical Security Controls to get some idea of exactly what an organization will ultimately want to continuously monitor – it represents a lot to do. No single product, service, or vendor is going to help your organization maximize its ORM effectiveness. We can conclude that the only way to help organizations – our customers – is to look for better ways of working with the other tools in the ecosystem. It’s a fairly straight line to realize that the best way to make this happen – from a customer’s perspective – is to be as plug-and-play as possible within that ecosystem. To plug and play, solutions need to communicate with each other out of the box, or with minimal configuration.
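Why does a standardized data format make tools plug-and-play? Because a producer and a consumer that have never seen each other can still exchange results. The sketch below uses an invented JSON message shape – it is not an actual SACM or SCAP schema, just an illustration of the principle:

```python
import json

def publish_result(control_id, passed, evidence):
    """Tool A: emit an assessment result in a shared, agreed-upon format.

    The field names here are hypothetical; the value of a standard is that
    every tool in the ecosystem agrees on them in advance.
    """
    return json.dumps({
        "control": control_id,
        "result": "pass" if passed else "fail",
        "evidence": evidence,
    })

def consume_result(message):
    """Tool B: any consumer that understands the format can act on it,
    with no tool-specific integration work."""
    record = json.loads(message)
    return record["control"], record["result"] == "pass"

# Tool A and Tool B share only the message format, not any code.
msg = publish_result("CSC-3.1", False, "root login enabled over SSH")
control, ok = consume_result(msg)
```

Swap either side out for a different vendor’s tool and, as long as the format is honored, the exchange still works – that is the “out of the box, or with minimal configuration” property.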
This means standardization is required, and it is this standardization with which SACM is concerned. And the IETF community apparently agrees with the idea, because we are moving forward and should have a Working Group formed very soon. We are presently working on (essentially at the request of the IETF community) explicit requirements, a notional architecture, use cases, and charter. I’m confident that this WG (again, I’m optimistic that the WG will be formed), in a coordinated effort with other IETF WGs (and even other Standards Development Organizations), will start making real progress on real problems real soon (for real).
Now, I’m not saying that standards are the silver bullet. I’m not saying that ORM, IRM, or Control Frameworks and Controls are the silver bullet. No. I’m saying that a cohesive operational risk management ecosystem – one filled with dynamically collaborative products using standardized interfaces and data formats – is the silver bullet. You might say we believe we know where the silver bullet is, and we see a path to get there. It’s up to us to walk the path.