Why Roman Emperors are security relevant (CAESARS FE and InfoSec)
Caesar Augustus is the Roman Emperor whose legacy is what most people remember when they think of a Caesar, mostly because as children we learned that he ushered in the Pax Romana, greatly expanded the Roman Empire, and created a number of standards that improved the quality of life for the average Roman. He was so relevant to what we think of as the modern world (and so aware of his impact) that the month of August is named after him. (Fun facts you might not remember about this particular Caesar, beyond the Caesar Cipher: he created Rome’s first institutionalized police force and firefighting force, used his own wealth to fund road maintenance in Italy, and installed a courier system of relay stations.)
The specific thing about Caesar Augustus that bridges Rome and security is how his use of standards allowed a continuous improvement cycle to start. Specifically, Augustus standardized money, language, and weights and measures. With little effort, you can see how standardizing those particular items enabled a lot of good things to happen, and created a framework that others built on for centuries, often in unanticipated ways. Two domains directly and obviously impacted were science and engineering, where the ability to measure items and their interactions reliably is rather significant, and ultimately led us up the path to the evolution of computers. Sometimes it’s also not about what you can do, it’s about what you avoid – think of the Mars Climate Orbiter, lost because two teams worked in different units of measure.
In the security industry, there is a framework with a very fortunate name for my mental association: the CAESARS Framework Extension, which amazingly is still an acronym – the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring architecture Framework Extension. Its goal is to create an enterprise continuous monitoring technical reference model. OK, so what does that mean? In simplified terms, the goal is to create a standardized way (language, weights and measures) to view cyber security and IT through the lens of continuous monitoring. One of my favorite technical words is at the heart of this world view – interoperability, by which they mean that any provider of a security sensor or control would use the same language, weights and measures to describe the individual units. Here, it might help to think of a common description of the relevant characteristics of each unique computer asset in your environment.
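To make that "common description of an asset" idea concrete, here is a minimal sketch in Python. The field names and risk scale are my own illustration, not taken from the CAESARS FE specification itself – the point is only that when every sensor vendor emits the same shape of record, their outputs become directly comparable.

```python
from dataclasses import dataclass, asdict

@dataclass
class AssetRecord:
    """An illustrative common description of one computer asset.
    Field names here are hypothetical, not from CAESARS FE."""
    asset_id: str       # unique identifier within the enterprise
    hostname: str
    os_version: str
    last_scanned: str   # ISO 8601 timestamp of the most recent sensor reading
    risk_score: float   # normalized 0.0-10.0, same scale for every vendor

# Records from two different "vendors" share one shape, so their
# findings can be aggregated and ranked without any translation layer.
vendor_a = AssetRecord("a-001", "web01", "Ubuntu 22.04", "2024-01-15T08:00:00Z", 3.2)
vendor_b = AssetRecord("a-002", "db01", "RHEL 9.3", "2024-01-15T08:05:00Z", 7.8)

highest_risk = max([vendor_a, vendor_b], key=lambda a: a.risk_score)
print(highest_risk.asset_id)  # the asset to look at first
```

Interoperability in this sense is exactly the boring part – agreeing on the record shape – but it is what makes the comparison on the last line possible at all.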
Why do we care about continuous monitoring? Because it could create the same evolution in computing that common weights and measures did for other scientific and engineering endeavors. In my opinion, if the industry really embraces it, it has the potential to completely transform the way we talk about, measure, and provide security in a risk-modeled way. On the surface, it’s exactly what the name implies – the ability to look in the right places, all the time, with the goal of providing situational awareness. In more detailed words (from NIST IR 7756): continuous security monitoring is a risk management approach to cybersecurity that maintains an accurate picture of an organization’s security risk posture, provides visibility into assets, and leverages automated data feeds to measure security, ensure the effectiveness of security controls, and enable prioritization of remedies. That’s a fantastic prescription to help us improve the overall health of cyber security and eventually transition from continuous monitoring to continuous management.
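The NIST definition above breaks into three mechanical steps – pull automated data feeds, measure control effectiveness, prioritize remedies – and a toy version of one cycle can be sketched in a few lines. The feed contents and the scoring rule below are invented for illustration; a real implementation would consume actual sensor feeds.

```python
# One toy cycle of continuous monitoring: ingest findings from
# automated feeds, score them, and emit a prioritized remedy list.
# Assets, controls, and severities here are made-up sample data.
findings = [
    {"asset": "web01", "control": "patching",  "effective": False, "severity": 9.1},
    {"asset": "web01", "control": "antivirus", "effective": True,  "severity": 5.0},
    {"asset": "db01",  "control": "patching",  "effective": False, "severity": 6.4},
]

def risk(finding):
    # Only failing controls contribute risk; weight by severity.
    return finding["severity"] if not finding["effective"] else 0.0

# The "enable prioritization of remedies" step: worst risk first.
priorities = sorted(findings, key=risk, reverse=True)
for f in priorities:
    if risk(f) > 0:
        print(f["asset"], f["control"], risk(f))
```

The loop itself is trivial; the hard (and valuable) part is the standardization that lets every vendor's feed land in that one `findings` list in the first place.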
There’s also an underlying importance to adopting these standards: they allow us to design for more repeatable results and evolve from them. Just as standardized money, taxes, and weights and measures had obvious implications, so does the framework. It means the ability to verify that your controls are effective using more than one measurement, in a way that works for all controls and sensors regardless of vendor. Fundamentally, it allows for the creation of new kinds of quantifiable security measurement. Some measurements may be obvious – “do I really have 99.99% uptime on my security controls?” Some may be less obvious – “are my historical controls continuing to share the right data on the right timelines to give me the best visibility into my current situation?” In addition, much as stable financial standards did, I imagine that standards driving security measurement will allow for whole new ways of seeing and improving the status quo. What do you think the most interesting evolution of this work will be?
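Even the "obvious" uptime question becomes answerable, rather than aspirational, once a control reports on a standard schedule. A quick sketch, with made-up heartbeat counts (assuming a one-per-minute heartbeat over a 365-day year):

```python
# Answering "do I really have 99.99% uptime on my security controls?"
# from heartbeat data. The counts below are invented for illustration.
expected_heartbeats = 525_600   # one per minute, 365 days
received_heartbeats = 525_480   # what the control actually reported

uptime = received_heartbeats / expected_heartbeats
print(f"measured uptime: {uptime:.4%}")

meets_sla = uptime >= 0.9999
print("meets 99.99% target:", meets_sla)
```

Two hours of missed heartbeats in a year already breaks a four-nines claim – which is exactly the kind of thing you only discover when the measurement is standardized and continuous.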