I am all about the baselines. I’ve made an entire career out of them. But if you were to ask a random person on the street what that means, the reaction would be: “Who the heck are you, and why are you asking me weird questions?” So you would do better to ask someone in the tech industry, at least.

The Early Days of the Baseline Question
In the old days (all of twenty-some-odd years ago), the question about baselines centered on performance at scale. IT professionals talked about clusters of computers (physical ones, even) that needed to be identical in order for an application to operate correctly. Or, in a failover or disaster recovery scenario, they noted how servers or applications needed to be identical to ensure that the business would continue to exist. The question even extended to the hardware layer: hard drives, network cards, and power supplies all needed to be identical in order to be redundant.
The Introduction of Software
Then came the idea that not only did the hardware need to be identical, but so did the software. That’s where products like Tripwire came in. How could you tell whether the files or other objects on an asset were the same or different, not only on a single server but across several? You baseline them: you essentially take a snapshot of the objects. With that baseline in place, you can tell not only that something has changed on the asset but also, because you have the original “image” to compare against, exactly how it changed.
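The idea can be reduced to a minimal sketch: record a cryptographic hash of each object, then diff a later snapshot against that record. This is only an illustration of the concept, not Tripwire’s actual implementation (function names are mine).

```python
import hashlib
from pathlib import Path

def take_baseline(paths):
    """Snapshot each file as a SHA-256 hash, keyed by path."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def compare(baseline, current):
    """Report objects that changed, appeared, or vanished since the baseline."""
    changed = [p for p in baseline if p in current and current[p] != baseline[p]]
    added   = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    return changed, added, removed
```

Run `take_baseline` once on a known-good system, store the result somewhere tamper-proof, and every later `compare` tells you precisely which objects drifted.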

The power of this functionality extends beyond the original asset. How does the software on one server compare to the software on all of the servers around it? Server admins now had a way to measurably test whether the servers in their cluster were identical in all of the important ways. Disaster recovery specialists could now ensure that the assets at their disaster recovery site were identical to the servers at their primary production site.

My old boss Gene Kim had a saying: servers should be like fuses. When one trips, you should be able to just replace it. We made our bones on this concept: telling when a server or an application had “tripped” so that admins could quickly move to mitigate the issue. Years ago, you might have heard terms like “Golden Image” or “Standard Image” bandied about as admins kept a library of known-good operating system images ready to deploy as necessary. Tripwire’s baselining capabilities were the solution that helped them ensure the integrity of those images.

Even so, Kim also posed an interesting thought experiment. What would you rather have: 1,000 secure servers all configured differently, or 1,000 mostly secure servers all configured the same? The gut reaction, of course, is to pick the secure servers, but think about the administrative cost of managing 1,000 differently configured servers. That takes us to the evolution of baselines, and really of the computing industry as a whole.
Tripwire and the Baseline’s Evolution
For years, Tripwire’s sole focus was taking that baseline of your assets. We made no judgment as to whether the baseline was good, and when changes were detected, we made no distinction as to whether they were good either. Then we introduced Security Configuration Management (SCM). Built on security benchmarks such as those from the Center for Internet Security (CIS), or PCI for credit card processors, SCM was predicated on the idea that Tripwire could now tell you whether your system was configured securely according to one of these standards. Now we could help you decide whether your baseline was a good one.

We essentially turned the concept of baselines on its head. It didn’t matter if a specific DLL or a specific version of an application was the same on all your servers; it only mattered if they were configured the same. “Is telnet disabled across all of your servers?” “Do you have logging enabled?” “Is your business-critical application configured correctly across all of your core assets?” Those were the questions that SCM allowed Tripwire to answer.

Now, when customers talk to me about baselines, they are still thinking of our traditional way of taking snapshots of files and other objects. I often see it on RFPs and other requirements docs. What I like to introduce to them is the idea that, yes, baselines and hashes are still valuable, but the real power comes from SCM. A baseline doesn’t have to be just a list of known-good hashes on a server. A baseline can also be a standard list of configurations that your organization has decided to measure your assets against. It could be CIS, PCI, SOX, or ISO 27001. (You name it, we probably have it.) Or it could be one of your own invention. The point is that it’s far easier to manage to a standard than it is to manage the many permutations of operating system, application, and hardware combinations out there.
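A configuration baseline of this kind can be sketched as a set of named controls, each with a pass condition, evaluated against a host’s settings. The control names and thresholds below are invented for illustration (loosely in the spirit of a CIS-style benchmark), not taken from any real standard.

```python
# Hypothetical policy: each control maps to a predicate its value must satisfy.
POLICY = {
    "telnet_enabled":      lambda v: v is False,                      # telnet must be off
    "logging_enabled":     lambda v: v is True,                       # auditing must be on
    "password_min_length": lambda v: isinstance(v, int) and v >= 12,  # invented threshold
}

def failing_controls(host_config):
    """Return the controls a single host fails against the policy baseline."""
    return [name for name, passes in POLICY.items()
            if not passes(host_config.get(name))]
```

The same function run across a whole fleet answers the SCM question directly: not “do these servers have identical files?” but “do they all meet the standard?”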
The Impact of DevOps
With cloud computing on the rise and containerization right behind it, “DevOps” is now the industry catchphrase. For years after the dot-com crash of the early 2000s, there was a concerted effort to keep developers out of production; “Separation of duties!” was the rallying cry. With the evolution of baselines toward secure configurations, we are seeing the pendulum swing back toward letting developers push changes into production at a faster pace. As long as a change is secure and correctly configured, it largely doesn’t matter what it looks like at the file level. Gene’s dream of servers being like fuses has largely come true with containers: if an application has gone bad, you can quickly replace it with another container. Developers can even test their containers as part of their workflow, with tools such as Tripwire Enterprise, to make sure a minimum threshold of security and operability is met before an image is allowed into production. So now, when you ask your random question about baselines, you will hopefully get more answers about configuration baselines.
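That pre-production gate can be sketched as a simple rule: an image ships only if every required check passed. The check names here are invented for illustration; in a real pipeline, the results dictionary would be fed in by a scanner such as Tripwire Enterprise.

```python
# Hypothetical required checks for a container image (names invented).
REQUIRED_CHECKS = ("no_root_user", "ports_whitelisted", "logging_enabled")

def release_blockers(scan_results):
    """Return the required checks an image failed; an empty list means ship it."""
    return [check for check in REQUIRED_CHECKS
            if not scan_results.get(check, False)]
```

A CI job would call `release_blockers` on the scanner’s output and fail the build if the list is non-empty, keeping bad containers out of production automatically.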