
At this point, the Center for Internet Security’s Security Controls are an industry standard for technical cyber security. The first six basic controls can prevent 85 percent of the most common cyber attacks, and even though the controls were developed with traditional data centers and processes in mind, there is no reason they can’t be adapted to DevOps.

A quick review of the basic controls will offer some ideas of how they fit for DevOps (or not, as the case may be). Control 1 and Control 2 are about inventories: inventory and control of hardware and software assets.

Hardware in a cloud environment becomes virtualized infrastructure – containers and repositories, virtual machines, lambda functions, microservices, APIs, etc. The principle still stands, however, that knowing what underlying infrastructure is authorized in your environment and detecting unauthorized deployments is critical.

In a highly dynamic cloud system with ephemeral servers being deployed and destroyed constantly, it’s even more important to know whether what is coming up should be coming up and whether it is authorized to be in your environment. Consider your current controls and how you monitor your runtime environment. How do you know what is running is authorized to be running and is running authorized apps and services?

If you don’t have an answer for those questions, review your threat model and think about tools at your disposal to get a better view of your “hardware” assets. Those may be native to your cloud environment, third-party solutions and/or an inventory management process you put in place.
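As a hypothetical illustration of that reconciliation, the sketch below compares observed runtime assets against an authorized inventory. The asset names are made up, and the observed set is hard-coded; in practice it would come from a cloud provider’s API or an inventory tool.

```python
# Hypothetical sketch: reconcile observed runtime assets against an
# authorized inventory. The "observed" list would normally come from a
# cloud API (listing instances, containers, functions); it is hard-coded
# here to keep the example self-contained.

AUTHORIZED = {"web-frontend", "api-gateway", "payments-service"}

def find_unauthorized(observed):
    """Return any running assets that are not in the authorized inventory."""
    return sorted(set(observed) - AUTHORIZED)

running = ["web-frontend", "api-gateway", "crypto-miner"]
print(find_unauthorized(running))  # → ['crypto-miner']
```

Anything the function returns is a candidate for an alert: either the inventory is stale, or something is running that shouldn’t be.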

Software inventories also offer an interesting challenge that would traditionally be managed with a CMDB and ITSM processes. For some parts of DevOps, this could still work. It’s a good practice to have a defined set of authorized and standardized tools for the pipeline.

On the other hand, a cloud environment may be changing so rapidly that maintaining a software inventory is impractical. The key is controlling for risk. Knowing what is running in your environment and whether it is intended and authorized also means you know if something shouldn’t be there. That software or service may not be malicious. It may simply introduce additional risk, and removing it would be a good thing.

One possible application of this that I find useful is whitelisting what applications are allowed on containers. A tool like Tripwire for DevOps can scan a container for running applications, and a rule can enforce a quality gate that says only certain applications are allowed. Say it finds SSH running but SSH isn’t necessary for the application. This could cause the quality gate to fail and a developer to remove this application, reducing the attack surface and creating a cleaner build.
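The allowlist gate described above can be sketched in a few lines. This is an illustrative stand-in, not Tripwire’s actual rule engine: the application names are hypothetical, and a real scanner would supply the list of running processes.

```python
# Hypothetical quality-gate sketch: flag a container build when it runs
# applications outside an approved allowlist. App names are illustrative;
# a real scanner would discover them inside the container.

ALLOWED_APPS = {"nginx", "gunicorn", "python3"}

def quality_gate(running_apps):
    """Return (passed, offending_apps); the pipeline fails when passed is False."""
    unexpected = sorted(set(running_apps) - ALLOWED_APPS)
    return (not unexpected, unexpected)

# Finding sshd in a build that doesn't need SSH fails the gate:
passed, offenders = quality_gate(["nginx", "sshd"])
```

A failed gate gives the developer a concrete list of what to remove, which is exactly the attack-surface reduction described above.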

The third control is vulnerability management (VM), and I talked about this measure in a previous blog post with regard to detective and preventive controls. VM should be implemented as a quality gate before deployment, and it is critical at runtime, as well. Knowing what vulnerabilities are in your environment and remediating those vulnerabilities as often as possible keeps your applications and services secure. DevOps is particularly suited to rapid patching and updates, so this control fits in neatly with the paradigm and should be built in from the beginning.
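A pre-deployment VM quality gate often reduces to a severity-threshold policy. The sketch below assumes scan findings arrive as CVE/CVSS pairs and blocks on High or Critical scores; the threshold and the findings themselves are hypothetical and would come from whatever scanner the pipeline uses.

```python
# Hypothetical sketch: a vulnerability-scan quality gate that blocks a
# deployment when any finding meets or exceeds a CVSS severity threshold.
# Findings are illustrative; a real scanner would emit them.

CVSS_THRESHOLD = 7.0  # block on High (7.0+) and Critical (9.0+) findings

def gate_passes(findings):
    """findings: list of (cve_id, cvss_score) tuples from a scan report."""
    blockers = [cve for cve, score in findings if score >= CVSS_THRESHOLD]
    return (not blockers, blockers)

scan = [("CVE-2021-44228", 10.0), ("CVE-2023-0001", 4.3)]
ok, blockers = gate_passes(scan)  # the 10.0 finding blocks the deploy
```

Because DevOps pipelines redeploy frequently, a blocked build can usually be patched and re-run the same day, which is the rapid-remediation loop the control is after.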

The fourth control is limiting and managing administrative privileges. Ideally, administrative privileges are already extremely limited and controlled since there should be little need for elevated access in most cases. DevOps tools, like HashiCorp’s Vault, have some nifty ways to manage access and processes. These include checking out privileged access keys that expire in a short amount of time, thereby limiting risk exposure. This is consistent with the zero-trust model, and limiting what can be done, by whom, and when, limits attack surface in a very tidy way.
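To show the idea behind a time-boxed credential checkout, here is a minimal sketch. It models the lease concept only; it is not Vault’s API, which enforces expiry server-side rather than in the client.

```python
import time

# Hypothetical sketch of a time-boxed credential: a checked-out secret
# carries an expiry, and any use after the TTL is refused. A real secrets
# manager (e.g. HashiCorp Vault) enforces this server-side.

class LeasedCredential:
    def __init__(self, secret, ttl_seconds):
        self.secret = secret
        self.expires_at = time.time() + ttl_seconds

    def read(self):
        """Return the secret, or refuse if the lease has expired."""
        if time.time() >= self.expires_at:
            raise PermissionError("lease expired; check out a new credential")
        return self.secret

cred = LeasedCredential("s3cr3t", ttl_seconds=900)  # 15-minute lease
```

The short TTL is the point: even a leaked key is only useful within its lease window, shrinking the exposure from a compromise.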

Secure configuration is the fifth control, and, once again, this should be something that fits really nicely into the build pipeline. Assessing the configuration of builds prior to production deployment is another quality gate that can be built right into the pipeline. For instance, if a virtual machine or container has a policy that it must comply with CIS benchmarks, assessing that compliance should be a quality gate. Runtime assessment should monitor for configuration drift, and the cloud environment itself should be continually assessed. Something like Tripwire’s Cloud Management Assessor is a good way to monitor the underlying infrastructure.
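Both the pre-deployment check and runtime drift detection boil down to comparing observed settings against a baseline. The sketch below uses made-up setting names as a stand-in for real benchmark items such as the CIS recommendations.

```python
# Hypothetical sketch: compare an image's observed settings against a
# benchmark baseline. Keys and values are illustrative stand-ins, not
# actual CIS benchmark items.

BASELINE = {
    "ssh_root_login": "disabled",
    "password_auth": "disabled",
    "audit_logging": "enabled",
}

def config_drift(observed):
    """Return {setting: (expected, actual)} for every non-compliant item."""
    return {
        key: (expected, observed.get(key, "missing"))
        for key, expected in BASELINE.items()
        if observed.get(key) != expected
    }
```

Run before deployment, a non-empty result fails the quality gate; run periodically at runtime, the same check surfaces configuration drift.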

Finally, log monitoring and analysis is the last of the basic controls. There are a number of cloud-native logging tools for the various cloud platforms, and many of those already have specific security monitoring built in. As with other controls, the pipeline itself shouldn’t be neglected: application, access and other logging should be implemented for build and deployment tools. This not only provides detective controls for things like unauthorized access, but it also provides forensic evidence for incident response or even operational tuning when things aren’t going exactly as planned.
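As a toy example of the detective-control side, the sketch below counts failed login events per source in access logs for a pipeline tool. The log format and threshold are hypothetical; cloud-native logging services offer comparable alerting out of the box.

```python
from collections import Counter

# Hypothetical sketch: flag sources with repeated failed logins against
# build/deployment tools. The log line format is illustrative only.

def suspicious_sources(log_lines, threshold=3):
    """Return source IPs with `threshold` or more LOGIN_FAILED events."""
    failures = Counter(
        line.split()[-1] for line in log_lines if "LOGIN_FAILED" in line
    )
    return sorted(ip for ip, count in failures.items() if count >= threshold)

logs = [
    "2024-01-01T10:00:01 LOGIN_FAILED 10.0.0.5",
    "2024-01-01T10:00:03 LOGIN_FAILED 10.0.0.5",
    "2024-01-01T10:00:05 LOGIN_FAILED 10.0.0.5",
    "2024-01-01T10:00:09 LOGIN_OK 10.0.0.9",
]
print(suspicious_sources(logs))  # → ['10.0.0.5']
```

The same retained logs that drive this kind of alert also serve as forensic evidence during incident response.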

For more information on how Tripwire for DevOps and other Tripwire solutions leverage the CIS Controls to protect organizations, click here.
