
Just a few years ago, most IT environments consisted of physical servers on which personnel installed applications, often as many as a single system could handle. Those systems then ran that way for years while the IT team maintained the OS and updated the applications as needed. Occasionally there were test versions of those systems, but even then the test OS often didn't match the production version. The environment was static rather than dynamic, and changes happened only when updates were released. Many IT departments also subscribed to the "if it ain't broke, don't fix it" school of thought, so updates were often ignored anyway.

Things started to evolve with the dual advent of virtualization and worms/viruses/hacking: systems became virtual machine images that needed to be updated more often. Companies began deploying images, generally with one application per image, and running many system images on a single piece of hardware. The number of systems proliferated even when the number of applications stayed the same. These images were still updated and maintained in place, and changes came more often to keep up with security patches, but even taken together, this still did not amount to what most would consider a truly dynamic environment.

Over the past five or so years, companies have again begun shifting how IT resources are deployed and managed. Several new methods of application deployment have emerged, including the following:

  • Automated image creation and deployment
  • Immutable image deployment
  • Containers

Each of these has an impact on the security and management of assets. (Read up separately on CI/CD pipelines to see how these images are created.) We'll discuss each of these methods in this first post, then look at how to manage assets in a dynamic environment in the second.


Automated Image Creation

Automated image creation normally involves a deployment tool that instruments, deploys and maintains a system image. The configuration of the system is set, the applications needed by the image are installed and configured, and then the image is deployed. In this case, the Tripwire Axon agent is usually installed and configured by the deployment tool. When the image starts, the Axon agents report to their console (whether that's Tripwire Enterprise (TE), IP360 or Tripwire Log Center (TLC)) and begin their checks. Any vulnerabilities or deviations from the expected configuration are detected and reported.
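
As a rough illustration, here is a minimal sketch of such a build pipeline in Python. The step names and the `ImageBuild` class are hypothetical stand-ins for whatever your deployment tool (Packer, Ansible and the like) actually does; the agent-install step is a placeholder, not the real installer.

```python
# Hypothetical sketch of an automated image build; step names are
# illustrative placeholders, not real deployment-tool or Tripwire APIs.
from dataclasses import dataclass, field


@dataclass
class ImageBuild:
    base_os: str
    applications: list[str]
    steps: list[str] = field(default_factory=list)

    def run(self, step: str) -> None:
        # A real tool would execute a provisioner here (script, Ansible role, etc.).
        self.steps.append(step)
        print(f"[build] {step}")


def build_image(build: ImageBuild) -> ImageBuild:
    build.run(f"provision base OS: {build.base_os}")
    build.run("apply hardened configuration baseline")
    for app in build.applications:
        build.run(f"install and configure {app}")
    # Bake the monitoring agent in last so it reports to its console
    # as soon as the deployed image boots.
    build.run("install and configure the Tripwire Axon agent")
    return build


if __name__ == "__main__":
    build_image(ImageBuild(base_os="ubuntu-22.04", applications=["nginx"]))
```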

The IT/deployment team continues to update the running images as it would any datacenter system. OS and application updates are rolled out by the deployment tool, and any change detected by TE can trigger a new vulnerability scan by IP360. This way, the images start out secure and compliant and remain so over time. Vulnerability scans happen both on a schedule (when a new vulnerability database is released) and when a change is detected (did the change introduce a known vulnerability?).
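
The two triggers can be pictured as a pair of event handlers, as in this hedged sketch. `request_vuln_scan` and the handler names are invented for illustration; they are not IP360 or TE API calls.

```python
# Sketch of the two scan triggers: scheduled (new vulnerability DB) and
# change-driven. All function names here are hypothetical placeholders.
def request_vuln_scan(host: str, reason: str) -> None:
    print(f"[scan] {host}: {reason}")


def on_vuln_db_release(hosts: list[str]) -> None:
    # Scheduled trigger: rescan everything against the new vulnerability data.
    for host in hosts:
        request_vuln_scan(host, "new vulnerability DB released")


def on_change_detected(host: str, change: str) -> None:
    # Change-driven trigger: did the change introduce a known vulnerability?
    request_vuln_scan(host, f"change detected ({change})")


on_vuln_db_release(["web-01", "web-02"])
on_change_detected("web-01", "nginx.conf modified")
```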

Immutable Image Deployment

Like automated image creation, immutable image deployment normally involves a deployment tool that instruments and deploys a system image. In this case, however, no updates are ever made to the image once it is deployed. When an update is required, a new image is created, the software is installed and configured in that image, the running image is destroyed and the newly created, updated image is deployed.
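
The replace-rather-than-patch cycle might look like the following sketch, where the frozen dataclass mirrors the rule that a deployed image never changes. The names are hypothetical; a real deployment would use an image builder and an orchestrator.

```python
# Hypothetical immutable-update cycle: build a replacement image, deploy it,
# then destroy the old instance. Nothing is ever patched in place.
from dataclasses import dataclass
import itertools

_version = itertools.count(1)


@dataclass(frozen=True)  # frozen mirrors "no changes once deployed"
class Image:
    name: str
    version: int


def build_image(name: str) -> Image:
    # A real pipeline would install and configure the updated software here.
    image = Image(name=name, version=next(_version))
    print(f"[build] {image.name} v{image.version}")
    return image


def rollout(running: Image | None, name: str) -> Image:
    new = build_image(name)
    print(f"[deploy] {new.name} v{new.version}")
    if running is not None:
        print(f"[destroy] {running.name} v{running.version}")
    return new


instance = rollout(None, "api-server")      # initial deployment
instance = rollout(instance, "api-server")  # an "update" replaces the image
```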

No changes are ever allowed to the running system, so detecting an anomaly becomes much easier. Any change effectively becomes an incident.

Since the image is not supposed to change while it's running, you can scan it before putting it into the image store, ensuring that it's configured correctly (SCM checks) and doesn't have any serious vulnerabilities (vulnerability checks). Tripwire for DevOps can perform this scan before the image is put into use: it lets you send an AMI into a SaaS module, where the image is started and scanned for vulnerabilities.
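
Conceptually, this is a gate in front of the image store, something like the sketch below. `scan_image` and `publish_to_store` are invented placeholders; Tripwire for DevOps exposes its own interface, which this does not attempt to reproduce.

```python
# Hypothetical pre-publication gate: an image enters the image store only if
# its SCM and vulnerability checks pass. These are placeholder functions.
def scan_image(image_id: str) -> dict[str, int]:
    # Placeholder: would start the image and run SCM + vulnerability checks.
    return {"scm_failures": 0, "critical_vulns": 0}


def publish_to_store(image_id: str) -> None:
    print(f"[store] published {image_id}")


def gate(image_id: str) -> bool:
    results = scan_image(image_id)
    if results["scm_failures"] or results["critical_vulns"]:
        print(f"[gate] rejected {image_id}: {results}")
        return False
    publish_to_store(image_id)
    return True


gate("ami-0123456789abcdef0")  # example AMI ID; published only if checks pass
```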

Even on a system where no change is allowed, you still have to verify that no change happens, that vulnerabilities are not present and that the image is configured securely. New vulnerabilities may be published while the image is up and running, and you need to be made aware of them. And if a change is made to a system (there are ways around the "immutability" of an immutable system), you need to be alerted to it.

Tripwire Enterprise monitors OS and application system files, registry entries and other system state information. If a change is detected, an alert can be generated. If new vulnerabilities are published, IP360 can scan the running instances to see if any are present.

TE can check the configuration settings against your policy to ensure a correct configuration at the time an image is instantiated (spun up). Just because your provisioning tool sets all of the configurations and keeps them set doesn't mean they are the right settings. Tripwire provides the verification step needed for separation of duties in the environment.
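
That verification step can be thought of as an independent diff between the intended policy and the live settings, as in this toy sketch (the setting names are examples only, not a real policy):

```python
# Toy illustration of verifying live settings against policy, independent of
# the provisioning tool that set them. Setting names are examples only.
policy = {"PasswordAuthentication": "no", "PermitRootLogin": "no"}
live = {"PasswordAuthentication": "yes", "PermitRootLogin": "no"}

for key, expected in policy.items():
    actual = live.get(key)
    if actual != expected:
        print(f"[policy] {key}: expected {expected!r}, found {actual!r}")
```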

Containers

Containers are normally immutable systems, but they rely on another product (such as Docker) to provide most of the operating system functions. At a fundamental level, a container is an image made up of some parts of an OS and a single application. Changes are introduced by building an updated container image, rolling it into production and destroying the currently running version(s) of the same container. In other words, a running container should not have changes made to its application or OS files.

Since a container rarely includes enough of the OS to install something like the Tripwire Axon agent (though it can, and some companies have done this), you need another way to ensure the containers are secure as well. Scanning containers before they are put into a container repository, to ensure they start out secure, and then rescanning them periodically while in the repository is a good start.

Next, you need to secure the container runtime environment, whether that's Docker, Kubernetes, OpenShift, etc. Unless you're using a managed service like AWS Fargate, you're responsible for maintaining the security of the container environment. That means CIS checks and change control of the runtime application itself. A Tripwire Enterprise agent on the system running Kubernetes or Docker, with the runtime's files baselined and checked against CIS benchmarks, will keep the environment secure and ensure you know if anything changes. By having IP360 check the running and at-rest containers as well as the system they're running on, you can help ensure that there are no serious vulnerabilities.
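
One way to picture the change-control half of this is a simple baseline-and-compare over the runtime's configuration files, as in the sketch below. It only mimics the idea; it is not how Tripwire Enterprise works internally, and the watched paths are just common examples.

```python
# Baseline-and-compare sketch for container-host configuration files. This
# imitates the idea of change monitoring, not any real agent's implementation.
import hashlib
from pathlib import Path

WATCHED = [
    "/etc/docker/daemon.json",                        # example Docker config
    "/etc/kubernetes/manifests/kube-apiserver.yaml",  # example K8s manifest
]


def snapshot(paths: list[str]) -> dict[str, str]:
    return {
        p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
        for p in paths
        if Path(p).is_file()
    }


baseline = snapshot(WATCHED)
# ...later, on a schedule:
current = snapshot(WATCHED)
for path in WATCHED:
    if baseline.get(path) != current.get(path):
        print(f"[alert] {path} changed since baseline")
```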

Ensuring the containers themselves are secure before deploying them is called "shifting left": the security checks are moved from the end of the deployment process to the pre-deployment step of container creation. Previously, most security checks were done after the system was already in production! (Well, hopefully in QA.)

By having the program that builds the container send it to Tripwire for DevOps for CIS and vulnerability checks before saving it to the container repository, you can ensure the containers are secure before they ever run in production.

Many customers also set an expiration date for their containers. A 15- or 30-day expiration for all containers ensures that they are scanned anew and that patching and application updates happen on a regular basis; it also means that if any malware did make it into a container, that container is destroyed and fresh containers are deployed.
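
Enforcing such a policy can be as simple as comparing each container's start time against a maximum age, as in this sketch (container names and ages are made up):

```python
# Sketch of a container expiry policy: anything older than MAX_AGE is
# destroyed and redeployed from a freshly built, rescanned image.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)  # or 15, per the policy described above


def expired(started_at: datetime) -> bool:
    return datetime.now(timezone.utc) - started_at >= MAX_AGE


running = {
    "web-7f9c": datetime.now(timezone.utc) - timedelta(days=31),
    "web-2b41": datetime.now(timezone.utc) - timedelta(days=2),
}
for name, started in running.items():
    if expired(started):
        print(f"[rotate] {name}: destroy and redeploy from a fresh image")
```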

Stay tuned for the second part of our two-part series.