Moving to the cloud means a lot more than just moving your servers and applications to the cloud; it’s also about the data – and data always has a target on it.
Many IT departments are finding it easier to meet "five nines" (99.999%) of uptime and availability by going outside their organization and letting AWS, Microsoft, or Google handle the infrastructure and personnel needed to meet those requirements. And if your applications need configuration information or a place to store data, then you will be using cloud-based storage.
This is especially true if your data is in containers, on cloud-managed operating systems, or even in serverless applications (like AWS Lambda functions or Azure Functions).
Change Management in the Cloud
This also means that you’ve moved the problem of tracking changes to important files from on-premises to the cloud. Ah, but you’re thinking: AWS, Azure, and the rest track changes for me in CloudTrail or activity logs. That is true. However, it’s not the same as having a before-and-after view of each change, an easy-to-report-on audit trail, a way to check those changes against a change ticket, and a means of comparing configuration changes against a hardening standard.
Are your cloud teams creating change tickets for important changes to configurations/information in the cloud? If not, why not? Who or what is keeping track of whether changes are authorized and what may put you at risk?
Have you ever tried to pick out authorized versus unexpected or unauthorized changes in a log stream? It’s not easy, which is why baseline-based change tracking is used for operating systems, databases, network devices, etc. Creating a baseline of the current state of your files (a known “good state”), along with a mechanism to track deviation from that state, is just as important for files kept in cloud storage as it is for your on-premises files.
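The baseline idea can be sketched in a few lines of Python. This is a minimal illustration of the concept, not how a product like Tripwire is implemented; the file name and contents are hypothetical:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 hash for each file -- the known 'good state'."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def diff_against_baseline(baseline, paths):
    """Return files that changed, appeared, or disappeared since the baseline."""
    current = snapshot([p for p in paths if Path(p).exists()])
    changed = [p for p in baseline if p in current and current[p] != baseline[p]]
    added = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    return changed, added, removed

# Demo with a throwaway directory standing in for cloud storage
tmp = Path(tempfile.mkdtemp())
conf = tmp / "app.conf"
conf.write_text("debug = false\n")
baseline = snapshot([conf])          # capture the known good state
conf.write_text("debug = true\n")    # an unexpected edit
changed, added, removed = diff_against_baseline(baseline, [conf])
```

Comparing hashes against a stored baseline immediately surfaces the edited file, something you would otherwise have to reconstruct by sifting through a log stream.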
When the files in cloud storage are configuration files for your cloud-based applications, those should have both change tracking and hardening guidelines monitoring them. For instance, AWS has a lot of configuration settings for securing your S3 buckets beyond just whether they are public or not. There are plenty of questions to consider:
- Do you have encryption turned on?
- Is cross-region replication turned on? (Should it be?)
- Are you using S3 object locking? How is it configured on newly created buckets?
- Should versioning be turned on?
Every configuration setting has security and cost implications. Knowing what each is set to, and whether it complies with your corporate standard across your AWS accounts (or the equivalent in Azure), matters from both a security and a cost perspective.
Best Practices for your Cloud Environments
In terms of best practice, I strongly recommend turning on encryption for your data in cloud storage. If a bucket is accidentally made public and an unauthorized individual gains access, data encrypted with keys maintained outside of the cloud accounts remains unreadable. Given the way cloud storage providers build their data lakes on shared infrastructure, permissions are the only thing separating one company’s data from another’s. Of course, there should never be a case where access is mingled; however, since that part of the configuration is out of your control and entirely in the hands of the cloud provider, encrypt your data.
Since the separation and access control of the data and files you store in the cloud rest solely on the configuration settings in your cloud account, you should be monitoring those configurations for changes. You should also test any change against a reliable standard, for example to ensure that storage encryption stays turned on. Permissions are typically set according to corporate standards (for example, no “*” wildcards for permissions to buckets). When permissions on buckets and files change, someone or some process should review those changes for security and compliance and confirm that the change was authorized.
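As one example of the wildcard rule above, a bucket policy can be scanned for overly broad `Allow` statements. The policy below is hypothetical (the bucket name, account ID, and `Sid` values are made up), but it follows the real IAM policy document structure:

```python
import json

# A hypothetical bucket policy: one scoped statement, one far too broad
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "AppRead", "Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::123456789012:role/app"},
     "Action": ["s3:GetObject"],
     "Resource": "arn:aws:s3:::example-bucket/*"},
    {"Sid": "TooBroad", "Effect": "Allow",
     "Principal": "*",
     "Action": "s3:*",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}""")

def wildcard_statements(policy):
    """Flag Allow statements whose Principal or Action is a bare wildcard."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        actions = stmt.get("Action")
        actions = actions if isinstance(actions, list) else [actions]
        if principal in ("*", {"AWS": "*"}) or "s3:*" in actions:
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

violations = wildcard_statements(policy)
```

A check like this, run whenever the policy changes, is one way to connect "the permissions changed" to "the change violates our standard."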
In working with a number of customers, I’ve noticed some interesting trends in cloud-based storage setups. For example, configuration files often sit in the same buckets as logs and other cache files. I recommend enforcing separation between temporary files and permanent configuration files in buckets. Using a naming scheme that is specific to temporary files can help a file monitoring product like Tripwire better determine what can and should be filtered out.
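A naming convention makes that filtering trivial. The patterns below are a hypothetical convention, not a Tripwire feature; the point is that a consistent scheme lets a few glob patterns separate noise from the files worth monitoring:

```python
from fnmatch import fnmatch

# Hypothetical convention: keys under tmp/ or ending in .cache / .log
# are temporary and can be excluded from change reports.
TEMP_PATTERNS = ["tmp/*", "*.cache", "*.log"]

def is_monitored(key, temp_patterns=TEMP_PATTERNS):
    """Keep configuration files; drop keys matching the temporary-file convention."""
    return not any(fnmatch(key, pat) for pat in temp_patterns)

keys = ["config/app.yaml", "tmp/session-123", "access-2024.log", "config/db.conf"]
monitored = [k for k in keys if is_monitored(k)]
```

Without a convention, every temporary file that churns in the bucket shows up as a "change," burying the edits you actually care about.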
Is Your Data Safe in the Cloud?
Overall, cloud storage has been a huge boon to the corporate IT world as well as to cloud storage providers themselves. But handing off your data to another entity doesn’t transfer or erase the responsibility (and liability) that you have for securing that data.
The move to the cloud comes with a lot of new configurations and places where changes can be made – all of which must be tracked. At the very least, make sure you’re doing CIS or equivalent hardening of your cloud configuration. You should also do a thorough review of all your base configurations around cloud storage and ensure those configurations remain correct. Tracking changes to important files in cloud storage is the next area that must be considered.
Once you have a good handle on how your environment is configured and on what’s changing and when, you can sleep a little better at night knowing that your data in the cloud is safe and secure.
For a more in-depth look at cloud storage and security, please read this blog post about AWS S3 storage.