I was just reading an article from Ted Samson on Infoworld’s Cloud Computing blog titled, “Sloppy use of Amazon cloud can expose users to hacking.” One of the points in this article is:
“third parties evidently are not following best security practices when using preconfigured virtual machine images available in Amazon’s public catalog, leaving users and providers open to such risks as unauthorized access, malware infections, and data loss”
This statement is directed at Amazon’s cloud, but it applies equally to any cloud provider and, truth be told, to any on-premise deployment of technology.
I’ve been part of a team studying “security best practices” for over a decade and, from my observations, very few IT organizations really follow best known methods consistently.
Security is hard – or, at least, inconvenient
There are a few challenges that contribute to this problem.
Fast often wins out over good.
A lot of the missteps I’ve seen in security come from people just wanting to get things done quickly. This means taking shortcuts, accepting default settings, and the like. However, I think the bigger problem with going fast is that planning often gets cut out of the process. Take the time to assess your risks, really understand the technology you’re using, plan your security strategy, and design your implementation with security in mind – you’ll be better off, no matter what – or where – you’re deploying.
Standards? What standards?
Another challenge is selecting standards. Which should you use? Which ones truly apply to your situation? The good news is there are a lot of standards to choose from:
- In the case of cloud infrastructure the Cloud Security Alliance has some good guidance;
- When it comes to specific systems, including workloads deployed in the cloud, the Center for Internet Security (CIS) has great prescriptive guidelines for secure configurations;
- SANS has great training and guidelines for securing infrastructure.
The bad news is that it can be challenging to find the right standard, implement it quickly, and integrate it into an ongoing set of practices. It’s the old “knowing vs. doing” challenge.
There is nothing like a pre-flight checklist
One thing you’ll notice in the Infoworld article is its focus on “preconfigured virtual machines.” Preconfigured VMs make it quick and easy to deploy known and trusted infrastructure. The problem is determining whether the preconfigured VM can actually be trusted.
With any preconfigured infrastructure, it is a good idea to ensure that you understand the security of the preconfigured component before you deploy it. Automated configuration validation is a great way to do this. I suggest integrating an initial assessment (ideally before you check the “template” into your virtual machine library), as well as a periodic review of the security profile of the items in the library.
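To make the idea concrete, here is a minimal sketch of what automated configuration validation might look like before a template is checked into the VM library. The settings and baseline values are purely illustrative assumptions, not any particular standard:

```python
# Hypothetical security baseline a template must satisfy before it is
# accepted into the virtual machine library. The settings are illustrative.
SECURITY_BASELINE = {
    "ssh_password_auth": False,   # require key-based SSH only
    "root_login": False,          # no direct root login
    "firewall_enabled": True,
    "auto_updates": True,
}

def validate_template(template_config: dict) -> list:
    """Return a list of findings where the template deviates from the baseline."""
    findings = []
    for setting, required in SECURITY_BASELINE.items():
        actual = template_config.get(setting)
        if actual != required:
            findings.append(f"{setting}: expected {required}, found {actual}")
    return findings

# Example: a template that still allows password-based SSH fails validation.
template = {
    "ssh_password_auth": True,
    "root_login": False,
    "firewall_enabled": True,
    "auto_updates": True,
}

for issue in validate_template(template):
    print("Template failed validation -", issue)
```

The same check can be re-run periodically against every template already in the library, which covers the “periodic review” part of the approach.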
This approach can help you ensure that the templates themselves keep up with the security requirements of your business, and that you understand any inherent risks in previously deployed infrastructure derived from older versions of the templates.
You can’t outsource accountability
Another issue I’ve seen is that enterprises often put too much trust in their vendors, hosting providers, or other third parties. It is often easy to sign a contract, trust that the provider knows what they are doing, and hope you don’t have an issue. As the old saying tells us: Trust is not a control, and hope is not a strategy.
This is an area where being specific about your standards and expectations is critical. If you have standards, ensure that the third party’s practices align with them. Insist on ongoing proof that your standards are being upheld by the provider (in other words, trust but verify). If you read the fine print, the provider likely isn’t on the hook for a lot of the bad things that could happen, so it’s up to you as a customer to insist they are taking due care of your systems and data.
Security is not a point in time
Another critical failing I’ve seen, even among savvy security professionals, is focusing on security at a point in time but allowing configuration drift and infrastructure changes to move them into an insecure state.
You’re probably familiar with the phenomenon – at the beginning of a project…or just before an audit…or just after a major security incident…you spend a lot of time getting your security house in order. Things are good. Then, suddenly, other distractions and priorities encourage you to take your eye off the ball. One day, you wake up and you’ve got a mess on your hands, and it happened without you realizing it.
The best way to combat this is to implement practices (monitoring, sign-offs, documentation, and a lot of the other mundane work) to help you bolt strong security practices into your daily operations. This is an area where standards, configuration baselines, and automated monitoring can help tremendously.
If you know what “secure” looks like, you can easily manage by exception by monitoring for and alerting on things that don’t match your definition of “secure.” That means you don’t have to wait for the next audit or catastrophic event before you return your systems to a secure state.
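The manage-by-exception idea can be sketched as a simple drift check: compare each system’s live configuration to its approved baseline and surface only the differences. The host names and settings below are hypothetical placeholders:

```python
# Hypothetical approved baseline - the definition of "secure" for a class
# of systems. Real baselines would come from your configuration standard.
APPROVED_BASELINE = {
    "ntp_server": "time.internal",
    "log_forwarding": True,
    "open_ports": [22, 443],
}

def detect_drift(live_config: dict) -> dict:
    """Return only the settings that have drifted from the baseline."""
    drift = {}
    for key, expected in APPROVED_BASELINE.items():
        actual = live_config.get(key)
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

# A compliant host produces no alerts; a drifted host surfaces only the exceptions.
compliant = {"ntp_server": "time.internal", "log_forwarding": True,
             "open_ports": [22, 443]}
drifted = {"ntp_server": "time.internal", "log_forwarding": False,
           "open_ports": [22, 443, 8080]}

for setting, detail in detect_drift(drifted).items():
    print(f"ALERT {setting}: expected {detail['expected']}, got {detail['actual']}")
```

Run on a schedule, a check like this alerts only when something no longer matches your definition of “secure,” so routine operations stay quiet and exceptions get attention.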
The bottom line on cloud security
The bottom line is that “the cloud” is not to blame for the issues cited in the Infoworld article – weak security practices and the lack of process rigor are. Sure, the cloud can amplify bad practices, but it isn’t significantly more dangerous to run infrastructure in the cloud than it is to run on local infrastructure. Poor security practices can screw up any infrastructure, no matter how advanced.
My advice is to become an informed practitioner, anchor your practices to objective security and configuration standards, leverage automation (like security configuration management, continuous configuration monitoring, and other methods to create hardened systems) and ensure that you have ongoing visibility into how your systems compare to those objective standards.
The more work you do up front to develop the right security strategy for your business, the better. The more you do to automate the ongoing monitoring of the state of your infrastructure, the easier it will be to wire it into daily operations where you can manage by exception. By doing those things, you have a shot at maintaining effective security over the long run.
What about you – any thoughts or advice from your experience in the trenches?