Amazon’s latest security issue, which came to light just days before one of its biggest sale events of the year, has been making headlines. And whilst it probably won’t stop the online retail giant from achieving a profitable Black Friday and Cyber Monday this year, it will certainly make many users stop and think.
Though it’s still early in the disclosure period, Amazon stated that the incident did not amount to a “breach of its website or any of its systems, but a technical issue that inadvertently posted customer names and email addresses to its website.”
It’s quite possible that this statement will be all the general public ever learns about the incident. Even so, teams at Amazon were no doubt sitting down in meeting rooms for long conversations about the lessons to be learned and how the company can avoid similar headlines.
When I’ve sat in on such meetings in the past, many of the conversations about these issues stemmed not from an outsider threat but from a risk much closer to home – a simple misconfiguration. Whether it’s an inadvertent firewall rule or a single typo leaving an access setting open, human error remains one of the top risks to an IT organization’s security.
Tripwire Enterprise can be configured to monitor your IT infrastructure for change (including Amazon Web Services), making a wealth of data immediately available to all parts of the IT organization. The organization can then use that data to manage expected changes and to surface the unexpected ones that can indicate a breach.
But collecting the data alone doesn’t necessarily facilitate the positive changes needed to reduce these risks. Properly managing the data collected and identifying ways to quickly turn it into targeted and productive information requires a strategy.
When I work with clients, I spend a lot of time discussing approaches to building these strategies and the steps that allow Tripwire to become a source of actionable information: resolving outages quickly for Operations, expanding forensic sources for Security, providing compliance reporting for Governance, and giving management a perspective on the volume of change occurring across the IT infrastructure.
One tool, with the right configuration, can be a source of truly critical data, and with the right team and processes (both human and system based), it can turn this data into information.
Change management doesn’t have to be a “big bang” project – even small steps towards improving the change process can make a big difference. Start by identifying high-risk business elements, such as your core business application’s configuration files or the membership of key security groups; a small scope is a sensible but powerful place to begin.
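Tripwire Enterprise does this at scale, but the core idea of watching a small set of high-risk files can be sketched in a few lines of Python. This is a minimal illustration, not Tripwire’s implementation; the file paths in `WATCHED_FILES` are hypothetical placeholders you would replace with your own high-risk elements.

```python
import hashlib
from pathlib import Path

# Hypothetical high-risk files to watch; substitute your own.
WATCHED_FILES = ["/etc/app/app.conf", "/etc/ssh/sshd_config"]

def fingerprint(path):
    """Return a SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths):
    """Record a known-good fingerprint for each watched file."""
    return {path: fingerprint(path) for path in paths}

def detect_changes(baseline):
    """Return the watched files whose contents no longer match the baseline."""
    changed = []
    for path, known_hash in baseline.items():
        try:
            if fingerprint(path) != known_hash:
                changed.append(path)
        except FileNotFoundError:
            changed.append(path)  # deletion is also a change worth flagging
    return changed
```

Run `build_baseline` once over a trusted state, store the result, and call `detect_changes` on a schedule; anything it returns is a change that needs investigating.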
In traditional and smaller environments, these items typically don’t change very often, but teams practising DevOps and larger organizations may find that even these small configuration elements change regularly. The next step, then, is to classify the good changes and the bad changes appropriately.
That could be as simple as identifying trusted authorities who might make a change or working out the change windows when you’d expect changes to occur.
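Those two checks can be sketched as a simple classifier. The author names and change window below are hypothetical policy values for illustration, not anything Tripwire-specific:

```python
from datetime import datetime, time

# Hypothetical policy: who is trusted to make changes, and when.
TRUSTED_AUTHORS = {"svc-deploy", "alice.admin"}
CHANGE_WINDOW = (time(22, 0), time(2, 0))  # 22:00 to 02:00, spans midnight

def in_change_window(when, window=CHANGE_WINDOW):
    """True if a timestamp falls inside the approved change window."""
    start, end = window
    t = when.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end  # window wraps around midnight

def classify_change(author, when):
    """Label a change 'expected' only when a trusted authority
    made it inside the change window; everything else is flagged."""
    if author in TRUSTED_AUTHORS and in_change_window(when):
        return "expected"
    return "unexpected"
```

Even this crude rule immediately separates routine deployment activity from changes that deserve a closer look.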
By starting with these processes, you not only gain a significant benefit in managing your change systems, but you also get a chance to understand early on how well change control processes are followed within your organization. And even with a small data set, it’s possible to highlight the amount of change that your teams manage and control.
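As a minimal illustration of surfacing that change volume, assuming change records have already been exported as simple dicts (the system names and statuses here are made up for the example):

```python
from collections import Counter

# Hypothetical change records; in practice these would come from your
# monitoring tool's export or API.
changes = [
    {"system": "web-frontend", "status": "expected"},
    {"system": "web-frontend", "status": "unexpected"},
    {"system": "payments-db", "status": "expected"},
    {"system": "payments-db", "status": "expected"},
]

def change_summary(records):
    """Count changes per (system, status) so teams can see both the
    volume of change they manage and how much of it was unplanned."""
    return Counter((r["system"], r["status"]) for r in records)
```

A report built from these counts is often the first time management sees, in one place, how much change the organization actually absorbs.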
In large cloud infrastructures, even tiny changes can have very high-impact results.
The good news is that a small investment of time in monitoring to verify changes can help keep your systems from experiencing issues like those faced by others who haven’t invested as wisely in robust change management tools to support their change processes.