News travels fast in the information security community, and the combination of politics, cloud and cybersecurity makes for fast-moving headlines. You’ve no doubt read by now about the disclosure of 198 million records by a political data analytics firm. This isn’t a case of malicious hacking, but of misconfiguration: the records were simply left unprotected in Amazon S3.
To be clear, a lot of this information is publicly available but the assembly of the data into one source, correlated with additional information, makes this a treasure trove of personal data. It’s important to understand that this incident didn’t start with the discovery that the data was unsecured; it started at the moment the data was unsecured, regardless of who knew of that fact.
Those affected should be primarily concerned with who may have accessed this data before the misconfiguration was known. With data, theft doesn’t require that the original source disappear; it can be stolen over and over again without anyone noticing.
The immediate concern that’s likely to dominate analysis is identity theft. Anytime a collection of personal data is disclosed, the constituent data points may be used for a variety of nefarious activities.
Even a list of valid email addresses has value for targeted spam. In this case, the data doesn’t contain email addresses but much more, including first and last name, home address, party affiliation, ethnicity and date of birth.
This data can be used as part of a variety of scams that lead to personal loss for the individuals involved. And the risk doesn’t end with discovery of the disclosure. This risk extends for years.
More insidious, however, is the potential for a malicious nation state to use this data. Political parties use data like this to win elections, and we know there are nation states that would like to manipulate the outcomes of elections in the United States.
If this data has been downloaded by groups with the interest and means, it would dramatically improve the effectiveness of any efforts to manipulate outcomes in the future. Again, this is a risk that doesn’t disappear now that we’ve discovered and closed the hole. It’s another risk that will last for years to come.
This incident should be a wake-up call for any organization that collects personal data, especially those using the cloud for storage or processing. It’s easy to look at cloud-based services as a way to shift costs and operational burden, but you can’t outsource responsibility for securing data.
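Since the root cause here was an access-control setting, it’s worth seeing how mechanical such a check can be. The sketch below assumes ACL grants in the shape an API client such as boto3 would return (no real AWS call is made, and the function name is my own); the two group URIs, however, are the real ones S3 uses for “everyone” and “any authenticated AWS user”:

```python
# Hypothetical check: flag S3-style ACL grants that expose a bucket to the
# world. The grant structure mirrors what an S3 API client returns, but this
# sketch runs entirely on local data.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the subset of ACL grants that open the bucket to everyone."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]

# Example ACL: one private grant to the owner, one world-readable grant.
acl = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]

print(len(public_grants(acl)))  # 1 -- the world-readable READ grant
```

A scheduled check like this, run against every bucket, turns “is anything public that shouldn’t be?” from a manual audit into a routine control.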
While it’s tempting to start looking for shiny, new solutions when an incident like this occurs, the fact is that most of the risk organizations experience can be mitigated by applying security basics consistently and comprehensively. The cloud doesn’t change the equation here, though it may change some of the implementation.
It may be trite, but you can’t protect what you don’t know about. Starting with asset discovery is important. While ‘asset’ used to mean a physical server plugged into the network, we need to broaden our definition of asset to keep up with technology.
Asset discovery should still include servers, workstations, network devices and other traditional assets, but it must also be expanded to include databases, containers, cloud instances and infrastructure. The information about what’s out there is the basis for what you protect.
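To make the broadened definition concrete, here is a minimal sketch of a unified inventory. The discovery sources (a network scan and a cloud-API listing) are stand-ins for real tooling; in practice each would be fed by a scanner or a provider’s API:

```python
# Sketch: merge asset discovery results from multiple sources into one
# inventory, de-duplicating on identifier. All names here are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    identifier: str   # hostname, instance ID, container ID, etc.
    kind: str         # "server", "database", "cloud-instance", ...
    source: str       # which discovery mechanism found it

def merge_inventories(*sources):
    """Combine discovery results, keeping the first sighting of each asset."""
    seen = {}
    for source in sources:
        for asset in source:
            seen.setdefault(asset.identifier, asset)
    return list(seen.values())

network_scan = [
    Asset("db01.internal", "database", "network-scan"),
    Asset("web01.internal", "server", "network-scan"),
]
cloud_listing = [
    Asset("i-0abc123", "cloud-instance", "cloud-api"),
    Asset("db01.internal", "database", "cloud-api"),  # already known
]

inventory = merge_inventories(network_scan, cloud_listing)
print(len(inventory))  # 3 unique assets
```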
Second, it’s vitally important to start with a secure configuration. This includes not only basic security settings but also compliance with policies. While disclosure of data is certainly damaging, fines from policy violations can make an incident worse, or become an incident in and of themselves.
To start with secure configurations, you have to be able to specify what they are for every asset in your environment and measure each asset against those standards. This may not be exciting work, but that doesn’t mean it’s easy to do well.
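The “specify and measure” step above can be sketched as a simple baseline comparison. The setting names here are illustrative, not taken from any particular hardening standard:

```python
# Sketch: measure an asset's settings against a declared secure baseline and
# report every deviation. Setting names are hypothetical examples.

BASELINE = {
    "encryption_at_rest": True,
    "public_access": False,
    "password_min_length": 12,
}

def deviations(asset_config):
    """Return {setting: (expected, actual)} for every baseline violation."""
    return {
        key: (expected, asset_config.get(key))
        for key, expected in BASELINE.items()
        if asset_config.get(key) != expected
    }

config = {"encryption_at_rest": True,
          "public_access": True,
          "password_min_length": 8}

print(deviations(config))
# {'public_access': (False, True), 'password_min_length': (12, 8)}
```

The hard part in practice isn’t the comparison; it’s maintaining an accurate baseline for every asset type and actually collecting each asset’s current settings.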
No environment is static. Change is a constant. Even if you start out securely, you’re likely to end up with insecure configurations or policy violations through the operational life of your assets.
Monitoring for change is the appropriate bookend to secure configuration management. If you can detect changes as they occur, you can remediate them more quickly. The longer an insecure configuration exists in your environment, the longer it presents a risk to your organization.
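One simple way to detect that something changed is to fingerprint configuration snapshots and compare against the last known-good state. This sketch only detects *that* drift occurred; a real change monitor would also record what changed, when, and by whom:

```python
# Sketch: detect configuration drift by hashing a canonical form of each
# configuration snapshot. Setting names are illustrative.

import hashlib
import json

def fingerprint(config):
    """Stable hash of a configuration dict."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

known_good = fingerprint({"ssh_root_login": False, "firewall": "on"})
current = fingerprint({"ssh_root_login": True, "firewall": "on"})

print(known_good != current)  # True -- drift detected, investigate
```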
Change occurs outside your environment, as well. Even if you keep a system from changing, new threats and vulnerabilities may emerge that affect your risk posture. That’s why it’s imperative to consistently monitor your environment for external change, too; that is, vulnerability management.
Identifying, prioritizing and remediating vulnerabilities is a basic security best practice. Moving systems to the cloud doesn’t magically prevent them from being vulnerable, and the cloud infrastructure provider won’t magically fix vulnerabilities in your applications and software.
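The prioritization step can be as simple as combining severity with exposure. The scoring below is deliberately crude (CVSS base score, doubled when the affected asset is internet-facing) and the CVE identifiers are placeholders; real programs weigh many more factors, such as exploit availability and asset criticality:

```python
# Sketch: order vulnerability findings so the riskiest come first, using a
# hypothetical severity-times-exposure score.

def prioritize(findings):
    """Sort findings by descending risk."""
    def risk(f):
        multiplier = 2.0 if f["internet_facing"] else 1.0
        return f["cvss"] * multiplier
    return sorted(findings, key=risk, reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False},
    {"id": "CVE-B", "cvss": 6.5, "internet_facing": True},
    {"id": "CVE-C", "cvss": 7.2, "internet_facing": True},
]

print([f["id"] for f in prioritize(findings)])
# ['CVE-C', 'CVE-B', 'CVE-A']
```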
While it’s ideal to prevent every incident, it’s not generally possible. In the case where an incident does occur, or where you suspect something malicious may have happened, you need the log data necessary to investigate fully. Log events can also be used to proactively alert organizations to activity that’s in progress.
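At its simplest, proactive alerting means matching incoming log events against patterns worth investigating. The patterns and log format below are illustrative; a real deployment would feed a SIEM or alerting pipeline rather than a substring scan:

```python
# Sketch: flag log lines that match any of a set of alert patterns.
# Patterns and sample log lines are hypothetical.

import re

ALERT_PATTERNS = [
    re.compile(r"authentication failure", re.IGNORECASE),
    re.compile(r"unauthorized", re.IGNORECASE),
]

def alerts(log_lines):
    """Return log lines matching any alert pattern."""
    return [
        line for line in log_lines
        if any(p.search(line) for p in ALERT_PATTERNS)
    ]

logs = [
    "2017-06-20T10:01:02 sshd: Authentication failure for root from 203.0.113.7",
    "2017-06-20T10:01:05 app: request served in 12ms",
    "2017-06-20T10:02:11 s3: Unauthorized GetObject on example-bucket",
]

print(len(alerts(logs)))  # 2 suspicious events
```

Just as important as the matching is retention: if the logs from the window when data was exposed no longer exist, no amount of tooling can reconstruct who accessed it.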