The recent Deep Root Analytics incident that exposed sensitive information of 198 million Americans, or almost all registered voters, was yet another reminder of the risks that come with storing data in the cloud. The most alarming part, perhaps, is that this massive leak of 1.1 terabytes of personal data—the “mother lode of all leaks,” as some have described it—could have been easily avoided.
This security incident highlighted the fact that insiders pose as much of a threat to an organization as external hackers, even when those insiders are not acting maliciously. Enterprises experience an average of 11 insider threats every month, whether from malicious or negligent insiders.
In Deep Root Analytics’ case, it was simple negligence. The data repository sat in an AWS S3 bucket whose access was set to public, so anyone could find it—and download much of it—simply by navigating to an Amazon subdomain.
Misconfiguring an S3 bucket is a common mistake. IaaS platforms like AWS are often overlooked in organizations’ security programs, and the Deep Root Analytics leak underscores the importance of a strategy that can prevent this type of costly misstep.
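To make the misconfiguration concrete: a bucket becomes world-readable when its access control list (ACL) grants permissions to the public "AllUsers" group. The sketch below, a minimal illustration rather than any vendor's tooling, flags such grants in an ACL structure shaped like the one S3's GetBucketAcl API returns (in a real audit, you would fetch the ACL with a tool such as boto3 rather than hard-code it).

```python
# Minimal sketch: flag S3 ACL grants that expose a bucket to everyone.
# The dict shape mirrors what S3's GetBucketAcl API returns; in practice
# you would retrieve it per bucket rather than define it inline.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the grants that give access to the public at large."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

if __name__ == "__main__":
    # Example ACL with one private grant and one public READ grant.
    acl = {
        "Grants": [
            {"Grantee": {"Type": "CanonicalUser", "ID": "bucket-owner"},
             "Permission": "FULL_CONTROL"},
            {"Grantee": {"Type": "Group",
                         "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
             "Permission": "READ"},
        ]
    }
    for grant in public_grants(acl):
        print("PUBLIC:", grant["Permission"])  # prints "PUBLIC: READ"
```

A periodic scan like this across all buckets would have surfaced the Deep Root Analytics exposure before anyone outside the company did.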
Ensuring proper configuration will also help protect data from outside threats. The AWS platform itself has strong security thanks to extensive investments by Amazon, but even the strongest defenses can be breached by resourceful and persistent bad actors. As we saw last year in the Dyn DDoS attack, a large-scale attack can still overwhelm the sophisticated security protocols of AWS.
Understanding the Shared Responsibility Model
Like most cloud providers, AWS uses a shared responsibility model: both the vendor and the customer are responsible for securing the data. The vendor, Amazon, is responsible for security “of the cloud,” i.e., the underlying infrastructure, including hosting facilities, hardware, and software. Amazon’s responsibility covers protection against intrusion as well as detection of fraud and abuse.
The customer, in turn, is responsible for security “in” the cloud, i.e., the organization’s own content, the applications it runs on AWS, and identity and access management, as well as its internal infrastructure such as firewalls and the network.
Under this model, Deep Root Analytics was the one liable for the recent data exposure—and the implications will likely linger for a long time.
How to Secure Your Data on the AWS Platform
These best practices can serve as a starting point.
- Enable CloudTrail across all AWS regions and turn on CloudTrail log file validation. Enabling CloudTrail generates logs of API call history, giving you visibility into data such as resource changes. With log file validation on, you can detect whether a log file was modified or deleted after delivery to the S3 bucket.
- Enable access logging on the CloudTrail S3 bucket. This bucket contains the log data that CloudTrail captures. Access logging lets you track requests to the bucket and identify potential attempts at unauthorized access.
- Enable flow logging for Virtual Private Cloud (VPC). Flow logs let you monitor network traffic crossing the VPC, alerting you to anomalous activity such as unusually high levels of data transfer.
- Provision access to groups or roles using identity and access management (IAM) policies. By attaching IAM policies to groups or roles instead of individual users, you minimize the risk of unintentionally granting a user excessive permissions and privileges, and you make permission management more efficient.
- Restrict access to the CloudTrail bucket logs and require multifactor authentication for bucket deletion. Unrestricted access, even for administrators, increases the risk of unauthorized access if credentials are stolen, for example through phishing. And if the AWS account is compromised, multifactor authentication makes it harder for attackers to hide their trail by deleting the logs.
- Encrypt log files at rest. Grant decryption permission only to those users who already need access to the S3 buckets containing the CloudTrail logs.
- Regularly rotate IAM access keys. Rotating the keys and setting a standard password expiration policy helps prevent access due to a lost or stolen key.
- Restrict access to commonly targeted ports, such as those used by FTP, MongoDB, MSSQL, and SMTP, to required entities only.
- Don’t use access keys with root accounts. Doing so can easily compromise the account and open access to all AWS services in the event of a lost or stolen key. Create role-based accounts instead—and avoid using root user accounts altogether.
- Terminate unused keys and disable inactive users and accounts. Both unused access keys and inactive accounts increase the threat surface and the risk of compromise.
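The key-rotation point above lends itself to automation. The sketch below shows the age check an audit script might run; the 90-day threshold is an illustrative assumption (use whatever your rotation policy mandates), and the key records mirror the shape of IAM's ListAccessKeys response, which in practice you would fetch via the AWS API rather than construct by hand.

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold; substitute your organization's rotation policy.
MAX_KEY_AGE = timedelta(days=90)

def keys_due_for_rotation(keys, now=None):
    """Return the IDs of access keys older than MAX_KEY_AGE.

    `keys` is a list of {"AccessKeyId": str, "CreateDate": datetime}
    records, mirroring the shape of IAM's ListAccessKeys response.
    """
    now = now or datetime.now(timezone.utc)
    return [k["AccessKeyId"] for k in keys
            if now - k["CreateDate"] > MAX_KEY_AGE]

if __name__ == "__main__":
    now = datetime(2017, 7, 1, tzinfo=timezone.utc)
    keys = [
        {"AccessKeyId": "AKIAOLD",  # created ~6 months ago: overdue
         "CreateDate": datetime(2017, 1, 1, tzinfo=timezone.utc)},
        {"AccessKeyId": "AKIANEW",  # created 1 month ago: fine
         "CreateDate": datetime(2017, 6, 1, tzinfo=timezone.utc)},
    ]
    print(keys_due_for_rotation(keys, now))  # ['AKIAOLD']
```

Feeding the flagged key IDs into a ticketing or deactivation workflow closes the loop on both the rotation and the unused-key bullets.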
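The port-restriction advice can likewise be checked programmatically. This sketch scans security-group-style ingress rules for sensitive service ports open to the entire internet; the rule shape loosely follows EC2's DescribeSecurityGroups output, and the port list is an illustrative assumption to adapt to your environment.

```python
# Ports commonly probed by attackers; adjust the list to your environment.
SENSITIVE_PORTS = {21: "FTP", 25: "SMTP", 1433: "MSSQL", 27017: "MongoDB"}

def world_open_sensitive_rules(ingress_rules):
    """Flag ingress rules exposing a sensitive port to 0.0.0.0/0.

    Each rule is a dict loosely shaped like an entry in EC2's
    DescribeSecurityGroups "IpPermissions" list.
    """
    findings = []
    for rule in ingress_rules:
        port = rule.get("FromPort")
        cidrs = [r.get("CidrIp") for r in rule.get("IpRanges", [])]
        if port in SENSITIVE_PORTS and "0.0.0.0/0" in cidrs:
            findings.append((SENSITIVE_PORTS[port], port))
    return findings

if __name__ == "__main__":
    rules = [
        {"FromPort": 27017, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 1433, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
    ]
    # Only the MongoDB rule is world-open; the MSSQL rule is internal.
    print(world_open_sensitive_rules(rules))  # [('MongoDB', 27017)]
```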
If you’re using custom applications in AWS, you also need to follow best practices for custom application security. Don’t leave any loopholes for bad actors to exploit or for your IT team to overlook. Mistakes like those made by Deep Root Analytics can be prevented—and no organization can afford the implications of not paying attention to their policies and practices.
About the Author: Sekhar Sarukkai is a Co-Founder and the Chief Scientist at Skyhigh Networks, driving future innovations and technologies in cloud security. He brings more than 20 years of experience in enterprise networking, security, and cloud service development.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.