
WHAT IS AWS S3?
AWS S3, otherwise known as the Simple Storage Service, lets you store arbitrary objects inside of buckets. URLs can be used to retrieve those objects, assuming you have the appropriate permissions to do so. For example, you could store a simple JavaScript file in an S3 bucket that you expose for anonymous read access, which you can then include in a script tag on a public static webpage using a URL like https://s3-us-west-2.amazonaws.com/mypublicbucket/my_public_file.js.

There are two primary methods for managing external access to your S3 objects. The first uses what is known in AWS S3 nomenclature as ACLs, which is by far the more basic of the two methods I’ll discuss here. Through ACLs, you can grant basic read/write permissions to other AWS accounts or predefined S3 groups. If you’re granting access to AWS accounts, you of course want to audit those accounts and their levels of access to ensure the principle of least privilege is being adhered to. For the purposes of this discussion, though, what we’re most concerned with is making sure we haven’t exposed our objects to anonymous access. To that end, you want to make sure you haven’t granted read and/or write permissions to either the “AuthenticatedUsers” or “AllUsers” predefined global groups. Any read and/or write permission granted to these groups in an ACL should immediately raise a red flag. Granting either of these predefined groups permissions to your objects should only be done when absolutely necessary, and you should always place a high priority on evaluating any change in access that affects these groups.

While it may be obvious why you should be careful with granting access to a group called “AllUsers,” you may have raised an eyebrow at “AuthenticatedUsers.” Why would you need to be concerned about that? Because it’s any authenticated AWS user, not just those associated with your own account. Anyone could register a new AWS account and use those credentials to make requests to your files as an authenticated AWS user if your files have been granted permissions to this group. For all intents and purposes, it may as well be anonymous read access, only with an added false sense of security if you don’t understand just how many users you would be exposing your buckets and objects to.

So how can you manually tell if you’ve exposed S3 objects via an ACL? Clicking through the AWS console is one way. If you navigate to an S3 bucket or object in the AWS console and click the Permissions tab, you can determine whether or not an object has been exposed by looking for permissions for the “Everyone” or “Any AWS User” groups under the Public Access section. Amazon will also usually helpfully place yellow “Public” markers in the UI if it has detected that you’ve exposed a bucket or file publicly. That big yellow “Public” marker should be your red flag to go assess those buckets and/or objects.
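If you would rather script this check than click through the console, the sketch below walks each bucket's ACL and flags grants to those two predefined groups. This is my own minimal illustration, assuming the boto3 SDK (which this post doesn't otherwise use); the group URIs are the standard AWS identifiers behind "Everyone" and "Any AWS User."

# Minimal sketch: flag bucket ACL grants to the AllUsers / AuthenticatedUsers
# predefined groups. Assumes boto3 and credentials allowed to read bucket ACLs.
import boto3

RISKY_GROUPS = {
    'http://acs.amazonaws.com/groups/global/AllUsers',            # "Everyone"
    'http://acs.amazonaws.com/groups/global/AuthenticatedUsers',  # "Any AWS User"
}

s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    acl = s3.get_bucket_acl(Bucket=bucket['Name'])
    for grant in acl['Grants']:
        grantee = grant['Grantee']
        if grantee.get('Type') == 'Group' and grantee.get('URI') in RISKY_GROUPS:
            print('RED FLAG: %s grants %s to %s'
                  % (bucket['Name'], grant['Permission'], grantee['URI']))

The same check applies per object via get_object_acl; extending the loop over the bucket's object listing is straightforward.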

You can also audit these ACLs from the command line using the AWS CLI:

aws s3api list-buckets
aws s3api get-bucket-acl --bucket bucket-name
aws s3api list-objects --bucket bucket-name
aws s3api get-object-acl --bucket bucket-name --key file-name

The second method for managing access to your S3 objects is using bucket or IAM user policies. It’s far more complicated than using ACLs and, surprise, offers you yet more flexibility. Bucket policies and user policies are access policy options for granting permissions to S3 resources using a JSON-based access policy language. In policies, you grant specific principals access to specific actions on specific resources. Resources are your buckets and objects. Actions are the set of operations permitted on those resources. Principals are the accounts or users who are allowed access to those actions and resources. Where things start getting really complicated is that you can provide wildcards for your resources, actions, and principals. An example S3 bucket policy that would expose all of the files in your bucket is the following:
{ "Version":"2012-10-17", "Statement":[ { "Sid":"AddPerm", "Effect":"Allow", "Principal": "*", "Action":["s3:GetObject"], "Resource":["arn:aws:s3:::examplebucket/*"] } ] }

As with ACLs, you can retrieve a bucket’s policy from the command line:

aws s3api get-bucket-policy --bucket bucket-name

With all these permutations of policies and ACLs, it gets very complicated to determine whether or not a file is actually exposed on the Internet when it shouldn’t be. In fact, your policies may be so complicated that probing your files directly by sending HEAD requests to your bucket and object URLs is a prudent step on top of your manual assessment of any changes in policies and ACLs. Probing with HEAD requests is a fairly definitive test that you haven’t exposed your objects to the masses: if you get a 200 response to your HEAD request, you can be sure your file is exposed.
# curl -I http://mybucket.s3.amazonaws.com/public.txt
HTTP/1.1 200 OK
x-amz-id-2: 74p+ISJgFK+OBr0X0hNT14+jTRMjerF6v8FSPM/EKvfweLFv8dqLa20MSvFPPHSxGf+ppDVdH5Y=
x-amz-request-id: 42387922120B2B55
Date: Mon, 11 Dec 2017 22:33:45 GMT
Last-Modified: Mon, 11 Dec 2017 18:46:06 GMT
ETag: "0b26e313ed4a7ca6904b0e9369e5b957"
Accept-Ranges: bytes
Content-Type: text/plain
Content-Length: 19
Server: AmazonS3

# curl -I http://mybucket.s3.amazonaws.com/private.txt
HTTP/1.1 403 Forbidden
x-amz-request-id: CF33CE1511EF1811
x-amz-id-2: y1mBWlGZllczQ77p3XuIm0a0G+nVc7pPTxq63mMAX5gLLExbddtS80arxMKyrWSi9czlKFdKBc0=
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Mon, 11 Dec 2017 22:34:26 GMT
Server: AmazonS3

Of course, these are unauthenticated requests, and your files may instead be exposed to all authenticated AWS users via the “Any AWS user” group. I should reiterate that this group includes any authenticated AWS user in the world, not just those users associated with your own AWS account. You could instead use a library like python-requests-aws to make authenticated HEAD requests to your S3 buckets and objects to identify whether they’re exposed. You can find python-requests-aws on PyPI (pip install requests-aws). Using this library, you could use code like the following to make an authenticated HEAD request to an object in one of your buckets:
import requests
from awsauth import S3Auth  # from the requests-aws package

url = 'http://mybucket.s3.amazonaws.com/public.txt'
# ACCESS_KEY and SECRET_KEY are the credentials of an AWS account
print(requests.head(url, auth=S3Auth(ACCESS_KEY, SECRET_KEY)).ok)
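Putting the anonymous and authenticated probes together, a small loop like the following can classify each object’s exposure. This is my own hypothetical sketch; the bucket and key names come from the examples above, and the credentials can belong to any AWS account, which is exactly the point.

# Hypothetical audit loop: probe each object anonymously and as an arbitrary
# authenticated AWS user; a 200 on either request means the object is exposed.
import requests
from awsauth import S3Auth  # pip install requests-aws

BUCKET = 'mybucket'                   # bucket name from the examples above
KEYS = ['public.txt', 'private.txt']  # object keys to audit
ACCESS_KEY = '...'                    # credentials for ANY AWS account
SECRET_KEY = '...'

for key in KEYS:
    url = 'http://%s.s3.amazonaws.com/%s' % (BUCKET, key)
    if requests.head(url).ok:
        print('%s: exposed to everyone' % key)
    elif requests.head(url, auth=S3Auth(ACCESS_KEY, SECRET_KEY)).ok:
        print('%s: exposed to any authenticated AWS user' % key)
    else:
        print('%s: not directly readable' % key)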
How Tripwire Can Help
Using the Tripwire Enterprise Cloud Management Assessor, we can automatically assess your AWS S3 buckets and objects to determine whether they are exposed for anonymous access and report on objects that have become newly exposed. It can help you with your Microsoft Azure Storage, too, but I’ll get into that in a different post. The Cloud Management Assessor will scan each of the buckets and objects you have stored in S3 to retrieve metadata, file contents, ACL, and policy information, as well as track all of that for change.
