UPDATE 2/24/17, 4:30 PM PST: Researcher Hanno Böck (@hanno) has confirmed that leaked CloudFlare data was not entirely purged from multiple search engine caches ahead of the public disclosure.


In April 2014, the security community was shocked by the revelation that a poorly implemented TLS extension in OpenSSL could allow attackers to easily disclose private memory contents from an astonishing number of HTTPS sites. This bug is officially CVE-2014-0160, but it is better known by its brand name “Heartbleed.”

This bug was cleverly named “Heartbleed” because of how it abused the TLS heartbeat extension to cause affected TLS stacks to ‘bleed’ contents of heap memory into responses. Fast-forward to February 2017, and we now have two new HTTPS bleeding disorders: CloudFlare’s Filippo Valsorda disclosed Ticketbleed in F5 load balancers, followed by Google’s Tavis Ormandy revealing a memory disclosure bug in CloudFlare’s reverse-proxy systems.

Having a little fun, Tavis noted in the Google issue tracker that he was resisting calling this flaw “cloudbleed” but the Internet had different plans.

All joking aside, though, this is an alarming trend with an objective impact on the threat models we consider when deciding how to handle data in general. Each of these issues represents a breakdown at a different point within the HTTPS ecosystem leading to the same end-result, disclosure of private data.

While TLSv1.2 has not been (publicly) broken, all three of these vulnerabilities revealed data that was protected by HTTPS. This includes passwords, session tokens, and personal data such as private messages, financial details, and information collected by IoT devices like Fitbit.

Although TLSv1.2 with a strong ciphersuite should still be considered secure, it is important to recognize that there will always be bugs and there is always the possibility of data transmitted over HTTPS being disclosed to a third-party such as an intelligence agency, a search engine, or a malicious hacker.

In my opinion, the best way to deal with this threat is to minimize the damage any disclosure can cause. This means different things in different contexts, and it is not entirely clear how to do this in all circumstances, but there are a few easy examples. Disclosure of passwords, for example, can be made largely irrelevant by employing multi-factor authentication schemes across the board.

The risk from a leaked session token can be minimized by making tokens short-lived and by adding additional checks to authorization schemes. A very simple example would be to force clients to re-authenticate when a session token is presented from an unexpected IP address or region. A more sophisticated solution would be for a service to use cryptographic nonce values in place of static session IDs.

This threat model also puts increased importance on alternative payment systems that use multi-factor authentication and tokenized transactions rather than directly entering payment card details (i.e. using services like PayPal, Amazon Payments, Google Wallet, etc.).

The other big factor is to minimize what data you decide to create. There is an old adage that you shouldn’t put anything in an email that you wouldn’t want printed on a billboard in Times Square. This is genuinely good advice in all situations, offline as well as online, but clearly there are some situations where it can hardly be avoided. The goal in these situations is to use designs that limit potential exposure points for the data through whatever means possible.

With both Ticketbleed and Cloudbleed, for example, data was not being exposed at either of the endpoints but rather by systems involved in relaying and distributing the data (an F5 load balancer and a CloudFlare reverse-proxy, respectively). Sensitive data encapsulated in another layer of encryption would likely have remained secure in both scenarios, affirming the value of true end-to-end encryption. Terminating TLS connections anywhere other than the client or the server introduces additional risks, as was recently documented and reported as part of the NDSS Symposium held during the last week of February 2017.
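The layering idea can be illustrated with a deliberately tiny sketch: the payload is encrypted before it ever enters the TLS connection, so a middlebox that terminates TLS sees only ciphertext. The XOR one-time pad below is a toy stand-in for a real authenticated cipher such as AES-GCM, and the key-sharing step (which must happen out of band, between the true endpoints only) is assumed rather than shown.

```python
import secrets


def e2e_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Toy inner-layer encryption: a random one-time pad XORed with
    the message. In practice an AEAD cipher (e.g. AES-GCM) would be
    used, with the key negotiated between the true endpoints so that
    a TLS-terminating proxy never holds it."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext


def e2e_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Reverse the XOR pad; only the endpoint holding the key can do this."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Under this design, a memory leak in a load balancer or reverse-proxy exposes only the inner ciphertext, not the plaintext it protects.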

While these strategies can help users, as well as service providers, limit exposure, there is a more immediate question about what to do now in response to Cloudbleed just as there was in the aftermath of Heartbleed. Back in 2014, there was a lot of conflicting advice as to whether users should change passwords, request new credit card numbers, or take various other steps in the days, weeks and months following the Heartbleed disclosure.

On one hand, many people were urging immediate password changes while others (including myself) argued that it would be a mistake to do so before confirming that patches had been deployed. After all, in that situation, it was up to each website operator to determine whether they were affected and roll out fixes as needed. Additionally, the simplicity of the exploit made it all but certain that attackers were extracting whatever data they could from affected systems.

Cloudbleed is quite different because it involves the systems of a single service provider, and there are assurances that everything was fixed before the public knew about the problem. What’s also different in this case is that any number of web crawlers could have potentially archived leaked data without realizing it, whereas Heartbleed required intentional attacks. In the case of Heartbleed, there was no public evidence that any exploitation had occurred anywhere before the disclosure; with Cloudbleed, the issue was effectively under active exploitation by web crawlers, as well as individual web browsers.

While CloudFlare and Google worked hard to purge any private data from search engine caches, the short window between when the fix was deployed and when the problem was announced makes it possible (and in fact there are already some unverified claims) that data was missed and could still reside in caches for a few more weeks or even longer. (This raises the question of whether it was prudent for Google to push so hard for disclosure as soon as possible.)

I think it is extremely unlikely that efforts to scrub this data from caches around the world would go completely unnoticed by all intelligence agencies and criminal organizations. While I don’t expect to see this data going up for sale on the black market, it would not surprise me at all if we learn in the future that a well-resourced group did in fact extract and organize data leaked from CloudFlare.

In light of this, now is probably a good time to cycle passwords on any sites you would want to keep private. Due to the ubiquity of CloudFlare, it is easier to just assume that every site you use was potentially affected in some way. (This is, of course, somewhat less important if you are already using multi-factor authentication.) It is also a good time to consider whether to discontinue use of services not offering multi-factor authentication.