The Internet as we know it is only possible thanks to cryptography, and specifically TLS (formerly known as SSL). Without this crucial technology providing a means for private online communication, e-commerce simply would not exist, and the Internet would likely be little more than a worldwide party line for sharing bad jokes.
Despite its critical nature, there is a long history of ignoring weaknesses in the SSL/TLS stack, and even of deliberately introducing them. In the case of export ciphers, government regulation made FREAK and various downgrade attacks possible, but there is a long list of examples in which research was seemingly ignored until practical attacks surfaced. Last month at AusCERT, researcher Hanno Böck summed this up quite nicely with a single slide in his presentation:
The above slide is a list of published research describing weaknesses identified in SSL/TLS, along with attacks published years later that could have been avoided by following the mitigations described in that research. As these examples show, problems were known in the community for an average of around 12 years, yet mitigations were not implemented during that time. Several of these attacks have triggered panic and fire-drill-style SSL upgrades, which some in the industry have claimed led to further insecurity due to misconfigurations by rushed admins.
This lag in remediating weaknesses leaves those who rely on cryptography exposed to attack. In the case of the deprecated MD5 hashing algorithm, research published between 1993 and 1996 demonstrated the possibility of collisions in the MD5 compression function, leading cryptographers to consider MD5 effectively broken with respect to collision resistance.
About a decade later, in 2004, university researchers demonstrated the ability to produce two inputs with identical hashes in under an hour using an IBM p690 cluster. Cryptographers subsequently refined those attacks to produce X.509 certificates with colliding hashes, and the process even became possible on a simple laptop.
CERT/CC sounded the final death knell with VU#836068 titled simply “MD5 vulnerable to collision attacks.” This publication stated quite directly what the crypto community had known for many years: MD5 is cryptographically broken and should not be used for security applications.
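That advice translates directly into code. A minimal sketch of the kind of policy check implementations could have adopted: reject any certificate signature algorithm built on a broken hash. The function and constant names here are illustrative, not from any particular library.

```python
# Hashes considered cryptographically broken for collision resistance
# (per CERT VU#836068, MD5 should not be used for security applications).
DEPRECATED_HASHES = {"md2", "md5"}

def signature_hash_acceptable(sig_alg: str) -> bool:
    """Return False for signature algorithms built on broken hashes,
    e.g. the 'md5WithRSAEncryption' OID name seen in X.509 certificates."""
    return not any(h in sig_alg.lower() for h in DEPRECATED_HASHES)

assert not signature_hash_acceptable("md5WithRSAEncryption")
assert signature_hash_acceptable("sha256WithRSAEncryption")
```

A real validator would inspect the parsed certificate rather than a string, but the policy decision is the same: a known-broken hash disqualifies the signature outright.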
Several years later, the world learned of a sophisticated malware known as Flame. Despite years of warnings, real-world systems still hadn’t moved away entirely from MD5. The authors of Flame used that to their advantage by creating a counterfeit certificate, which allowed their malware to be trusted by target systems.
In addition to using algorithms with well-known flaws, there is a history of introducing weaknesses in the name of interoperability with buggy software. For example, early on in the development of OpenSSL, the developers became aware that Netscape Enterprise v2.01 had a bug related to session resumption. Specifically, an SSLv2/v3 compatibility mode connection could subsequently be resumed with a different SSLv3 specific ciphersuite.
The workaround option ‘SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG’ was then added to the code in order to avoid compatibility issues. This option (which is commonly enabled as part of ‘SSL_OP_ALL’) was later revealed to be a weakness, because a malicious client could manipulate the session cache to downgrade the cipher used on a resumed connection. This vulnerability, combined with servers offering export-grade ciphers, meant that many users were left exposed to attack for over a decade simply because of a decision to work with broken implementations.
Many attacks have actually been enabled thanks to conscious decisions to maintain compatibility with broken SSL/TLS stacks. Unfortunately, many implementations suffer from what we call ‘version intolerance.’ When a client initiates a TLS connection with a server, the first message sent (Client Hello) advertises the maximum version of the TLS protocol supported by that client. SSLv3, for example, is denoted as 0x0300, while TLSv1.0 is 0x0301 up to the latest standard TLSv1.2, which is 0x0303.
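Those version numbers map directly to the two bytes a client puts on the wire in its Client Hello. A small sketch of the encoding (the constants come from the TLS specifications; the helper name is our own):

```python
import struct

# TLS protocol versions as (major, minor) bytes on the wire.
TLS_VERSIONS = {
    "SSLv3":   (3, 0),  # 0x0300
    "TLSv1.0": (3, 1),  # 0x0301
    "TLSv1.1": (3, 2),  # 0x0302
    "TLSv1.2": (3, 3),  # 0x0303
}

def version_bytes(name: str) -> bytes:
    """Encode a protocol version as the two-byte field sent in a Client Hello."""
    major, minor = TLS_VERSIONS[name]
    return struct.pack("!BB", major, minor)

assert version_bytes("TLSv1.2") == b"\x03\x03"
```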
It is expected that the server will respond with a Server Hello indicating the highest mutually supported protocol version. If the server perceives a problem with the handshake (e.g. unsupported extensions or no matching ciphers), it is supposed to send a TLS alert to the client. However, when researchers scanned TLS services exposed on the Internet, they found that out of 11.2 million hosts that responded positively to older protocol revisions, 17% sent improper responses to a TLSv1.2 Client Hello.
Breaking this down further revealed a whopping 21% of servers with trusted certificates sent invalid responses indicating that this behavior is not limited to misconfigured or unmaintained services. (This research was published in the ACSAC ’12 proceedings under the title, “One Year of SSL Internet Measurement”.) This version intolerance led all of the major browsers to implement what has been labeled the “fallback dance.”
In an effort to avoid upsetting users visiting broken sites, browsers have been designed to make successive attempts to establish connections with lower and lower security parameters before giving up. In a nutshell, this means that if a server doesn’t respond to a TLSv1.2 hello, the browser will then try again with TLSv1.1 and then TLSv1.0 and so on.
For an attacker with man-in-the-middle capabilities (precisely the threat model TLS is designed to defend against), all that is required is to drop negotiation attempts for newer protocols until the browser tries to negotiate a session the attacker can break. Browser vendors have responded by prohibiting this dangerous fallback (signaled via the TLS_FALLBACK_SCSV cipher suite), but it is likely that we will once again see problems crop up as the TLSv1.3 standard is finalized.
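The fallback dance and its abuse can be simulated in a few lines. This is a hypothetical sketch, with `attempt_handshake` standing in for a real network handshake: the client walks down the version list, and a man-in-the-middle who silently drops modern handshakes steers it to the weakest protocol.

```python
# Browser fallback order: newest protocol first, then progressively older.
FALLBACK_ORDER = ["TLSv1.2", "TLSv1.1", "TLSv1.0", "SSLv3"]

def negotiate(attempt_handshake):
    """Try successively older protocol versions until one succeeds,
    mimicking the browser 'fallback dance'."""
    for version in FALLBACK_ORDER:
        if attempt_handshake(version):
            return version
    return None  # all attempts failed

# An honest server that supports everything negotiates the newest version:
assert negotiate(lambda v: True) == "TLSv1.2"

# A man-in-the-middle who drops every handshake newer than TLSv1.0
# forces the client down to a protocol the attacker may be able to break:
mitm = lambda v: v == "TLSv1.0"
assert negotiate(mitm) == "TLSv1.0"
```

The fix referenced above works by having the fallback connection carry a marker (a signaling cipher suite) so a server that really does support a newer version can detect the artificial downgrade and abort.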
Moving forward, enterprises, as well as consumers, need to apply pressure on vendors to avoid complacency regarding their TLS stacks. In the case of TLS, there’s a big difference between working and working properly. Unfortunately, these workarounds and failures to implement mitigations do not exist in a vacuum, and they are often a direct result of friction against change. My hope, however, is that by discussing these issues, more people will become advocates of using well-tested and standardized TLS implementations.