One of my tasks here at Tripwire is to capture, understand and track security issues in our software products. Generally, I think of this as a kind of “technical debt” called “security debt.” Like any kind of debt, the first step to managing and reducing it is identifying it. In my mind, this is something that is essential for a company producing security products. Our products should only enhance your security profile, not add to your headaches.
Recently, I was tasked with taking over one of our ongoing efforts – identifying security issues in our third-party libraries (OWASP Top 10 2013 A9 – Using Components with Known Vulnerabilities). Over my 11+ years at Tripwire, this process has had its ups and downs, primarily due to the difficulty in gathering the info. Once we know what the issues are, they can be prioritized and dealt with, but getting to the point of knowing what is going on can be a challenge.
Do you monitor each and every library’s bug tracking system for defects that might be security related? Subscribe to every mailing list you can? Query the NVD and/or other vulnerability databases? Pay a third-party to provide you the data?
I think we have tried almost everything. Years ago, we settled on using the NVD as the most practical solution for our organization, augmented by monitoring critical infrastructure bits (think OpenSSL) with the other techniques.
When the product was new and much smaller, we were able to understand our dependencies and monitor libraries manually. Over time, the product and build system grew in complexity, making the identification of dependencies harder. The process was always time-consuming and cumbersome. Because of the fluidity of a development project, it always happened close to the end of a release when it was hardest to adapt to any found issues. And all of that was just the gathering of data – the analysis still had to be done and that turned out to be the easiest part of the old process.
Being a veteran programmer (i.e. lazy), I did it once and decided that our process was too much work. So, with a lot of research (it certainly took more than a single web search), I put together a list of available options, most of which were commercial solutions.
What caught my eye immediately was the OWASP project Dependency-Check. After reading the documentation, I was sold on giving it a try first. For one thing, I wouldn’t have to talk to a sales guy! Secondly, it was free. Thirdly, it was a mature OWASP project with an active development community. Actually, the third item was most important to me as a professional; the others are more about my personality.
What I learned from trying it was that it would be an excellent tool for quickly gathering a list of dependencies with CVEs and reporting on them. When run against our previous release, it identified all but two of the issues we already knew about. We had only caught those two ourselves because we knew of a dependency on a shared library that was not shipped with the product. This told me not to rely 100% on the tool. I could now spend the majority of my time analyzing and managing the issues instead of finding them. FTW.
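For anyone curious what such a scan looks like, a minimal command-line invocation is sketched below. The project name and paths are placeholders, not our actual layout; the flags shown (--project, --scan, --format, --out) are the tool's standard options.

```shell
# Scan a directory of third-party jars/dlls and write an HTML report.
# "MyProduct" and the paths are illustrative placeholders.
dependency-check.sh \
    --project "MyProduct" \
    --scan ./third-party/libs \
    --format HTML \
    --out ./reports
```

The generated report lists each identified dependency alongside any CVEs matched against the NVD, which is the starting point for the analysis work described above.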
Feeling confident that this would be a suitable replacement for homegrown scripts, manual processes and inherited knowledge, I began to explore some of the additional features of the toolset. I discovered that there is a Jenkins plug-in that combines the core tool with some nice reporting and good visibility into the data. After setting up the job and tuning the system to suppress false positives, the most important benefit of all was realized – a daily job that could monitor and report on CVEs in our third-party libraries.
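The false-positive tuning mentioned above is done with a suppression file that the tool reads at scan time. The fragment below is a hedged sketch of that format; the notes, GAV pattern and CVE number are placeholders, not real findings from our product.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<suppressions xmlns="https://jeremylong.github.io/DependencyCheck/dependency-suppression.1.1.xsd">
    <!-- Example only: suppress a CVE that was matched to the wrong product.
         The GAV pattern and CVE below are placeholders. -->
    <suppress>
        <notes>False positive: the CPE match is for a different product.</notes>
        <gav regex="true">^com\.example:example-lib:.*$</gav>
        <cve>CVE-2016-0000</cve>
    </suppress>
</suppressions>
```

Keeping the reason for each suppression in the notes element makes it much easier to re-evaluate them as the product and the NVD data change.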
Now, we have this tool in our suite of continuous integration jobs. We can react to new issues long before a construction-complete date, and I have more time to analyze each issue as it is discovered rather than trying to do it at the last minute. I am also less worried about having to hold up a release because a critical issue was discovered late in the process.
Furthermore, I was able to integrate the tool into our build system and give development teams the ability to run it as a build target on demand. Although the reporting is not as polished, teams will be able to identify CVEs on their own branches and make more informed decisions when adding or upgrading third-party libraries. In addition to the command-line interface and Jenkins plug-in, there are plug-ins for the Ant, Maven and Gradle build tools.
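As one illustration of the build-tool integration, the Maven plug-in can be wired into a project's pom.xml so the scan runs as part of the build. This is a sketch, not our actual configuration; the CVSS threshold and suppression file name are assumptions for the example.

```xml
<!-- pom.xml fragment: run Dependency-Check during the build.
     failBuildOnCVSS and the suppression file name are illustrative. -->
<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- Fail the build when a finding scores CVSS 7 or higher. -->
        <failBuildOnCVSS>7</failBuildOnCVSS>
        <suppressionFile>suppression.xml</suppressionFile>
    </configuration>
</plugin>
```

Failing the build on a severity threshold is what turns the scan from a report into an enforced gate, which is useful on development branches where no one is watching a dashboard.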
However, there are some caveats to using the Dependency-Check tool. It primarily supports Java and .NET, with a number of experimental analyzers available for other languages. The difference between the supported analyzers and the experimental ones is that the development team’s confidence in the experimental analyzers isn’t yet up to the fairly high standards they have set for themselves.
You do need to invest the time to understand how it works, examine the data it reports and manage both false positives and false negatives. It relies on the NVD as its source of data, so you should be aware that not everything ends up recorded as a CVE. I strongly recommend that you consider augmenting this approach with another form of monitoring critical security libraries.
I think Dependency-Check is a great addition to our process for identifying and managing risk introduced by known vulnerabilities in third-party libraries. It has allowed me to establish daily monitoring of a product for CVEs to get early warning as they are identified and more time to respond to any new issues.
It also gave me back some of my time so that I could focus on the analysis and not the data-gathering. I would love to spend some of that time answering any questions or hearing about your experiences managing risk introduced by third-party libraries, so please comment or email me to carry on the conversation.