It’s January again, and as usual, various media outlets are busy reporting on vulnerability statistics from the previous year. Once more, the CVE Details folks have worked up a lot of hype based on raw CVE counts, and the media has taken the bait with sensational headlines about Google’s Android being the most vulnerable product of 2016. For context, last year this title was given to OS X, and in 2014, it was Internet Explorer.
While these headlines may have made various factions of fanboys smile year after year, the lesser-told story is that these statistics are essentially meaningless and say nothing about the relative security of products. In this post, I will attempt to spell out some of the reasons why, but I definitely encourage anyone who is interested to check out Steve Christey and Brian Martin’s 2013 Black Hat talk, “Buying Into the Bias: Why Vulnerability Statistics Suck,” for a far more comprehensive explanation.
Even if we start with the assumption that vulnerability counting is a direct indicator of product security, it is important to recognize that there is not, nor will there ever be, a completely comprehensive index of security defects. For starters, CVE and other vulnerability databases can only catalog vulnerabilities they know about, and this knowledge comes from vendor advisories as well as direct submissions from researchers.
Unfortunately, many researchers do not publish all of their findings, and vendors commonly avoid public disclosure whenever possible. (This is referred to in the scientific community as publication bias.) Undiscovered vulnerabilities, and vulnerabilities identified with the intention of malicious hacking, are also excluded from any vulnerability count until such time as they are publicly disclosed. These factors tend to inflate CVE counts for vendors with more transparent disclosure policies and for software that is more actively scrutinized for weaknesses.
CVE, and specifically CVE Details, is a very poor source for generating statistics on the vulnerabilities that affect various products, due both to the way CVEs are created and to the limited resources of MITRE (responsible for the CVE project) and of CVE Details. One such problem is that components or libraries are often shared across products, but CVE Details does not generally associate their vulnerabilities with every product using the vulnerable code.
Taking WebKit as an example, it was very common to find that vulnerabilities discovered while auditing Chrome (back when it used WebKit) would also affect Apple’s Safari browser. Yet even when an entry links to an Apple advisory, indexes like CVE Details typically fail to associate it with Safari: one such CVE carries Apple’s advisory in its references, yet CVE Details does not bind it to any version of Safari. This would have inflated the Chrome count for 2013 and deflated that year’s count for Safari.
As another example, some software like OpenSSL or the Linux kernel is just so widely used that it is quite literally impossible to enumerate all of the products affected by vulnerabilities in these components. Instead these vulnerabilities tend to get associated with the product where the issue was initially discovered.
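To make the attribution problem concrete, here is a minimal sketch using entirely synthetic data (the CVE IDs and counts are hypothetical, not real records): if a database credits a shared-component vulnerability only to the product where it was first reported, the per-product tallies diverge sharply from the products actually affected.

```python
# Illustrative sketch with synthetic data: how crediting a shared-component
# CVE only to the product where it was first reported skews per-product counts.
from collections import Counter

# Hypothetical records: (cve_id, component, products actually affected),
# with the first-listed product being where the bug was initially reported.
cves = [
    ("CVE-A", "WebKit", ["Chrome", "Safari"]),
    ("CVE-B", "WebKit", ["Chrome", "Safari"]),
    ("CVE-C", "Blink",  ["Chrome"]),
    ("CVE-D", "Safari", ["Safari"]),
]

# Naive tally: each CVE counted once, against the first-reporting product.
naive = Counter(products[0] for _, _, products in cves)

# Accurate tally: each CVE counted against every product it affects.
accurate = Counter(p for _, _, products in cves for p in products)

print(naive)     # Chrome's count is inflated, Safari's deflated
print(accurate)  # both browsers were equally affected in this toy data
```

In this toy data the naive method reports three issues for Chrome and one for Safari, even though both were affected by three; the bias is purely an artifact of where the bugs were first reported.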
Another problem with CVE is that, especially in recent years, MITRE has become inundated with submissions, leading to excessive delays and the eventual shutdown of the cve-assign email address in 2016. CVE entries are also not automatically published; in fact, an entry will not be published until certain requirements are met, such as an associated public advisory URL or blog post covering the issue.
In my personal experience working with MITRE, I found huge discrepancies in response times: some submissions would be processed in a matter of hours, while others would take weeks or even months, if MITRE responded at all. There also didn’t seem to be any rhyme or reason, such as bug criticality or software popularity, to explain the differences. I personally have quite a few CVE submissions going back to late 2015 that were never processed due to the backlog. Hundreds of my CVE assignments were also never published by MITRE because the vendors did not publicly acknowledge the issues, and I have neither the time nor the inclination to write a blog post detailing each and every one.
All of these factors contribute to sampling bias against vendors that deal with security issues in a responsible way, and they result in hugely inflated counts for open-source projects and for products that offer bug bounties. Open source makes it considerably easier to audit for security defects using static analysis and instrumented fuzzing tools like American Fuzzy Lop. (This ties into selection bias, as researchers may prefer analyzing open-source software or products where there is a monetary incentive.)
Open-source software is also naturally more transparent about security flaws, since code changes are generally available for inspection and researchers have less concern about legal retribution. The bug bounty factor also plays a tremendous role, not only because it motivates researchers to scrutinize the product and find more bugs, but also because, as stated earlier, vulnerabilities in common components tend to get associated only with the product where the bug was initially reported.
These conditions combine to create the illusion that software like Android, which is heavily open source and has a bounty program, is more vulnerable than other software. Additionally, due to the open nature of Android, handset manufacturers make many customizations to support specific hardware or provide brand-specific features. Vulnerabilities in these components, which may or may not ever be included in the official Android Open Source Project, are still included in the Android counts from CVE Details.
For example, a series of vulnerabilities found in Samsung-specific code for their Galaxy line of smartphones are all counted as Android issues, even though they did not affect any Android devices besides the Galaxy phones. On the other hand, some Linux kernel vulnerabilities that could be exploited from Android (such as CVE-2016-7917) are associated in CVE Details with the Linux kernel but not with Android or any specific Linux distribution.
I could certainly continue arguing the case for how absurd it is to try to measure product vulnerability based on raw CVE counts, but I think the point has already been made. There is another, far more pressing question to consider: given two reasonably complex software packages, is there actually a way to determine that one is more or less secure than the other? Is there a way to empirically and conclusively measure how vulnerable a system is? This is in fact an incredibly complicated question, and one for which I do not claim to have an answer. While infosec is commonly considered a branch of computer science, there is a strong argument that science is largely absent from the practice of infosec.