Overview
Below is an overview of the 2020 CWE Top 25 list.

Limitations
The CWE Top 25 is not perfect; it remains subject to several limitations of the data-driven approach.

Data Bias
- CWE sources its data from NVD, which does not cover all vulnerabilities. Numerous vulnerabilities have never been assigned a CVE ID and are therefore excluded from the approach, for example, a vulnerability found and fixed before being publicly disclosed.
- The use of CVSS scores from NVD is flawed for several reasons. NVD analysts have historically held varied views on scoring, which has led to different scores for similar vulnerabilities. Additionally, CVSS scores the vulnerability itself, not the projected severity of its exploitation, as the CWE Top 25 methodology implies. Finally, vendors often release their own CVSS scoring that, given their intimate knowledge of the product, is more accurate than the NVD analysts' score.
- Vendors who report CVE entries to NVD sometimes omit important details about the vulnerability itself, describing only its impact. This leaves insufficient information for determining the underlying weakness.
- The dataset used by NVD shows inherent bias based on the set of vendors that report vulnerabilities and the programming languages those vendors use. For example, if one of the larger vendors contributing to NVD primarily used C, weaknesses common in C programs would be more likely to appear.
Metric Bias
- CWE draws attention to an important bias in the metric itself: it "indirectly prioritizes implementation flaws over design flaws, due to their prevalence within individual software packages." For example, a web application may have many distinct code-injection vulnerabilities due to its large attack surface, but only one instance of an insecure configuration for input validation.
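To see why the metric behaves this way, recall that the published 2020 methodology scores each CWE as its min-max-normalized NVD frequency multiplied by its normalized average CVSS, times 100. A minimal sketch of that formula follows; the CWE IDs, CVE counts, and average CVSS values below are illustrative stand-ins, not real 2020 data:

```python
def normalize(value, lo, hi):
    """Min-max normalize into [0, 1]; guard against a flat range."""
    return (value - lo) / (hi - lo) if hi != lo else 0.0

def top25_scores(cwe_stats):
    """cwe_stats: {cwe_id: (cve_count, avg_cvss)} -> {cwe_id: score}.

    Score = normalized frequency * normalized average CVSS * 100,
    mirroring the CWE Top 25 scoring formula.
    """
    counts = [c for c, _ in cwe_stats.values()]
    cvss = [s for _, s in cwe_stats.values()]
    scores = {}
    for cwe_id, (count, avg) in cwe_stats.items():
        fr = normalize(count, min(counts), max(counts))
        sv = normalize(avg, min(cvss), max(cvss))
        scores[cwe_id] = round(fr * sv * 100, 2)
    return scores

# Illustrative numbers: an implementation flaw like XSS (CWE-79) shows up
# in thousands of individual CVEs, while a design flaw (here CWE-1173,
# improper use of validation frameworks) may surface only a handful of
# times, so its frequency term -- and thus its score -- collapses toward
# zero even at a respectable average severity.
stats = {
    "CWE-79":   (3000, 6.9),   # very prevalent, moderate severity
    "CWE-787":  (1500, 8.2),   # prevalent, high severity
    "CWE-1173": (20, 7.5),     # rare design flaw, decent severity
    "CWE-611":  (300, 5.5),    # low-severity anchor
}
print(top25_scores(stats))
```

With these inputs the rare-but-serious CWE-1173 scores 0.0 while the prevalent implementation flaws dominate, which is exactly the prioritization bias CWE describes.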