2024 CWE Top 25 Methodology

The “2024 CWE Top 25 Most Dangerous Software Weaknesses” list was calculated by analyzing public vulnerability information in Common Vulnerabilities and Exposures (CVE®) Records for CWE root cause mappings.

This year’s dataset included 31,770 CVE Records for vulnerabilities published between June 1, 2023 and June 1, 2024. Data was initially pulled on July 30, 2024, to share with CNA community partners for review. Data was pulled again on November 4, 2024, to ensure the most up-to-date CVE Record information was used in the Top 25 list calculations.

Dataset Collection/Scoping

The initial Top 25 dataset comprised all CVE-2023-* and CVE-2024-* Records published between June 1, 2023, and June 1, 2024. The CVE Records were analyzed via automated scanning to identify those that would benefit from re-mapping analysis. These included CVE Records with CWE mappings that:

  1. Were too high-level
    • A common mistake made when mapping a vulnerability to a CWE is choosing a high-level entry that is not actionable or precise enough. Mapping to CWEs at the Base and Variant levels ensures adequate specificity, actionability, and root cause information for a vulnerability.

  2. Differed from a mapping found using an internal keyword matcher
    • Over years of Top 25 analysis, the CWE Team has identified a list of keywords found in CVE descriptions that commonly indicate a specific root cause weakness. Although keyword matching can be a flawed way to identify root cause weaknesses, it is a good starting point for finding mappings that could be incorrect. If the keyword matcher found a different mapping than the one present in the record, the CVE Record was kept in the dataset for re-mapping analysis.

Ultimately, the dataset identified for re-mapping analysis — the “scoped” dataset — contained 9,900 CVE Records (31% of all records in the dataset) originally published by 247 different CNAs.
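The keyword-based scoping check described above can be sketched as follows. This is a minimal illustration only: the keyword list, candidate mappings, and function names here are hypothetical, and the actual matcher used by the CWE Team is not published.

```python
# Hypothetical keyword -> candidate CWE table; the real list is internal
# to the CWE Team and far larger.
KEYWORD_MAP = {
    "sql injection": "CWE-89",
    "cross-site scripting": "CWE-79",
    "out-of-bounds write": "CWE-787",
}

def keyword_match(description):
    """Return candidate CWE IDs whose keywords appear in a CVE description."""
    text = description.lower()
    return {cwe for kw, cwe in KEYWORD_MAP.items() if kw in text}

def needs_remap(description, existing_mapping):
    """Flag a record for re-mapping review when the keyword matcher
    suggests a CWE different from the existing mapping."""
    candidates = keyword_match(description)
    return bool(candidates) and existing_mapping not in candidates
```

A record mapped to CWE-79 whose description mentions "SQL injection" would be flagged for review, while a record already mapped to CWE-89 would not.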

CNA Expert Collaboration

The “scoped” dataset was divided into batches of CVEs based on the CVE Numbering Authority (CNA) that published them, typically with one batch for CVE Records mapped to “high-level” CWEs and a separate batch for CVE Records with differences flagged by the internal keyword matcher. On August 20, 2024, the Top 25 team emailed each CNA asking it to review its batch(es) and determine whether a more specific or better-suited mapping was appropriate. These emails included the list of keyword matches and potential mappings for consideration, as well as the mappings identified as too high-level. Engaging CNAs in this way leverages their expert knowledge of the products and their access to information that might not be present in the CVE Record. In general, CNAs are better positioned than third-party analysts to provide accurate CWE mapping determinations, as they are the authority for vulnerability information within their CNA scope and are closest to the products themselves.

On November 4, 2024, the team finalized the CWE mapping corrections provided by CNAs. Of the 9,900 CVE Records sent to 247 different CNAs for analysis, the team received feedback on 2,717 CVE Records (27% of those requested) from 148 CNAs (60% of those contacted), either correcting or confirming the existing CWE mappings.

For the CNAs that did not provide mapping feedback, the reasons varied. In many cases, the team received no response at all. When it did, some CNAs said that they could not participate due to limited time to perform the review, especially when the team requested mappings for batches containing hundreds of CVEs.

Ultimately, 7,183 CVE Records of the “scoped” dataset were not manually reviewed. For these CVE Records, the team retained the existing CWE mapping present in the CVE Record data.

CWE Mapping Normalization

For its root cause mapping efforts, the NVD typically maps CVE Records to View-1003: Weaknesses for Simplified Mapping of Published Vulnerabilities, a simplified collection of 130 weakness types. If a CVE Record cannot be mapped to an entry in View-1003, the NVD typically marks it as “NVD-CWE-Other” or “NVD-CWE-noinfo”.

Before running calculations for the Top 25 list, the entire dataset of mappings was “normalized” to View-1003. This means that any CWE mapping not present in View-1003 was changed to the next closest ancestor that is present in that view. For any mapping that did not have an ancestor in View-1003, that mapping was removed from consideration for final calculations.
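The ancestor walk described above can be sketched as follows, assuming a toy parent table and a toy subset of View-1003; the real relationships come from the CWE corpus, and the names here are illustrative only.

```python
# Toy subset of the 130 entries in View-1003 (illustrative, not complete).
VIEW_1003 = {"CWE-74", "CWE-20"}

# Toy child -> parent relationships from the CWE hierarchy.
PARENT = {"CWE-89": "CWE-943", "CWE-943": "CWE-74"}

def normalize(cwe):
    """Walk up the ancestor chain until an entry present in View-1003
    is found; return None if no ancestor is in the view, in which case
    the mapping is dropped from the final calculations."""
    while cwe is not None:
        if cwe in VIEW_1003:
            return cwe
        cwe = PARENT.get(cwe)
    return None
```

With this toy data, CWE-89 (SQL Injection) normalizes to its ancestor CWE-74 (Injection), a mapping already in View-1003 passes through unchanged, and a CWE with no ancestor in the view is discarded.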

Scoring

After the collection, scoping, and re-mapping process, a scoring formula was used to calculate a rank order of weaknesses. The formula combines the frequency (the number of times a CWE is the root cause of a vulnerability) with the average severity of those vulnerabilities when they are exploited (as measured by the Common Vulnerability Scoring System (CVSS) v3.0 or v3.1 base score). Both the frequency and the severity are normalized relative to the minimum and maximum values observed in the dataset; these metrics appear as “count” and “average_CVSS”, respectively, in the following formulas. Due to differences in the way CVSS base scores are calculated across versions, only CVE Records that contain CVSS version 3.0 or 3.1 data were considered in the calculations.

Frequency

The scoring formula first counts the number of times each CWE was mapped to a CVE Record within the NVD, then normalizes that count against the minimum and maximum counts observed in the dataset.

Freq = {count(CWE_X' ∈ NVD) for each CWE_X' in NVD}

Fr(CWE_X) = (count(CWE_X ∈ NVD) - min(Freq)) / (max(Freq) - min(Freq))
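The frequency normalization above can be illustrated with a short sketch; the counts and function name here are hypothetical.

```python
def fr_scores(counts):
    """Min-max normalize per-CWE frequencies, following
    Fr(CWE_X) = (count(CWE_X) - min(Freq)) / (max(Freq) - min(Freq))."""
    lo, hi = min(counts.values()), max(counts.values())
    return {cwe: (c - lo) / (hi - lo) for cwe, c in counts.items()}
```

For example, with hypothetical counts of 3,100, 1,600, and 100, the most frequent CWE scores 1.0, the least frequent scores 0.0, and the middle one scores 0.5.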

Severity

The scoring formula calculates the average CVSS score of all CVE Records that map to the CWE. The equation below is used to calculate this value.

Sv(CWE_X) = (average_CVSS(CWE_X) - min(CVSS)) / (max(CVSS) - min(CVSS))

Danger Score

The level of danger presented by a particular CWE was then determined by multiplying the frequency score by the severity score and scaling the result to a 0–100 range.

Score(CWE_X) = Fr(CWE_X) * Sv(CWE_X) * 100
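Putting the pieces together, the full scoring step can be sketched with hypothetical per-CWE counts and average CVSS scores; the function names and data are illustrative only.

```python
def minmax(values):
    """Normalize a {CWE: value} dict to [0, 1] relative to the dataset
    min and max (assumes at least two distinct values)."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

def danger_scores(counts, avg_cvss):
    """Score(CWE_X) = Fr(CWE_X) * Sv(CWE_X) * 100 for each CWE."""
    fr = minmax(counts)     # frequency component
    sv = minmax(avg_cvss)   # severity component
    return {cwe: fr[cwe] * sv[cwe] * 100 for cwe in counts}
```

Because both terms are normalized before being multiplied, a weakness must rank high on both frequency and severity to earn a high danger score; a rare-but-severe or common-but-mild weakness scores low on one factor and therefore overall.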

With this scoring approach:

  • Weaknesses that were rarely discovered will not receive a high Frequency score, regardless of the typical consequence associated with any exploitation. If developers are not making a particular mistake, then the weakness should not be highlighted in the CWE Top 25.
  • Weaknesses whose exploitation was of low impact will not receive a high Severity score, regardless of how common they were in the dataset. If the weakness typically results in low-impact exploited vulnerabilities, then the weakness should not be highlighted in the CWE Top 25.
  • Weaknesses that were both common and caused significant harm will receive the highest scores.

Acknowledgments

The 2024 CWE Top 25 Team includes (in alphabetical order): Alec Summers, Connor Mullaly, and Steve Christey Coley.

Very special thanks to the 148 CNAs that contributed their time and expertise to this year’s analysis.

Page Last Updated: November 18, 2024