Supplemental Details - 2022 CWE Top 25




This page provides supplemental details pertaining to the 2022 CWE Top 25 Most Dangerous Software Weaknesses list.

Detailed Methodology

The NVD obtains vulnerability data from CVE and then supplements it with additional analysis and information including a mapping to one or more weaknesses, and a CVSS score, which is a numerical score representing the potential severity of a vulnerability based upon a standardized set of characteristics about the vulnerability. NVD also includes CWE mappings from the CVE Numbering Authorities (CNAs) for each CVE. NVD provides this information in a digestible format that is used for the data-driven approach in creating the 2022 CWE Top 25. This approach provides an objective look at what vulnerabilities are currently seen in the real world, creates a foundation of analytical rigor built on publicly reported vulnerabilities instead of subjective surveys and opinions, and makes the process easily repeatable.

The 2022 CWE Top 25 leverages NVD data with CVE IDs from the years 2020 and 2021, downloaded as several snapshots. The snapshot dates are listed below. Multiple snapshots were taken to stay as consistent as possible with current NVD data while allowing sufficient time to process such a large volume of mappings.

  • 1st snapshot: December 7, 2021
  • 2nd snapshot: May 1, 2022
  • 3rd snapshot: May 31, 2022
  • 4th snapshot: June 13, 2022

The final June 13 snapshot of raw data consists of 37,899 CVE Records without a REJECTED label.

The Top 25 Team analyzes a subset of CVE Records and performs remappings that either change or agree with the existing CWE mappings found within NVD, using the lowest-level CWEs available. These remappings replace the original mappings as recorded in NVD. A "normalization" process converts the team's selected CWE to the lowest-level CWE available in View-1003. For example, CWE-122: Heap-Based Buffer Overflow is not in View-1003, so it is "normalized" to its parent base-level weakness, CWE-787: Out-of-Bounds Write, which is in View-1003. Note that the CWE Top 25 Team and NVD Team coordinate with each other to ensure that mappings are appropriately updated in NVD, but that is a separate process.
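As a rough illustration (not the team's actual tooling), this normalization can be modeled as walking up the CWE parent relationships until a member of View-1003 is reached. In the sketch below, the PARENT map and VIEW_1003 set are small hypothetical fragments standing in for the real CWE data.

```python
# Sketch of View-1003 "normalization": walk up the CWE hierarchy until
# reaching a CWE that is a member of View-1003. PARENT and VIEW_1003 are
# illustrative fragments, not the full CWE corpus.

PARENT = {
    "CWE-122": "CWE-787",  # Heap-Based Buffer Overflow -> Out-of-Bounds Write
    "CWE-787": "CWE-119",  # Out-of-Bounds Write -> its parent class
}
VIEW_1003 = {"CWE-787", "CWE-79", "CWE-89"}

def normalize(cwe_id: str) -> str | None:
    """Return the lowest-level ancestor (or the CWE itself) in View-1003."""
    current = cwe_id
    while current is not None:
        if current in VIEW_1003:
            return current
        current = PARENT.get(current)
    return None  # no View-1003 ancestor found

print(normalize("CWE-122"))  # -> "CWE-787"
```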

CVEs are removed from the Top 25 data set if they do not have a CVSS score, typically indicating that the CVEs have not been analyzed yet, or they were mistakenly assigned for issues that were not vulnerabilities. Similarly, any CVE whose description is labeled "** REJECT **" is removed. CVEs that are only labeled with "NVD-CWE-noinfo" or "CWE-Other" are also removed. Any CVE without a mapping to any CWE is removed.
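These exclusion rules can be summarized in a short filter. A minimal sketch follows, where the record field names (cvss, description, cwes) are assumptions for illustration rather than the actual NVD JSON schema.

```python
# Sketch of the data-set exclusion rules described above.
EXCLUDED_MAPPINGS = {"NVD-CWE-noinfo", "CWE-Other"}

def keep_record(record: dict) -> bool:
    """Return True if a CVE record should remain in the Top 25 data set."""
    if record.get("cvss") is None:  # no CVSS score: likely not yet analyzed
        return False
    if "** REJECT **" in record.get("description", ""):
        return False
    cwes = set(record.get("cwes", []))
    if not cwes:  # no CWE mapping at all
        return False
    if cwes <= EXCLUDED_MAPPINGS:  # only uninformative mappings
        return False
    return True

print(keep_record({"cvss": 7.5, "description": "...", "cwes": ["CWE-79"]}))     # True
print(keep_record({"cvss": 7.5, "description": "...", "cwes": ["CWE-Other"]}))  # False
```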

A scoring formula is used to calculate a ranked order of weaknesses that combines the frequency that a CWE is the root cause of a vulnerability with the projected severity of its exploitation. In both cases, the frequency and severity are normalized relative to the minimum and maximum values seen.

To determine a CWE's frequency, the scoring formula calculates the number of times a CWE is mapped to a CVE within the NVD. Only those CVEs that have an associated weakness are used in this calculation, since using the entire set of CVEs within the NVD would result in lower frequency rates and reduced discrimination amongst the different weakness types.

Freq = {count(CWE_X' ∈ NVD) for each CWE_X' in NVD}

Fr(CWE_X) = (count(CWE_X ∈ NVD) - min(Freq)) / (max(Freq) - min(Freq))

The other component in the scoring formula is a weakness' severity, which is represented by the average CVSS score of all CVEs that map to the particular CWE. The equation below is used to calculate this value.

Sv(CWE_X) = (average_CVSS_for_CWE_X - min(CVSS)) / (max(CVSS) - min(CVSS))

The level of danger presented by a particular CWE is then determined by multiplying the severity score by the frequency score.

Score(CWE_X) = Fr(CWE_X) * Sv(CWE_X) * 100
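To make the formulas concrete, the sketch below applies them to made-up frequency counts and average CVSS values; the numbers are hypothetical, not drawn from the 2022 data.

```python
# Worked example of the Top 25 scoring formula on hypothetical inputs.
counts = {"CWE-787": 4000, "CWE-79": 3500, "CWE-416": 500}   # CVE counts (made up)
avg_cvss = {"CWE-787": 8.2, "CWE-79": 5.7, "CWE-416": 7.9}   # avg CVSS (made up)

fmin, fmax = min(counts.values()), max(counts.values())
smin, smax = min(avg_cvss.values()), max(avg_cvss.values())

def score(cwe: str) -> float:
    fr = (counts[cwe] - fmin) / (fmax - fmin)    # normalized frequency Fr
    sv = (avg_cvss[cwe] - smin) / (smax - smin)  # normalized severity Sv
    return fr * sv * 100

for cwe in sorted(counts, key=score, reverse=True):
    print(f"{cwe}: {score(cwe):.2f}")
```

Note how min-max normalization forces the least frequent and least severe entries to a component score of zero, a property that feeds into the MSSW critique discussed later in this document.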

There are a few properties of the methodology that merit further explanation.

  • Weaknesses that are rarely discovered will not receive a high score, regardless of the typical consequence associated with any exploitation. This makes sense, since if developers are not making a particular mistake, then the weakness should not be highlighted in the CWE Top 25.
  • Weaknesses with a low impact will not receive a high score. This again makes sense, since the inability to cause significant harm by exploiting a weakness means that weakness should be ranked below those that can.
  • Weaknesses that are both common and can cause significant harm should receive a high score.
  • Weaknesses can begin with a root-cause mistake that leads to other mistakes, creating a chain relationship. In this year's analysis, the team attempted to capture chains as accurately as possible without changing the scoring. For any chain "X->Y", both X and Y were included in the analysis as if they were independently listed. Note that as the CWE Team fleshes out chain relationships in partnership with the community, a change in the scoring may be warranted for next year and onwards. That said, this year's list is still comparable with previous years' lists, since those also did not treat chains differently from listings of multiple CWEs.

Limitations of the Methodology

There are several limitations to the data-driven approach used in creating the CWE Top 25.

Some of the most important limitations can be summarized as follows:

  • Data bias
    • Only uses NVD data based on publicly-reported CVE Records
    • Many CVEs do not have sufficient details to assign a CWE mapping, omitting them from ranking
    • There may be over-representation of certain programming languages, frameworks, or weakness-detection techniques
  • Metric bias
    • Indirect prioritization of implementation faults over design flaws
    • In practice, prefers frequency over severity due to distributions of real-world data

Some of this bias will be explained in more detail below.

Data Bias

First, the approach only uses data that was publicly reported and captured in the NVD, and numerous vulnerabilities exist that do not have CVE IDs. Vulnerabilities that are not included in the NVD are therefore excluded from this approach. For example, CVE/NVD typically does not cover vulnerabilities found and fixed before any system has been publicly released, in online services, or in bespoke software that is internal to a single organization. Weaknesses that lead to these types of vulnerabilities may be under-represented in the 2022 CWE Top 25.

Second, even for vulnerabilities that receive a CVE, often there is not enough information to make an accurate (or precise) identification of the appropriate CWE being exploited. Many CVE Records are published by vendors who only describe the impact of the vulnerability without providing details of the vulnerability itself. For example, at least 2,507 CVEs from 2020 and 2021 did not have sufficient information to determine the underlying weakness. In other cases, the CVE description covers how the vulnerability is attacked - but this does not always indicate what the associated weakness is. For example, if a long input to a program causes a crash, the cause of the crash could be due to a buffer overflow, a reachable assertion, excessive memory allocation, an unhandled exception, etc. These all correspond to different, individual CWEs. In other CVE Records, only generic terms are used such as "malicious input," which gives no indication of the associated weakness. For some entries, there may be useful information available in the references, but it is difficult to analyze. For example, a researcher might use a fuzzing program that generates a useful test case that causes a crash, but the developer simply fixes the crash without classifying and reporting what the underlying mistake was.

Third, there is inherent bias in the CVE/NVD dataset due to the set of vendors that report vulnerabilities and the languages that are used by those vendors. If one of the largest contributors to CVE/NVD primarily uses C as its programming language, the weaknesses that often exist in C programs are more likely to appear. Fuzzing tools are especially effective at finding certain classes of weaknesses, such as memory-safety issues, so those may be found and reported in greater numbers. The scoring metric outlined above attempts to mitigate this bias by looking at more than just the most frequently reported CWEs; it also takes into consideration average CVSS score.

Another bias in the CVE/NVD dataset is that most vulnerability researchers and/or detection tools are very proficient at finding certain weaknesses but not others. Those types of weakness that researchers and tools struggle to find will end up being under-represented within the 2022 CWE Top 25.

Finally, gaps or suspected mischaracterizations of the CWE hierarchy itself lead to incorrect mappings. The ongoing remapping work helps the CWE Team learn about these content gaps and issues, which will be addressed in subsequent CWE releases.

Metric Bias

An important bias to understand related to the metric is that it indirectly prioritizes implementation flaws over design flaws, due to their prevalence within individual software packages. For example, a web application may have many different cross-site scripting (XSS) vulnerabilities due to large attack surface, yet only one instance of weak authentication that could compromise the entire application. An alternate metric could be devised that includes the percentage of products within NVD that have at least one CVE with a particular CWE. This kind of metric is often used by application security vendors in their annual analyses.
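A minimal sketch of such a product-based metric follows, assuming each CVE record can be attributed to an affected product; the (product, CWE) pairs are invented. Note how repeated XSS findings in a single product count only once.

```python
# Sketch of an alternate "product coverage" metric: the percentage of
# products with at least one CVE mapped to a given CWE. Data is invented.
from collections import defaultdict

cve_data = [  # hypothetical (product, CWE) pairs
    ("webapp-a", "CWE-79"), ("webapp-a", "CWE-79"), ("webapp-a", "CWE-79"),
    ("webapp-b", "CWE-79"), ("server-c", "CWE-287"),
]

products_per_cwe = defaultdict(set)
all_products = {product for product, _ in cve_data}
for product, cwe in cve_data:
    products_per_cwe[cwe].add(product)  # sets deduplicate repeated findings

for cwe, prods in sorted(products_per_cwe.items()):
    pct = 100 * len(prods) / len(all_products)
    print(f"{cwe}: {pct:.0f}% of products")
```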

Comparison to Measurements of the Most Significant Software Security Weaknesses (MSSW)

One metric limitation was raised in December 2020 by Galhardo, Bojanova, Mell, and Gueye in their ACSC paper "Measurements of the Most Significant Software Security Weaknesses". The authors "find that the published equation highly biases frequency and almost ignores exploitability and impact in generating top lists of varying sizes. This is due to the differences in the distributions of the component metric values." Their proposed scoring formula uniformly distributes the frequency component of weakness scores within the range [0,1), dropping high-prevalence, low-severity weaknesses from the CWE Top 25 and replacing them with less frequent but proportionally higher-severity ones. Mathematically, the redistribution is performed by taking a double logarithm of the frequency data to correct for the exponential distribution of weakness frequencies: log(log(Number of CVEs with CWE_X)). The goal is to distribute CWE frequency scores evenly across the range [0,1).
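A small sketch of this rescaling follows; the counts are hypothetical, and the +2 shift (our simplification to keep the inner logarithm defined for a count of 1) is not taken from the paper.

```python
# Sketch of the MSSW double-log rescaling of CWE frequencies, which
# spreads exponentially distributed counts more evenly across [0, 1).
import math

counts = [1, 5, 50, 500, 4740]  # hypothetical CWE frequencies

# Shift by 2 so log(log(n)) is defined even for a count of 1 (log(1) = 0).
vals = [math.log(math.log(n + 2)) for n in counts]
lo, hi = min(vals), max(vals)
scaled = [(v - lo) / (hi - lo) for v in vals]
print([round(s, 2) for s in scaled])  # e.g. [0.0, 0.28, 0.63, 0.85, 1.0]
```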

The Top 25 Team implemented an experimental version of MSSW and found that the original critique seems to apply to this year's list as well. For example, CWE-79 is ranked #2, yet it has the lowest average CVSS score (5.73) of the entire Top 25 and those On the Cusp. Additionally, under MSSW several weaknesses that are in the standard Top 25 fall more than 10 positions. This experiment was performed while omitting some of the MSSW suggestions; it does not split the Top 25 into two top-20 lists based on higher-level CWEs (pillars/classes) and lower-level CWEs (bases/variants/compounds). The CWE Team separately applied the MSSW suggestion to split higher and lower abstractions into two lists as described by NIST and saw similar results for weaknesses like CWE-79, but more detailed data analysis is still required before full results of the modifications can be shared.

The MSSW paper highlighted a significant concern with the weight of frequency in the Top 25 calculation. The traditional equation currently used by the Top 25, which NIST calls the MDSE, weighs frequency and severity equally; practically, this means the MDSE equates a 100% increase in frequency with a doubling in severity. Since CVSS scores range from 0.0 to 10.0 in increments of 0.1, there are only 101 possible values, so the spread from the lowest to the highest possible score is at most two orders of magnitude. Frequencies, in contrast, range from as low as 1 to as high as 4,740, spanning more than three orders of magnitude.

As the NIST team points out, equal weighting does not uniformly distribute the exponentially distributed CWE frequency scores across the range [0,1). Given the exponential distribution of weakness frequencies, values of the frequency term in the MDSE are compressed into the lower end of [0,1), while severities have a more balanced distribution, although CVSS scores also skew toward higher values due to publication bias favoring higher-severity vulnerabilities.

Comparison to Mason Vulnerability Scoring Framework

The Top 25 Team worked with a group of researchers from George Mason University (GMU), specifically Massimiliano Albanese, to better understand the similarities and differences between the Top 25 and their proposed Top-N methodology. After several discussions and careful analysis of their work published in "Vulnerability Metrics for Graph-Based Configuration Security", the Top 25 Team believes it provides a similar yet distinct approach. The GMU team compares their data and approach against the 2020 Top 25 list as well as NVD data for the years 2018 and 2019, finding roughly 90% correlation with the Top 25 rankings for each of those years. In addition to this verification against the 2020 Top 25, the GMU team leverages Intrusion Detection System (IDS) rules to align CVEs with what those rules can detect. In short, "Vulnerability Metrics for Graph-Based Configuration Security" and the Mason Vulnerability Scoring Framework provide the following key contributions: "(i) a general and extensible formal approach to assess the likelihood that an attacker will attempt to exploit a vulnerability as well as the impact that a successful exploitation would entail; (ii) the use of Intrusion Detection System (IDS) rules in the computation of both likelihood and impact; and (iii) a set of metrics to complement graph models built around vulnerability graphs, including but not limited to the multi-layer graphs generated by SCIBORG [a framework that improves the security posture of distributed systems by examining the impact of configuration changes across interdependent components]."

Considerations for Independently Replicating the Top 25

Some parties might wish to independently replicate the calculation of the Top 25. While the CWE Top 25 Team supports independent replication, the following considerations must be made:

  • NVD regularly adds or changes older CVE Records and continues to publish new CVE-2020-xxxx/CVE-2021-xxxx entries. The reasons for this dynamic nature of NVD are outside the scope of this document.
  • The NVD Team might not integrate all suggested mappings by the Top 25 Team. This is especially the case when a single CVE has multiple CWEs in chaining relationships, as NVD is not yet set up to represent chains.
  • View-1003 can change over the years and across CWE versions, affecting normalization.

Details of Problematic Mappings

This section provides further detail on CWEs that are commonly cited as root-cause weaknesses during vulnerability disclosure but are either inappropriate or uninformative. While these entries are important to understanding the hierarchy of weaknesses in CWE, they can be "problematic" when used for mapping vulnerabilities to weakness types.

Generally, the most problematic CWEs have one or more of the following problems:

  • Confusing a phrase that indicates the technical impact of a vulnerability (e.g., "privilege escalation" or "information leak") with the weakness itself
  • Mapping to a CWE at a high level of abstraction when lower-level CWEs are likely known to the party disclosing the vulnerability
  • Use of vague phrasing when more precise phrasing would be useful
  • Inconsistent phrasing across a wide range of vulnerabilities

The most problematic CWEs are listed below, along with a description of the difficulties that were encountered while performing remapping for the 2022 Top 25.

CWE-200: Exposure of Sensitive Information to an Unauthorized Actor
Many CVEs only described a technical impact from some other weakness, as opposed to describing the mistake (weakness) that caused the information disclosure. Many vulnerabilities with low details, and allowing breach of confidentiality, were mapped to CWE-200. However, over 300 different weaknesses have confidentiality-related impacts such as reading memory, files, etc. More than 100 CVEs were originally mapped to CWE-200 but were remapped to NVD-CWE-noinfo. Even more CVEs would have been remapped to NVD-CWE-noinfo, except for the 2022 methodology change that forced selection of a CNA-chosen CWE mapping if no other information was available.

CWE-20: Improper Input Validation
This was often presented as the sole phrase describing a weakness, with no additional details. Chaining analysis in 2022 unsurprisingly revealed a variety of other weaknesses related to memory safety, injection, and others.

CWE-269: Improper Privilege Management
This was often used in low-information scenarios describing a technical impact like "privilege escalation," but probably mapped because it mentions privileges. Over 300 CVEs were remapped to NVD-CWE-noinfo.

CWE-732: Incorrect Permission Assignment for Critical Resource
While the name itself indicates an assignment of permissions for resources, this is often misused in vulnerabilities in which "permissions" are not checked (an authorization issue).

CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')
When more details are available for a vulnerability, it is often CWE-78 (OS Command Injection). Over 150 CVEs were originally mapped to CWE-77 and remapped to CWE-78 upon closer inspection. More than 200 CVEs were kept at CWE-77, mostly because the only phrase was "command injection" as opposed to injection of some other kind of command besides an OS command.

CWE-284: Improper Access Control
Very high-level. Many CVEs simply said "Improper Access Control" without additional details, but it is suspected that people with knowledge of the details (e.g., the CNAs) could use lower-level options related to authorization, authentication, privileges, etc. A complicating factor is that there are many different interpretations of what the phrase "access control" really means.

CWE-668: Exposure of Resource to Wrong Sphere
Used as a catch-all, when sometimes more specific CWEs can help. Possible overlap with other CWEs. During 2022 remapping, over 50 CVEs were remapped to NVD-CWE-noinfo.

Categories
Some organizations still map to categories. This has been an actively discouraged practice since at least 2019, since categories are informal organizational groupings of weaknesses that help navigation and browsing by CWE users, and they are not weaknesses in themselves.

Pillars and high-level classes
Some organizations map to Pillars and high-level Classes (which are children of Pillars in the Research View, CWE-1000). By their nature, Pillars have very little technical detail, as do high-level Classes. While high-level Classes and Pillars can be useful for education from a theoretical perspective, they are not sufficiently precise for describing the causes of real-world vulnerabilities.

This list is illustrative, but not comprehensive. In the future, the Top 25 Team hopes to provide tools and capabilities that identify CWE IDs that are discouraged or prohibited from use when mapping vulnerabilities to their root cause CWEs.

Emerging Opportunities for Improvement

Despite the current limitations of the remapping task, several activities have taken shape recently that might show positive improvements for NVD/CWE mapping data as used in future Top 25 lists:

  • NIST's Collaborative Vulnerability Metadata Acceptance Process (CVMAP) program continues to gain traction, with positive interactions with CVE Numbering Authorities (CNAs) that are likely to improve CWE mapping quality from those CNAs. Integration of CVMAP into the 2022 remapping can provide useful data with which to give specific feedback to CNAs, both within CVMAP and outside of it. In Q3 and Q4 of 2022, the CWE Program plans to have closer interaction with CNAs to help them provide more precise data.
  • Version 5.0 of the CVE JSON Record format includes direct support for including CWE mappings in CVE Records, which seems likely to improve the quality and precision of CWE mappings. Adoption is still under way, but significant progress has been made since 2021.
  • In March 2021, the CWE Program released CVE->CWE Mapping Guidance, which makes it easier for CNAs and other parties to perform the technical task of finding appropriate CWE mappings for their vulnerabilities. Custom guidance has been developed internally for various types of weaknesses and shared with NIST; providing this guidance more widely may help. As the Top 25 methodology uses NVD data from the previous two calendar years, the impact of this guidance is only starting to appear in the data itself.
  • Additional training programs may help to educate CNAs, vulnerability researchers, and others to provide more consistent, precise CWE mappings.
  • More education about "problematic CWEs" that are frequently misused or mapped inappropriately, possibly including some tooling for organizations to assess their own CWE usage. More details are in the Details of Problematic Mappings section above.

Community-Wide Strategies for Improving Mappings

While the Top 25 has improved year-over-year since 2019, the overall rate of change remains relatively slow, as reflected in the percentage of classes that are still in the 2022 list. The Top 25 team believes that greater community engagement will help to improve the quality of future Top 25 lists, and the overall quality and precision of CWE mappings for reported CVEs.

Over the next six months, the Top 25 Team will consider changes such as:

  • Improving the expressiveness of View-1003. Some high-level CWEs, including CWE-20 and CWE-200, do not have many children under View-1003. Sometimes higher-level CWEs are not available in the view, which may force third-party analysts to choose inappropriate lower-level mappings. For example, the phrase "Improper Access Control" has the same name as CWE-284, but if there are no other details, an analyst might be tempted to remap to a lower-level CWE even when it is not certain.
  • Conducting more direct engagement with CNAs:
    • Work directly with the CVE Quality Working Group (QWG) to help CNAs improve their mappings
    • Work with individual CNAs who publish many CVEs and are relatively mature in their CWE adoption but can improve their own CWE support in ways that influence their consumers and the community
    • Work with community stakeholders to improve understanding and distinguishing between technical impacts and weaknesses
    • Provide training tutorials at community gatherings (e.g., CVE Global Summit or a public educational series)
  • Providing a capability for external parties - CNAs, vulnerability researchers, and others - to assess whether they are mapping to commonly-misused CWEs.

Possibilities for the Future of the Top 25

The Top 25 Team has followed primarily the same methodology for the past four years. The team is likely to make major modifications next year, which will affect how the list is generated and may cause significant shifts. For example:

  • Support generation of custom or more specialized, domain-specific Top-N lists (e.g., a Top 25 for mobile applications or web applications). This year, the CISA KEV list is presented as an example. The team also generated an experimental Top-N for all CISA ICSA advisories (from 2020 and 2021), which showed even more variability compared to the main list and suggests that customized Top-N lists would be beneficial. The CWE Team has heard that new Top-N list publications would be welcomed by the community.
  • Analysis could be adjusted to focus on CVEs that are published during a given year (or 2-year period), instead of selecting CVEs based on the year in their ID (e.g., CVE-2020-xxxx/CVE-2021-xxxx). As a result, there would be less of a need to update NVD versions in the middle of the remapping activity, which introduces unpredictability. However, such an approach would need to leave enough time for CVEs to "mature" and evolve (i.e., get additional references, description changes, CWE mappings from CNAs, etc.) - otherwise Top 25 analysts will have less detailed data to work with, increasing the odds of lower-quality mappings and reducing the accuracy and precision of the Top 25 list.
  • Reconsider the use of a 2-year time window for the NVD dataset, e.g., using only data from the previous year.
  • Consider changing the metrics used to generate the list to minimize some of the bias as discussed in the Limitations of the Methodology section.
  • Change the sampling methodology in ways that might better lead to higher-quality mappings in the future. For example, CWE-120: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') seems likely to be over-used because it is one of the few CWEs with "buffer overflow" in its name; CNAs might mistakenly map to it if they are not careful enough to identify the core mistake, i.e., not checking the input size at all. In other cases, the close association between CWE-352 (CSRF) and CWE-79 (XSS) can cause confusion and incorrect mappings, because sometimes CSRF is a prerequisite for an attacker to exploit a different weakness, or a way to increase the scope of an attack, and analysts might map to CWE-352 even when the vulnerability is about a different weakness. CWE-120 and many other lower-level CWEs have not received significant attention because they are not classes.
  • Reduce the need to conduct large numbers of remappings. The Top 25 team believes that performing so many remappings annually is not sustainable over the long term, as they are labor-intensive and time-consuming. This problem is symptomatic of a more pressing need: improvement in mapping quality by the community. Community engagement and training may lessen the need for remappings. An emphasis on domain-specific Top-N lists may require fewer mappings as well. Fewer remaps could be performed outright, but this could indirectly cause some shifts within the resulting Top 25 because classes might not get the attention they currently deserve, and inappropriate CNA mappings might be missed. Since the ultimate goal is for the community to produce higher-quality mappings, greater community engagement is necessary.
  • Enhance View-1003. This view has not received a close review in a couple years, potentially reducing the ability to track weaknesses that are new to the CWE corpus. In addition, some high-level classes within View-1003 have few children that could be added to give more precise options to users. For example, CWE-20 only has one child under View-1003 as of CWE 4.8, but it may be helpful to include more-specific children related to validation of quantities, equivalence, syntax, etc. An informal, preliminary analysis suggests that some of these changes could significantly influence the rankings for some CWEs.
  • Perform "normalization" using a different view besides View-1003. This would cause clear discrepancies that would make replication of results more difficult.
  • Possibly do more regular remaps throughout the year, which can give more timely feedback to NVD staff, CNAs, and others so that community-wide improvements can be made sooner. This might be one way to manage the large number of remaps that are currently compressed into a short time frame.
  • Incorporate chain relationships properly in the methodology and work with the NIST NVD Analysis Team as well as the community to determine the best approach for scoring them.

Note that even if these changes are ultimately successful and better-quality mappings are produced, the benefits might not be realized for some time; for example, as of the 2022 Top 25 release date, CVE data from the first six months of 2022 had been mapped using the older methodology.

Acknowledgments

The 2022 CWE Top 25 Team includes (in alphabetical order): Alec Summers, Cathleen Zhang, Connor Mullaly, David Rothenberg, Jim Barry Jr., Kelly Todd, Luke Malinowski, Robert L. Heinemann, Jr., Rushi Purohit, Steve Christey Coley, and Trent DeLor. Members of the NIST NVD Analysis Team that coordinated on the Top 25 include Aleena Deen, Christopher Turner, David Jung, Robert Byers, Tanya Brewer, Tim Pinelli, and Vidya Ananthakrishna. Finally, thanks also to the broader CWE community for suggesting improvements to the process.

Archive

Past versions of the CWE Top 25 are available in the Archive.
