2011 CWE/SANS Top 25 Questions & Answers

How is this different from the OWASP Top Ten?

The short answer is that the OWASP Top Ten covers more general concepts and is focused on web applications. The CWE Top 25 covers a broader range of issues than arise from the web-centric view of the OWASP Top Ten, such as buffer overflows. Also, one goal of the CWE Top 25 is to be at a level that is directly actionable for programmers, so it contains more detailed issues than the categories used in the Top Ten. There is some overlap, however, since web applications are so prevalent and some issues in the Top Ten apply broadly to all classes of software.

How are the weaknesses prioritized on the list?

The 2011 list was built using a survey of 25 organizations, which ranked potential weaknesses based on their prevalence and importance; this provides some quantitative support for the final rankings.

What happened to input validation? It was number one last year, and now it's gone.

It's not gone; it has moved to the Monster Mitigations section. A number of general-purpose CWE entries were removed from the Top 25 because they overlapped with other items. This also made room for other, more specific weaknesses to be listed.

Why did you remove the threat model from the 2010 list?

While this was a useful exercise in 2009, the Top 25's audience is too general to select a single threat model for it. This year, the use of focus profiles is intended to help prioritize items by other means.

Why don't you use hard statistics to back up your claims?

The appropriate statistics simply aren't publicly available at a low enough level of detail, although much progress was made in 2009 and 2010. The statistics that are publicly available are either too high-level or not comprehensive, and none of them cover all software types and environments.

For example, in the CVE vulnerability trends report for 2006, 25% of all reported CVEs either had insufficient information or could not be characterized using CVE's 37 flaw type categories. That same report covers only publicly reported vulnerabilities, so the types of issues may be distinct from what security consultants find (and consultants are often prohibited from disclosing their findings).

Finally, some of the Top 25 entries capture the primary weaknesses (root causes) of vulnerabilities - such as CWE-20, CWE-116, and CWE-73. However, these root causes are rarely reported or tracked.
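As a rough illustration of that root-cause distinction (a hypothetical Python sketch, not drawn from any CWE entry or advisory), the underlying weakness below is improper output encoding in the spirit of CWE-116, while a public report would typically name only the resulting cross-site scripting (CWE-79):

```python
import html

def render_comment(comment: str) -> str:
    # Root cause (CWE-116-style): user-supplied text is inserted into
    # HTML without encoding. An advisory would usually report the
    # resulting XSS (CWE-79), not this underlying omission.
    return "<div class='comment'>" + comment + "</div>"

def render_comment_encoded(comment: str) -> str:
    # Addressing the root cause: encode the data for the HTML context.
    return "<div class='comment'>" + html.escape(comment) + "</div>"
```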

Why is so much stuff related to web applications?

Even products that aren't web-based often have web applications within them - for example, a web-based management interface, an HTML converter or renderer, etc.

Also, only CWE-79 (XSS) and CWE-352 (CSRF) are unique to web applications. The other entries apply to other classes of software, even if they are extremely common in web applications; SQL injection (CWE-89), for instance, is not unique to web applications. In other words, the apparent bias toward web applications isn't as bad as it might seem. To convince yourself of this, look at the observed examples and demonstrative examples in the individual pages on the CWE web site.
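To make that concrete, here is a minimal sketch (a hypothetical command-line tool using Python's standard sqlite3 module; the database file and table are assumptions, not anything from the CWE-89 entry) showing SQL injection in software with no web component at all:

```python
import sqlite3
import sys

def find_user(db_path: str, username: str) -> list:
    # Assumes db_path points at a SQLite database with a "users" table.
    conn = sqlite3.connect(db_path)
    try:
        # CWE-89: the username is concatenated directly into the SQL
        # statement, so input such as  ' OR '1'='1  changes the query's
        # meaning even though no web server is involved.
        query = "SELECT id, name FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()
    finally:
        conn.close()

def find_user_safe(db_path: str, username: str) -> list:
    conn = sqlite3.connect(db_path)
    try:
        # Parameterized query: the driver treats username strictly as data.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    # e.g.  python finduser.py accounts.db alice
    print(find_user_safe(sys.argv[1], sys.argv[2]))
```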

Why did you include mitigations in the Top 25 document instead of just pointing to them on the CWE site?

Since developers are one of the primary audiences, we believed that much of their focus would be on mitigations, and many developers might want to print out the Top 25 for consultation during programming, so including the mitigations is a matter of convenience for them. In addition, the mitigations in the CWE entries may change over time. In future Top 25 lists, we will make sure that the mitigations are consistent with their CWE entries at the time of release.

Why did you include design problems? I thought this was for programmers.

One intended audience is developers, not just programmers. Also, in some software shops, programmers do some amount of local design in addition to implementation.

What's the difference between a "weakness" and a "vulnerability"?

That's actually a deep philosophical question with a long, convoluted-sounding answer. The short answer is that a weakness is simply a general type of developer error, independent of the context in which the error occurs. Sometimes it might not have any security relevance at all. However, when the weakness appears in deployed software, in a place in the code that may be accessible to attackers, it could turn into a vulnerability if the conditions are right. Those conditions might include the presence of other weaknesses, for example.
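As a rough sketch of that distinction (hypothetical Python code assuming a report directory; not an example from CWE itself), the same weakness, unvalidated external control of a file name in the spirit of CWE-73/CWE-22, only becomes a candidate vulnerability once attacker-influenced data can actually reach it:

```python
import os

BASE_DIR = "/var/app/reports"  # hypothetical report directory

def read_report(name: str) -> str:
    # Weakness: the file name is joined without validation, so a value
    # such as "../../etc/passwd" escapes BASE_DIR (path traversal).
    with open(os.path.join(BASE_DIR, name)) as f:
        return f.read()

def print_fixed_summary() -> None:
    # Not (yet) a vulnerability: the weakness is present, but the input
    # is a hard-coded constant that an attacker cannot influence.
    print(read_report("summary.txt"))

def handle_request(untrusted_name: str) -> None:
    # Potentially a vulnerability: the same weak code, now reachable
    # with data an attacker may control (e.g., a field parsed from a
    # network request or an untrusted file).
    print(read_report(untrusted_name))
```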

Further complicating the issue are the diverse perspectives and inconsistent terminology used within software security, which is still a relatively new field (especially compared to, say, bridge engineering). People with different perspectives may use the same terminology or concentrate on different aspects of a problem. For example, depending on the speaker, the phrase "buffer overflow" could mean the type of programmer error, the type of attack, or the type of consequence. CWE (and the Top 25) have to be usable by many people with these different perspectives, so there is bound to be confusion until the industry matures.

Why does this list have so many of the usual suspects? What about the new weaknesses that are just starting to become a concern?

Consider the "On the Cusp" page as well as the focus profile for established secure developers.

How is this different from the Seven Pernicious Kingdoms taxonomy?

Seven Pernicious Kingdoms (7PK) does not prioritize the bugs that it contains, so it includes bugs of widely varying severity and prevalence. Both 7PK and the Top 25 have developers as a target audience, but 7PK focuses on bugs that are introduced during implementation, while the Top 25 covers problems that may arise in implementation as well as in design, configuration, installation, and other SDLC phases. 7PK also uses a relatively small number of categories to make it easier for developers to remember, whereas the Top 25 is a flat list that aims for broader coverage at a small cost in usability. Finally, the Top 25 emerged from the direct input of dozens of individuals and is backed by extensive information in the CWE entries themselves. With all that said, we believe that Seven Pernicious Kingdoms is an important taxonomy for understanding software weaknesses, and its influence is reflected in much of CWE.
