Frequently Asked Questions (FAQ)
How is this different from the OWASP Top Ten?
The short answer is that the OWASP Top Ten covers more general
concepts and is focused on web applications. The CWE Top 25 covers a
broader range of issues than those that arise from the web-centric
view of the OWASP Top Ten, such as buffer overflows. Also, one goal of
the CWE Top 25 is to be at a level that is directly actionable to
programmers, so it contains more detailed issues than the categories
used in the Top Ten. There is some overlap, however, since web
applications are so prevalent, and some issues in the Top Ten apply
generally to all classes of software.
How are the weaknesses prioritized on the list?
With the exception of Input Validation being listed as number 1
(partially for educational purposes), there is no concrete
prioritization. Prioritization differs widely depending on the
audience (e.g. web application developers versus OS developers) and on
risk tolerance (whether code execution, data theft, or denial of
service is most important). It was also believed that the use of
categories would help organize the document, and that prioritization
would have imposed a different ordering.
Why are you including overlapping concepts like input validation and
XSS, or incorrect calculation and buffer overflows? Why do you have
mixed levels of abstraction?
While it would have been ideal to have a fixed level of abstraction
and no overlap between weaknesses, there are several reasons why this
was not achieved.
Contributors sometimes suggested different CWE identifiers that were
closely related. In some cases, this difference was addressed by
using a more abstract CWE identifier that covered the relevant cases.
In other situations, there was strong advocacy for including
lower-level issues such as SQL injection and cross-site scripting, so
these were added. The general trend, however, was to use more
abstract weakness types.
While it might be desirable to minimize overlap in the Top 25, many
vulnerabilities actually involve the interaction of two or more
weaknesses. For example, external control of user state data
(CWE-642) could be an important weakness that enables cross-site
scripting (CWE-79) and SQL injection (CWE-89). Eliminating overlap
in the Top 25 would lose some of this important subtlety.
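To make that interaction concrete, here is a minimal sketch in Python
(hypothetical function and table names, not taken from any CWE entry):
a single piece of externally controlled state, a "role" value read
from a client-supplied cookie, is spliced into both a SQL query and an
HTML response, so the one weakness (CWE-642) enables both CWE-89 and
CWE-79 in the same handler.

    import html
    import sqlite3

    def render_dashboard_unsafe(cookie_role, db):
        # CWE-89: attacker-controlled state concatenated into the query text.
        rows = db.execute(
            "SELECT name FROM users WHERE role = '" + cookie_role + "'"
        ).fetchall()
        # CWE-79: the same value echoed into HTML without escaping.
        return "<h1>Users with role " + cookie_role + "</h1>" + str(rows)

    def render_dashboard_safe(cookie_role, db):
        # A parameterized query and escaped output neutralize both weaknesses,
        # though the deeper fix is not to trust client-side state at all.
        rows = db.execute(
            "SELECT name FROM users WHERE role = ?", (cookie_role,)
        ).fetchall()
        return "<h1>Users with role " + html.escape(cookie_role) + "</h1>" + str(rows)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.executemany("INSERT INTO users VALUES (?, ?)",
                         [("alice", "admin"), ("bob", "guest")])
        # "' OR '1'='1" makes the unsafe query return every user; a value
        # containing "<script>" would instead exploit the unescaped HTML.
        print(render_dashboard_unsafe("' OR '1'='1", conn))
        print(render_dashboard_safe("' OR '1'='1", conn))

The point of the sketch is not the particular fix, but that removing
the shared root cause (trusting client-side state) addresses both
symptoms at once.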
Finally, it was a conscious decision to include design-related
weaknesses when their prevalence and severity warranted it. These are
often thought of as more abstract than weaknesses that arise during
implementation.
The Top 25 list tries to strike a delicate balance between usability
and relevance, and we believe that it does so, even with this apparent
imperfection.
Why don't you use hard statistics to back up your claims?
The appropriate statistics simply aren't publicly available. The
statistics that are public are either too high-level or too narrow,
and none of them are comprehensive across all software types and
environments.
For example, in the CVE vulnerability trends report for 2006, 25% of
all reported CVEs either had insufficient information or could not
be characterized using CVE's 37 flaw type categories. That same
report only covers publicly reported vulnerabilities, so the types of
issues may be distinct from what security consultants find (and
they're often prohibited from disclosing their findings).
Finally, some entries in the Top 25 capture the primary weaknesses
(root causes) of vulnerabilities, such as CWE-20, CWE-116, and CWE-73.
However, these root causes are rarely reported or tracked.
Why is so much stuff related to web applications?
Only CWE-79 (XSS) and CWE-352 (CSRF) are unique to web applications.
Other entries, such as SQL injection, apply to other classes of
software as well, even though they may be extremely common in web
applications. In other words, the apparent bias toward web
applications isn't as bad as
it might seem. To convince yourself of this, look at the observed and
demonstrative examples in the individual pages on the CWE web site.
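As one hedged illustration (hypothetical names, not drawn from a CWE
entry), SQL injection looks exactly the same in a command-line tool
that never touches a browser; the query is just as injectable whether
the hostile string arrives from argv, a file, or a web form:

    import sqlite3
    import sys

    def find_parts(db, part_name):
        # Injectable variant (CWE-89), regardless of where part_name came from:
        #   db.execute("SELECT * FROM parts WHERE name = '" + part_name + "'")
        # Parameterized variant avoids the weakness:
        return db.execute(
            "SELECT * FROM parts WHERE name = ?", (part_name,)
        ).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE parts (name TEXT, qty INTEGER)")
        conn.execute("INSERT INTO parts VALUES ('bolt', 40)")
        print(find_parts(conn, sys.argv[1] if len(sys.argv) > 1 else "bolt"))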
Why did you include mitigations in the Top 25 document instead of just
pointing to them on the CWE site?
Since developers are one of the primary audiences, it was believed
that much of their focus would be on mitigations, and many developers
might want to print out the Top 25 for consultation during
programming. So, it's a matter of convenience for them. In addition,
the mitigations in the CWE entries may change over time. In future
Top 25 lists, we will ensure that the mitigations are consistent with
their CWE entries at the time of release.
Why did you include design problems? I thought this was for
programmers.
One intended audience is developers, not just programmers. Also, in
some software shops, programmers do some amount of local design in
addition to implementation.
What's the difference between a "weakness" and a "vulnerability"?
That's actually a deep philosophical question with a long,
convoluted-sounding answer. The short answer is that a weakness is
simply a general type of developer error, independent of the context
that the error occurs in. Sometimes, it might not have any security
relevance at all. However, when the weakness appears in deployed
software, in a place in the code that may be accessible to attackers,
it could turn into a vulnerability if the conditions are right. Those
conditions might include the presence of other weaknesses, for
example.
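A small sketch may make the distinction concrete (Python, with
hypothetical names and paths, and using directory traversal rather
than any particular Top 25 entry): the same coding error exists behind
both call sites below, but only the one that attacker-controlled input
can reach is a vulnerability.

    import os

    DOC_ROOT = "/srv/app/docs"  # hypothetical document directory

    def read_doc(filename):
        # The weakness: filename is joined to DOC_ROOT without rejecting
        # "../" sequences, so it can escape the intended directory.
        with open(os.path.join(DOC_ROOT, filename), "rb") as f:
            return f.read()

    def startup_banner():
        # Same weak function, but called only with a hard-coded constant that
        # no attacker influences: a latent weakness, not yet a vulnerability.
        return read_doc("banner.txt")

    def handle_request(user_supplied_name):
        # Here the weak function is reachable with attacker-controlled input
        # (e.g. "../../etc/passwd"), so the weakness becomes a vulnerability.
        return read_doc(user_supplied_name)

A change elsewhere in the program, say routing user input into
startup_banner's call path, could promote the latent weakness into a
vulnerability without touching read_doc at all.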
Further complicating the issue are the diverse perspectives and
inconsistent terminology used within software security, which
is still a relatively new field (especially compared to, say, bridge
engineering). People with different perspectives may use the same
terminology, or concentrate on different aspects of a problem. For
example, depending on the person speaking, the phrase "buffer
overflow" could mean the type of programmer error, the type of attack,
or the type of consequence. CWE (and the Top 25) have to be usable by
many people with these different perspectives, so there is bound to be
confusion until the industry matures.
Why does this list have so many of the usual suspects? What about the
new weaknesses that are just starting to become a concern?
We received great feedback from software vendors and consultants who
are probably far ahead of the average developer in terms of code
security. As such, they have typically addressed the basic problems
and are running into newer weaknesses. We are trying to balance the
experiences of these cutting-edge experts with what the everyday
developer may encounter.
Those closer to the cutting edge can consider examining the
lower-level CWE entries (e.g. CWE-119 has some newer variants). They
can also review the "On the Cusp" appendix, which lists other CWEs
that almost made it onto the Top 25. This appendix is on the CWE web
site.
How is this different from the Seven Pernicious Kingdoms taxonomy?
Seven Pernicious Kingdoms (7PK) does not prioritize the bugs that it
contains, so it includes bugs of widely varying severity and
prevalence. Both Seven Pernicious Kingdoms and the Top 25 have
developers as a target audience, but 7PK focuses on bugs that are
introduced during implementation, while the Top 25 covers problems
that may arise in implementation as well as design, configuration,
installation, and other SDLC phases. The Seven Pernicious Kingdoms
taxonomy also uses a relatively small number of categories to make it
easier for developers to remember, whereas the Top 25 is a flat list
that aims for broader coverage at a small cost in usability.
Finally, the Top 25 emerged from the direct input of dozens of
individuals and is backed with extensive information in the CWE
entries themselves. With all that said, we believe that Seven
Pernicious Kingdoms is an important taxonomy in the understanding of
software weaknesses, and its influence is reflected in much of CWE.