Process

The 2010 version of the Top 25 list builds on the original 2009 version. Approximately 40 software security experts provided feedback, including software developers, scanning tool vendors, security consultants, government representatives, and university professors. Representation was international.

The primary means of communication was through a private discussion list, with the most activity occurring over a period of about 6 weeks. In 2009, there were multiple iterations of drafts. This year, discussion was more focused on one or two areas at a time, and drafts of smaller sections were posted for review.

Many Top 25 contributors advocated a quantitative, data-driven approach. However, while more data is available in 2010 than there was in 2009, it still lacks the scale and precision needed to support such an approach. Even so, it is heartening to see more raw data being generated.

The construction and development of the Top 25 occurred over the following phases. Note that these phases are approximate, since many activities overlapped.

Preparation of the Nominee List

Top 25 participants were asked to re-evaluate the 2009 Top 25. For each entry, they were asked whether to "Keep" or "Remove" it in the 2010 list. They were also given the list of "On the Cusp" items from the 2009 Top 25, as well as additional entries that had appeared more frequently in CVE data in recent years. Participants could suggest new entries for addition, whether from "On the Cusp" or their own nominees. The 2009 Top 25 was restructured to move some original entries to the mitigations section and to provide lower-level entries instead of abstract ones; in some cases, this forced the creation of new CWE entries (see Appendix B for details). If there was active advocacy for any potential entry, it was added to the Nominee List.

Selection of Factors for Evaluation

After some brief discussion, it was decided to use two factors, Prevalence and Importance. While stringent definitions were originally desired for each, more flexible definitions were created to accommodate the diverse roles within the software security community.

Prevalence would be evaluated according to this criterion: "For each 'project' (whether a software package, pen test, educational effort, etc.), how often does this weakness occur or otherwise pose a problem?"

For Importance, the criterion was: "If this weakness appears in software, what priority do you use when making recommendations to your consumer base? (e.g. to fix, mitigate, educate)."
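
The two factors can be pictured as small ordinal scales. In the minimal Python sketch below, only the top labels, "Widespread" for Prevalence and "Critical" for Importance, come from this document; the remaining level names and all numeric values are illustrative assumptions, not the published scales.

    from enum import Enum

    class Prevalence(Enum):
        WIDESPREAD = 3    # named in this document; capped at 4 uses per ballot
        HIGH = 2          # assumed intermediate level
        COMMON = 1        # assumed intermediate level
        LIMITED = 0.5     # assumed lowest level

    class Importance(Enum):
        CRITICAL = 3      # named in this document; capped at 4 uses per ballot
        HIGH = 2          # assumed intermediate level
        MEDIUM = 1        # assumed intermediate level
        LOW = 0.5         # assumed lowest level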

Nominee List Creation and Voting

After the factors were decided and the final Nominee List was created (with a total of 41 entries), participants were given a voting ballot containing these nominees. For each nominee entry, the ballot provided space for a Prevalence rating, an Importance rating, and any associated comments.
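
A ballot entry can be modeled as a simple record, as in the hedged sketch below. The actual ballot was a manually completed form, so the class, field names, and example values here are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class BallotEntry:
        nominee_id: str     # e.g. a CWE identifier such as "CWE-79"
        prevalence: str     # e.g. "Widespread"
        importance: str     # e.g. "Critical"
        comments: str = ""  # optional free-text remarks

    # One voter's ballot: a list of entries, one per rated nominee
    # (the full Nominee List had 41 entries).
    ballot = [
        BallotEntry("CWE-79", "Widespread", "Critical", "seen in most projects"),
        BallotEntry("CWE-89", "High", "Critical"),
    ]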

Participants were given approximately one week to evaluate the items and vote on them. This was ultimately extended by three additional days.

Since voting was conducted by manually filling out a form, some ballots contained inconsistencies or incomplete results. Each such case triggered an email exchange until a final, valid ballot was settled on. The main area of contention was the limit of 4 ratings each for "Critical" importance and "Widespread" prevalence.
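
The consistency check implied above can be sketched as follows. The only rule taken from the text is the cap of 4 "Critical" and 4 "Widespread" ratings per ballot; the function name, data layout, and messages are hypothetical.

    def validate_ballot(entries):
        """Check one ballot; entries are dicts with 'id', 'prevalence',
        and 'importance' keys. Returns a list of problems found."""
        problems = []
        criticals = sum(1 for e in entries if e.get("importance") == "Critical")
        widespreads = sum(1 for e in entries if e.get("prevalence") == "Widespread")
        if criticals > 4:
            problems.append(f"{criticals} 'Critical' ratings; at most 4 allowed")
        if widespreads > 4:
            problems.append(f"{widespreads} 'Widespread' ratings; at most 4 allowed")
        # Incomplete entries also had to be resolved by email.
        for e in entries:
            if not e.get("prevalence") or not e.get("importance"):
                problems.append(f"{e.get('id', '?')}: incomplete ratings")
        return problems

    # Example: the second entry is missing its Importance rating.
    ballot = [
        {"id": "CWE-79", "prevalence": "Widespread", "importance": "Critical"},
        {"id": "CWE-89", "prevalence": "Widespread", "importance": ""},
    ]
    print(validate_ballot(ballot))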

Voting rules were refined to ensure that each organization submitted only one ballot, to avoid the possibility that a small number of organizations could bias the results too heavily. Generally, organizations that needed to merge multiple ballots found the exercise informative in clarifying their own perspectives among themselves.

Selection of Metrics

During the voting period, participants were provided with several possible methods for scoring ballots and devising metrics. Some proposals simply added the Prevalence and Importance factors together. One proposal used the squares of each factor. Others used different weights or value ranges. Some proposed assigning higher values to "Widespread" prevalence and "Critical" importance, since these ratings were artificially limited to 4 per factor per voter.
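
The proposal styles just described can be sketched as candidate scoring functions. The numeric mappings and weights below are assumptions chosen for illustration; none of these functions is presented as the metric that was actually chosen.

    # Assumed numeric mappings for the rating labels (illustrative only).
    PREVALENCE = {"Widespread": 3, "High": 2, "Common": 1, "Limited": 0.5}
    IMPORTANCE = {"Critical": 3, "High": 2, "Medium": 1, "Low": 0.5}

    def additive(prev, imp):
        # Proposal style 1: add the two factors together.
        return PREVALENCE[prev] + IMPORTANCE[imp]

    def squared(prev, imp):
        # Proposal style 2: use the squares of each factor.
        return PREVALENCE[prev] ** 2 + IMPORTANCE[imp] ** 2

    def weighted(prev, imp, w_prev=1.0, w_imp=2.0):
        # Proposal style 3: different weights or value ranges (weights assumed).
        return w_prev * PREVALENCE[prev] + w_imp * IMPORTANCE[imp]

    def boosted(prev, imp):
        # Proposal style 4: extra value for the capped top ratings.
        bonus = (2 if prev == "Widespread" else 0) + (2 if imp == "Critical" else 0)
        return additive(prev, imp) + bonus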

The candidate metrics were then evaluated for validity by a skilled statistician, using several methods including chi-square tests.
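
The text does not say how chi-square was applied, so the following setup is purely an assumed illustration: a goodness-of-fit test comparing the observed spread of Importance ratings for one hypothetical nominee against a uniform baseline.

    from scipy.stats import chisquare

    # Hypothetical counts of Importance ratings one nominee received.
    observed = [12, 9, 6, 3]            # "Critical", "High", "Medium", "Low"
    expected = [sum(observed) / 4] * 4  # uniform baseline (assumption)

    stat, p_value = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")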

The final selection of a metric took place once the validation was complete.

Selection of Final List (General Ranking)

After the metric was selected and the remaining votes were tallied, the Final List was assembled using the following process (a code sketch follows the list).
  • For each weakness in the Nominee List, all associated votes were collected. Each vote carried a Prevalence rating and an Importance rating, from which a sub-score was computed using the selected metric. The sub-scores for each weakness were then added together.
  • The Nominee List was sorted based on the aggregate scores.
  • The weaknesses with the 25 highest scores were selected from the sorted list.
  • The remaining weaknesses were added to the "On the Cusp" list.
  • Some of the originally proposed metrics were considered for use in additional Focus Profiles.
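
Taken together, the steps above amount to a short aggregation pipeline, sketched below. The metric is a placeholder (a plain product of assumed numeric ratings), since the actual selected metric is not given here.

    from collections import defaultdict

    # Assumed numeric mappings for the rating labels (illustrative only).
    PREVALENCE = {"Widespread": 3, "High": 2, "Common": 1, "Limited": 0.5}
    IMPORTANCE = {"Critical": 3, "High": 2, "Medium": 1, "Low": 0.5}

    def metric(prev, imp):
        # Placeholder metric; the actual selected metric is not specified here.
        return PREVALENCE[prev] * IMPORTANCE[imp]

    def rank(votes):
        """votes: iterable of (cwe_id, prevalence, importance) tuples."""
        scores = defaultdict(float)
        for cwe_id, prev, imp in votes:
            scores[cwe_id] += metric(prev, imp)  # sum of per-vote sub-scores
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[:25], ranked[25:]  # Top 25, then "On the Cusp"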