Common Weakness Scoring System (CWSS) - Changes and Discussion

Summary of Changes in CWSS 0.4

Date: April 27, 2011

Thanks to the feedback from numerous organizations and individuals, CWSS 0.4 contains a number of significant changes compared to CWSS 0.3.

Separating CWSS and the new CWRAF

The most systemic change is the separation of the vignette/domain/scorecard framework from the metric definition itself. Presenting these two ideas together frequently caused confusion, and they usually have different audiences.

We have created the Common Weakness Risk Analysis Framework (CWRAF) to handle the vignette-related concepts. CWSS is now focused solely on the metrics and formulas, which keeps it in alignment with similar metrics efforts including CVSS, CMSS, and CCSS.

CWRAF can still be optionally used to influence how the CWSS scores are generated, but the two efforts are now logically separated. We are open to timely suggestions for an alternate acronym, although it needs to contain the words "Common," "Weakness," and "Framework."

Both CWRAF and CWSS remain part of the CWE project, which is co-sponsored by the Software Assurance program in the National Cyber Security Division (NCSD) of the US Department of Homeland Security (DHS).

Documentation and Messaging, especially CVSS

Our documentation and descriptive materials on the web site, while extensive, caused a lot of confusion and misconceptions. We have made some improvements to the web pages to hone our message. Since CWRAF and CWSS can have different audiences (one more technical than the other), the split will help us to align our material more closely with the appropriate audience. Also, the CWRAF and CWSS 0.4 web pages each contain pictures that help to explain some of the concepts more clearly. However, work still remains in this area.

We are refining our message to better emphasize how CWSS is different from CVSS. Many people apparently believe that we are creating a competitor to CVSS, even though the two efforts mostly operate at different stages of the software lifecycle. People often equate "weakness" with "vulnerability," and we have not sufficiently emphasized some of the primary use-cases for CWSS, such as consistently scoring thousands of individual findings from an automated security code scanner. At the time the findings are generated, it is not yet certain that a vulnerability exists. The CWSS 0.3 documentation did contain a comparison with CVSS, but it was relegated to an appendix.

By better emphasizing the difference between the two efforts, we will be able to get more focused feedback and remove misunderstanding as a barrier to adoption.

There have been some recommendations to have CWSS "look like" CVSS as much as possible, e.g. by using the Base/Temporal/Environmental metric groups. This was attempted during the development of CWSS 0.4 and then abandoned. Since CWSS typically operates very early in the lifetime of a discovered weakness - before it is even known to contribute to a specific vulnerability - very little information is available when scoring first occurs, which means that most factors change over time; that is, most factors could be "Temporal." Similarly, since CWSS is designed from the ground up to support customization, most factors are also "Environmental."

We have received many comments saying that CWSS is too complex. We have addressed many of those concerns by separating CWRAF vignettes from core CWSS. Also, some reviewers appear to believe that CWSS scoring requires manual analysis; while extensive manual analysis is required for CVSS, this is not as necessary for CWSS. For most use-cases, we anticipate that CWSS scores will be automatically generated (or automatically updated as additional information becomes available). We envision use of an automated tool that can "interview" the consumer to obtain their context-specific requirements. The results of this interview could then be combined with automated analysis results to provide appropriate context for CWSS scores. For example, the interview application could help the consumer to select a CWRAF vignette, which could then be used to state which factors are Not Applicable, influence the calculation of the Technical Impact and Business Impact, etc.
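As a rough illustration only (not part of the CWSS or CWRAF specifications), the following Python sketch shows how such an interview-derived context might be overlaid on raw scanner findings before scores are computed; all names, fields, and values are hypothetical.

```python
# Illustrative sketch only: a hypothetical flow in which a consumer "interview"
# selects a CWRAF-style vignette, and that context is applied to scanner
# findings before CWSS scores are computed. Names and fields are invented for
# illustration; they are not part of the CWSS or CWRAF specifications.

def apply_vignette_context(finding, vignette):
    """Overlay vignette-derived context onto a single scanner finding."""
    scored = dict(finding)
    # Factors the vignette declares irrelevant are marked Not Applicable.
    for factor in vignette.get("not_applicable", []):
        scored[factor] = "Not Applicable"
    # The vignette can also supply Technical/Business Impact context.
    scored["TechnicalImpact"] = vignette["impacts"].get(
        finding["cwe_id"], scored.get("TechnicalImpact", "Medium"))
    scored["BusinessImpact"] = vignette.get("business_impact", "Medium")
    return scored

# Example: a vignette chosen via the interview, applied to one finding.
vignette = {
    "not_applicable": ["ExternalControlEffectiveness"],
    "impacts": {"CWE-89": "Critical"},
    "business_impact": "High",
}
finding = {"cwe_id": "CWE-89", "TechnicalImpact": "Unknown"}
print(apply_vignette_context(finding, vignette))
```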

There may also be too much emphasis on the flexibility of CWSS in terms of "Not Applicable" values, as well as the use of quantified values. We might de-emphasize CWSS' customization capabilities in our documentation, since these capabilities will probably only be useful to a limited set of expert CWSS users.

Experience with CVSS has shown widespread adoption, but most consumers use scores from third-party sources such as NVD without modification. Most CVSS consumers do not use the Temporal or Environmental groups to customize CVSS scores at all. It is highly likely that CWSS will see the same pattern of adoption.

In the past, we presented multiple ways of using CWSS:

  1. in CVSS-style targeted scoring of individual findings in specific software;
  2. in generalized scoring, e.g. for Top-N lists;
  3. in aggregated scoring, to help calculate a single value that captures the overall risk of an application in terms of its unfixed weaknesses.

Giving equal weight to each of these methods muddied the message and made our priorities unclear. The present focus for CWSS is targeted scoring. The use of generalized scoring for Top-N lists is now covered in CWRAF. Finally, for the time being, we will not be actively developing methods for aggregated scoring; however, we expect that there will be high consumer demand for such methods.

Systemic Changes to Factors

CWSS 0.3 scoring used 14 factors; CWSS 0.4 increases this number to 18. It may seem that CWSS 0.4 is moving in the wrong direction with respect to complexity. However, we will be engaging the technical community on CWSS 0.4 extensively, and we did not want to make any early decisions without sufficient community review.

Several factors were recommended for removal, but they are being preserved for the time being until they receive more extensive evaluation. The primary issues are described in the CWSS 0.4 documentation for each individual factor.

The primary reason for the growth in the number of CWSS factors is the development of a "layer" model that is intended to overcome one of the most significant limitations of CVSS. CWSS 0.4 represents "layers" of concern (System, Application, Network, and Enterprise), abbreviated as SANE. There is also a distinction between Required Privileges and Acquired Privileges. This "SANE" model overcomes the system-only bias of CVSS. In conjunction with factors related to technical impact, a more fine-grained description of attack scenarios can be represented using SANE layers, such as "A Guest on a network can gain Administrator privileges to an application."
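As a rough illustration, the following Python sketch shows one possible way to represent such a scenario with SANE layers and Required/Acquired Privileges; the class and field names are hypothetical and are not the CWSS 0.4 encoding.

```python
# Illustrative sketch only: one possible way to represent an attack scenario
# using the SANE layers (System, Application, Network, Enterprise) together
# with Required and Acquired Privileges. The class and field names below are
# invented for illustration and are not the official CWSS 0.4 encoding.
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    SYSTEM = "System"
    APPLICATION = "Application"
    NETWORK = "Network"
    ENTERPRISE = "Enterprise"

@dataclass
class Privilege:
    level: str      # e.g. "Guest", "Regular User", "Administrator"
    layer: Layer    # the SANE layer at which the privilege applies

@dataclass
class AttackScenario:
    required: Privilege   # what the attacker must already have
    acquired: Privilege   # what a successful attack yields

# "A Guest on a network can gain Administrator privileges to an application."
scenario = AttackScenario(
    required=Privilege("Guest", Layer.NETWORK),
    acquired=Privilege("Administrator", Layer.APPLICATION),
)
print(scenario)
```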

CWSS 0.4 adds a "default" value for each factor, so that consumers will have something reasonable if they are unable to do their own customization. This also allows tools or services to start with a reasonable, repeatable value in low-information scenarios. For example, a tool or service might not know the remediation cost of a weakness, or whether externally-applied controls will be used in the operational environment, such as firewalls or ASLR.
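As a rough illustration, the following Python sketch shows how a tool might fall back to documented defaults when a factor cannot be determined; the factor names and default values shown are placeholders, not the defaults defined by CWSS 0.4.

```python
# Illustrative sketch only: how a tool might fall back to per-factor defaults
# when it cannot determine a value. The factor names and default values here
# are hypothetical placeholders, not the defaults defined by CWSS 0.4.
FACTOR_DEFAULTS = {
    "RemediationCost": "Medium",       # a tool rarely knows the cost of a fix
    "ExternalControls": "Unknown",     # e.g. firewalls, ASLR in the deployment
    "TechnicalImpact": "Medium",
}

def resolve_factor(factor, observed=None):
    """Use an observed value when available; otherwise the default."""
    return observed if observed is not None else FACTOR_DEFAULTS[factor]

# A scanner that knows nothing about the operational environment still gets
# a repeatable starting value:
print(resolve_factor("ExternalControls"))           # -> "Unknown"
print(resolve_factor("TechnicalImpact", "High"))    # -> "High"
```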

The metric groups were modified and reorganized into Base Finding, Attack Surface, and Environmental. See the previous discussion of CVSS for the rationale.

Individual Factor Changes

A Business Impact factor was created, based on feedback that the "Business Value Context" linkage with CWE Technical Impacts was not sufficient to capture certain business considerations.

The Technical Impact factor was modified to define Critical/High/Medium/Low values. This keeps CWSS more aligned with the spirit of CVSS. Quantified values are still supported; the original, more complicated calculation of Technical Impacts has been moved to CWRAF.
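As a rough illustration, the following Python sketch shows how a scoring tool might accept either a qualitative Technical Impact label or a quantified value; the numeric weights are placeholders, not the weights defined in the CWSS 0.4 specification.

```python
# Illustrative sketch only: accepting either a qualitative Technical Impact
# label or a quantified value in the 0.0-1.0 range. The numeric weights below
# are hypothetical placeholders, not the weights defined by CWSS 0.4.
IMPACT_WEIGHTS = {"Critical": 1.0, "High": 0.8, "Medium": 0.5, "Low": 0.2}

def technical_impact_weight(value):
    """Map a label to a weight, or pass a quantified value through."""
    if isinstance(value, str):
        return IMPACT_WEIGHTS[value]
    if 0.0 <= value <= 1.0:
        return float(value)
    raise ValueError("quantified impact must be between 0.0 and 1.0")

print(technical_impact_weight("High"))   # -> 0.8
print(technical_impact_weight(0.65))     # -> 0.65
```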

Formula Changes

The CWSS 0.3 scoring formula had significant limitations because it multiplied all factors together, and every factor had a maximum weight of 1.0. As a result, one or two factors could significantly reduce the CWSS score, and the results were not intuitive. The distribution of scores was heavily biased, with most potential scores between 0 and 2 (out of the maximum 100).
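As a rough numerical illustration of this compression effect, multiplying many factors that are each capped at 1.0 drives the product, and therefore the score, toward zero; the factor values below are arbitrary and chosen only to show the effect.

```python
# Illustrative arithmetic only: why multiplying many factors, each capped at
# 1.0, compresses scores toward the bottom of the 0-100 range. The factor
# values are arbitrary, chosen only to demonstrate the effect described above.
factors = [0.9] * 14          # 14 factors, each at a fairly high 0.9 weight
score = 100
for f in factors:
    score *= f
print(round(score, 1))        # ~22.9 even though every factor is near its max

factors = [0.7] * 14          # moderately reduced factors
score = 100
for f in factors:
    score *= f
print(round(score, 2))        # ~0.68: the score collapses toward zero
```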

The CWSS 0.4 formula treats some factors as more important than others, adding some factors and adjusting the weights of others. The rationale is not well-documented as of this writing, but the formula will be the topic of focused investigation by the community for CWSS 0.5.

Development of CWSS 0.5

For CWSS 0.5, expected to be released in a few weeks, we will be interacting closely with the community to obtain detailed review of the factors, their weights and values, and the formula.

Thanks to all the reviewers who have helped us to make CWSS 0.4 (and CWRAF 0.4) a significant improvement over earlier versions.
