CWE-1039: Automated Recognition Mechanism with Inadequate Detection or Handling of Adversarial Input Perturbations

Weakness ID: 1039
Vulnerability Mapping: ALLOWED-WITH-REVIEW (this CWE ID could be used to map to real-world vulnerabilities in limited situations requiring careful review of mapping notes)
Abstraction: Class (a weakness described in a very abstract fashion, typically independent of any specific language or technology. More specific than a Pillar Weakness, but more general than a Base Weakness. Class-level weaknesses typically describe issues in terms of one or two of the following dimensions: behavior, property, and resource.)

+ Description
The product uses an automated mechanism such as machine learning to recognize complex data inputs (e.g. image or audio) as a particular concept or category, but it does not properly detect or handle inputs that have been modified or constructed in a way that causes the mechanism to detect a different, incorrect concept.
+ Extended Description

When techniques such as machine learning are used to automatically classify input streams, and those classifications are used for security-critical decisions, then any mistake in classification can introduce a vulnerability that allows attackers to cause the product to make the wrong security decision. If the automated mechanism is not developed or "trained" with enough input data, then attackers may be able to craft malicious input that intentionally triggers the incorrect classification.

Targeted technologies include, but are not necessarily limited to:

  • automated speech recognition
  • automated image recognition

For example, an attacker might modify road signs or road surface markings to trick autonomous vehicles into misreading the sign/marking and performing a dangerous action.
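
As a rough illustration of how such a perturbation can be constructed, the following minimal Python sketch applies the fast gradient sign method to an image classifier. It assumes a differentiable PyTorch model; the names model, image, and true_label are hypothetical placeholders and are not part of this entry.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.01):
        """Return a copy of `image` nudged so the model is more likely to mislabel it."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Each pixel moves by at most `epsilon` in the direction that increases
        # the loss; the change is usually imperceptible to a person but can
        # flip the predicted category.
        return (image + epsilon * image.grad.sign()).detach()

A recognizer that neither bounds nor detects such perturbations will often assign the modified input to a different category with high confidence.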

+ Common Consequences
Scope: Integrity
Technical Impact: Bypass Protection Mechanism

When the automated recognition is used in a protection mechanism, an attacker may be able to craft inputs that are misinterpreted in a way that grants excess privileges.
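
As a hedged sketch of this consequence, consider an authentication check that trusts a speaker-verification score directly; the model.verify call, the 0.9 threshold, and the function names are assumptions made for illustration, not an API cited by this entry.

    def grant_access(audio_clip, claimed_user, model):
        # Hypothetical ML speaker-verification similarity score in [0, 1].
        score = model.verify(audio_clip, claimed_user)
        # Weakness: a crafted clip (an adversarial example) can push the score
        # above the threshold even though no human listener would accept the
        # match, and nothing here detects or handles such perturbed input.
        return score > 0.9

A more defensive design would treat the recognition output as one signal among several, for example by requiring a second factor or flagging low-margin decisions for review.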
+ Relationships
+ Relevant to the view "Research Concepts" (CWE-1000)
Nature | Type | ID | Name
ChildOf | Pillar | 697 | Incorrect Comparison
ChildOf | Pillar | 693 | Protection Mechanism Failure

(A Pillar is the most abstract type of weakness and represents a theme for all class/base/variant weaknesses related to it.)
+ Modes Of Introduction
Phase: Architecture and Design
Note: This issue can be introduced into the automated algorithm itself.
+ Applicable Platforms

Languages

Class: Not Language-Specific (Undetermined Prevalence)

Technologies

AI/ML (Undetermined Prevalence)

+ Weakness Ordinalities
Ordinality: Primary (where the weakness exists independent of other weaknesses)
Description: This weakness does not depend on other weaknesses and is the result of choices made during optimization.
+ Memberships
Nature | Type | ID | Name
MemberOf | Category | 1413 | Comprehensive Categorization: Protection Mechanism Failure

(A Category is a CWE entry that contains a set of other entries that share a common characteristic.)
+ Vulnerability Mapping Notes

Usage: ALLOWED-WITH-REVIEW

(this CWE ID could be used to map to real-world vulnerabilities in limited situations requiring careful review)

Reason: Abstraction

Rationale:

This CWE entry is a Class and might have Base-level children that would be more appropriate.

Comments:

Examine children of this entry to see if there is a better fit.
+ Notes

Relationship

Further investigation is needed to determine if better relationships exist or if additional organizational entries need to be created. For example, this issue might be better related to "recognition of input as an incorrect type," which might place it as a sibling of CWE-704 (incorrect type conversion).
+ References
[REF-16] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus. "Intriguing properties of neural networks". 2014-02-19. <https://arxiv.org/abs/1312.6199>.
[REF-17] OpenAI. "Attacking Machine Learning with Adversarial Examples". 2017-02-24. <https://openai.com/research/attacking-machine-learning-with-adversarial-examples>. URL validated: 2023-04-07.
[REF-15] James Vincent. "Magic AI: These are the Optical Illusions that Trick, Fool, and Flummox Computers". The Verge. 2017-04-12. <https://www.theverge.com/2017/4/12/15271874/ai-adversarial-images-fooling-attacks-artificial-intelligence>.
[REF-13] Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang and Carl A. Gunter. "CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition". 2018-01-24. <https://arxiv.org/pdf/1801.08535.pdf>.
[REF-14] Nicholas Carlini and David Wagner. "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text". 2018-01-05. <https://arxiv.org/abs/1801.01944>.
+ Content History
+ Submissions
Submission Date | Submitter | Organization
2018-03-12 (CWE 3.1, 2018-03-29) | CWE Content Team | MITRE
+ Modifications
Modification Date | Modifier | Organization | Summary
2019-06-20 | CWE Content Team | MITRE | updated References
2020-02-24 | CWE Content Team | MITRE | updated Relationships
2023-04-27 | CWE Content Team | MITRE | updated References, Relationships
2023-06-29 | CWE Content Team | MITRE | updated Mapping_Notes
2024-07-16 (CWE 4.15, 2024-07-16) | CWE Content Team | MITRE | updated Applicable_Platforms