CWE-1434: Insecure Setting of Generative AI/ML Model Inference Parameters
The product has a component that relies on a generative AI/ML model configured with inference parameters that produce an unacceptably high rate of erroneous or unexpected outputs.
Generative AI/ML models, such as those used for text generation, image synthesis, and other creative tasks, rely on inference parameters that control model behavior, such as temperature, Top P, and Top K. These parameters affect the model's internal decision-making processes, learning rate, and probability distributions. Incorrect settings can lead to unusual behavior such as text "hallucinations," unrealistic images, or failure to converge during training. The impact of such misconfigurations can compromise the integrity of the application. If the results are used in security-critical operations or decisions, then this could violate the intended security policy, i.e., introduce a vulnerability.
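To make the effect of these parameters concrete, the following minimal sketch (not part of the CWE entry) shows how temperature, Top-K, and Top-P sampling reshape a model's next-token probability distribution; the function name and the use of NumPy are illustrative assumptions rather than any particular vendor's API. Higher temperature flattens the distribution, so low-probability (and potentially erroneous) tokens are sampled more often. (informative)
Example Language: Python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    # Hypothetical helper for illustration; real inference stacks expose these
    # parameters through their own configuration or API objects.
    rng = rng or np.random.default_rng()

    # Temperature scaling: divide logits by the temperature before softmax.
    # Higher values flatten the distribution; lower values sharpen it.
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # Top-K: keep only the K most probable tokens.
    if top_k is not None:
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)

    # Top-P (nucleus): keep the smallest set of tokens whose cumulative
    # probability reaches p.
    if top_p is not None:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cumulative, top_p) + 1]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        probs = filtered

    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# With temperature=1.5 the tail of the distribution (rare or invented tokens)
# is sampled noticeably more often than with temperature=0.2.
logits = [4.0, 2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=1.5, top_p=0.95))
print(sample_next_token(logits, temperature=0.2, top_k=2))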
Example 1 Assume the product offers an LLM-based AI coding assistant to help users write code as part of an Integrated Development Environment (IDE). Assume the model has been trained on real-world code and behaves normally under its default settings. Suppose the default temperature is 1, with a range of possible temperature values from 0 (most deterministic) to 2. Consider the following configuration. (bad code)
Example Language: JSON
{
"model": "my-coding-model",
"context_window": 8192,
"max_output_tokens": 4096,
"temperature": 1.5,
...
}
The problem is that the configuration contains a temperature hyperparameter that is higher than the default. This significantly increases the likelihood that the LLM will suggest a package that did not exist at training time, a behavior sometimes referred to as "package hallucination." Note that other problematic behaviors could also arise from a higher temperature, not just package hallucination. An adversary could anticipate which package names might be generated and create a malicious package under one of those names; for example, it has been observed that the same LLM might hallucinate the same package regularly. Any code generated by the LLM that references the malicious package would, when run by the user, download and execute it. This is similar to typosquatting. The risk could be reduced by lowering the temperature, which reduces unpredictable outputs and keeps suggestions more in line with the training data. If the temperature is set too low, some of the power of the model is lost, and it may be less capable of producing solutions for rarely-encountered problems that are not reflected in the training data. However, if the temperature is not set low enough, the risk of hallucinating package names may still be too high. Unfortunately, the "best" temperature cannot be determined a priori, and sufficient empirical testing is needed. (good code)
Example Language: JSON
{
...
"temperature": 0.2,
...
}
In addition to more restrictive temperature settings, consider adding guardrails that independently verify any referenced package to ensure that it exists, is not obsolete, and comes from a trusted party. Note that reducing the temperature does not entirely eliminate the risk of package hallucination: even with very low temperatures or other conservative settings, there is still a small chance that a non-existent package name will be generated.
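As one illustration of such a guardrail, the following Python sketch (an assumption, not text from the CWE entry) checks an LLM-suggested dependency name against the public PyPI JSON endpoint before the suggestion is accepted; the function name and accept/reject policy are hypothetical, and a production check would also consider package age, maintainer reputation, and known-malicious lists. (informative)
Example Language: Python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str, timeout: float = 5.0) -> bool:
    # Ask the package index whether the project name is registered at all.
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True  # The index knows this project name.
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # No such project: possibly a hallucinated name.
        raise  # Other failures should not be silently treated as "missing".

# Reject generated code that references packages the guardrail cannot verify.
for suggested in ("requests", "definitely-not-a-real-package-xyz"):
    verdict = "accept" if package_exists_on_pypi(suggested) else "reject"
    print(f"{suggested}: {verdict}")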
Research Gap
This weakness might be under-reported as of CWE 4.18, since there are no clear observed examples in CVE. However, inference parameters may be the root cause of, or important contributing factors to, various vulnerabilities, while the vulnerability reports concentrate more on the negative impact (e.g., code execution) or on the weaknesses that the insecure settings contribute to. Alternatively, dynamic techniques might not reveal the root cause if the researcher does not have access to the underlying source code and environment.