CWE VIEW: Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List
CWE entries in this view are listed in
the 2025 CWE Most Important Hardware Weaknesses List, as
determined by the Hardware CWE Special Interest Group (HW
CWE SIG). The 2025 MIHW aims to drive awareness of
critical hardware weaknesses and provide the cybersecurity
community with practical guidance to prevent security
issues at the source. By combining advanced data analysis
with expert consensus, the list helps organizations
prioritize mitigations, strengthen design practices, and
make informed decisions throughout the hardware
lifecycle.
The following graph shows the tree-like relationships between
weaknesses that exist at different levels of abstraction. At the highest level, categories
and pillars exist to group weaknesses. Categories (which are not technically weaknesses) are
special CWE entries used to group weaknesses that share a common characteristic. Pillars are
weaknesses that are described in the most abstract fashion. Below these top-level entries
are weaknesses at varying levels of abstraction. Classes are still very abstract, typically
independent of any specific language or technology. Base-level weaknesses describe a more
specific type of weakness. A variant is a weakness that is described at a
very low level of detail, typically limited to a specific language or technology. A chain is
a set of weaknesses that must be reachable consecutively in order to produce an exploitable
vulnerability, while a composite is a set of weaknesses that must all be present
simultaneously in order to produce an exploitable vulnerability.
Show Details:
1432 - Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
226
(Sensitive Information in Resource Not Removed Before Reuse)
The product releases a resource such as memory or a file so that it can be made available for reuse, but it does not clear or "zeroize" the information contained in the resource before the product performs a critical state transition or makes the resource available for reuse by other entities.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1189
(Improper Isolation of Shared Resources on System-on-a-Chip (SoC))
The System-On-a-Chip (SoC) does not properly isolate shared resources between trusted and untrusted agents.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1191
(On-Chip Debug and Test Interface With Improper Access Control)
The chip does not implement or does not correctly perform access control to check whether users are authorized to access internal registers and test modes through the physical debug/test interface.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1234
(Hardware Internal or Debug Modes Allow Override of Locks)
System configuration protection may be bypassed during debug mode.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1247
(Improper Protection Against Voltage and Clock Glitches)
The device does not contain or contains incorrectly implemented circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1256
(Improper Restriction of Software Interfaces to Hardware Features)
The product provides software-controllable
device functionality for capabilities such as power and
clock management, but it does not properly limit
functionality that can lead to modification of
hardware memory or register bits, or the ability to
observe physical side channels.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1260
(Improper Handling of Overlap Between Protected Memory Ranges)
The product allows address regions to overlap, which can result in the bypassing of intended memory protection.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1262
(Improper Access Control for Register Interface)
The product uses memory-mapped I/O registers that act as an interface to hardware functionality from software, but there is improper access control to those registers.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1300
(Improper Protection of Physical Side Channels)
The device does not contain sufficient protection
mechanisms to prevent physical side channels from exposing
sensitive information due to patterns in physically observable
phenomena such as variations in power consumption,
electromagnetic emissions (EME), or acoustic emissions.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1421
(Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution)
A processor event may allow transient operations to access
architecturally restricted data (for example, in another address
space) in a shared microarchitectural structure (for example, a CPU
cache), potentially exposing the data over a covert channel.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1423
(Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution)
Shared microarchitectural predictor state may allow code to influence
transient execution across a hardware boundary, potentially exposing
data that is accessible beyond the boundary over a covert channel.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1433
(2025 MIHW Supplement: Expert Insights)
Weaknesses in this category were not included in the
2025 Most Important Hardware Weaknesses (MIHW) because they
did not have sufficient weakness data to support their
inclusion. However, they stand out as expert-driven
selections. Each of these weaknesses received high scores
from Subject Matter Experts, reflecting strong consensus
among those with deep domain knowledge.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1433
(2025 MIHW Supplement: Expert Insights) >
1231
(Improper Prevention of Lock Bit Modification)
The product uses a trusted lock bit for restricting access to registers, address regions, or other resources, but the product does not prevent the value of the lock bit from being modified after it has been set.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1433
(2025 MIHW Supplement: Expert Insights) >
1233
(Security-Sensitive Hardware Controls with Missing Lock Bit Protection)
The product uses a register lock bit protection mechanism, but it does not ensure that the lock bit prevents modification of system registers or controls that perform changes to important hardware system configuration.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1433
(2025 MIHW Supplement: Expert Insights) >
1244
(Internal Asset Exposed to Unsafe Debug Access Level or State)
The product uses physical debug or test
interfaces with support for multiple access levels, but it
assigns the wrong debug access level to an internal asset,
providing unintended access to the asset from untrusted debug
agents.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1433
(2025 MIHW Supplement: Expert Insights) >
1272
(Sensitive Information Uncleared Before Debug/Power State Transition)
The product performs a power or debug state transition, but it does not clear sensitive information that should no longer be accessible due to changes to information access restrictions.
1432
(Weaknesses in the 2025 CWE Most Important Hardware Weaknesses List) >
1433
(2025 MIHW Supplement: Expert Insights) >
1431
(Driving Intermediate Cryptographic State/Results to Hardware Module Outputs)
The product uses a hardware module implementing a cryptographic
algorithm that writes sensitive information about the intermediate
state or results of its cryptographic operations via one of its output
wires (typically the output port containing the final result).
View Components
CWE-1431: Driving Intermediate Cryptographic State/Results to Hardware Module Outputs
The product uses a hardware module implementing a cryptographic
algorithm that writes sensitive information about the intermediate
state or results of its cryptographic operations via one of its output
wires (typically the output port containing the final result).
Example 1 The following SystemVerilog code is a crypto module that takes input data and encrypts it by processing the data through multiple encryption rounds. Note: this example is derived from [REF-1469]. (bad code)
Example Language: Verilog
01 | module crypto_core_with_leakage
02 | (
03 |   input clk,
04 |   input rst,
05 |   input [127:0] data_i,
06 |   output [127:0] data_o,
07 |   output valid
08 | );
09 |
10 |   localparam int total_rounds = 10;
11 |   logic [3:0] round_id_q;
12 |   logic [127:0] data_state_q, data_state_d;
13 |   logic [127:0] key_state_q, key_state_d;
14 |
15 |   crypto_algo_round u_algo_round (
16 |     .clk (clk),
17 |     .rst (rst),
18 |     .round_i (round_id_q ),
19 |     .key_i (key_state_q ),
20 |     .data_i (data_state_q),
21 |     .key_o (key_state_d ),
22 |     .data_o (data_state_d)
23 |   );
24 |
25 |   always @(posedge clk) begin
26 |     if (rst) begin
27 |       data_state_q <= 0;
28 |       key_state_q <= 0;
29 |       round_id_q <= 0;
30 |     end
31 |     else begin
32 |       case (round_id_q)
33 |         total_rounds: begin
34 |           data_state_q <= 0;
35 |           key_state_q <= 0;
36 |           round_id_q <= 0;
37 |         end
38 |
39 |         default: begin
40 |           data_state_q <= data_state_d;
41 |           key_state_q <= key_state_d;
42 |           round_id_q <= round_id_q + 1;
43 |         end
44 |       endcase
45 |     end
46 |   end
47 |
48 |   assign valid = (round_id_q == total_rounds) ? 1'b1 : 1'b0;
49 |
50 |   assign data_o = data_state_q;
51 |
52 | endmodule

In line 50 above, data_state_q is assigned to data_o. Since data_state_q contains intermediate state/results, this allows an attacker to obtain these results through data_o.
In line 50 of the fixed logic below, while "data_state_q" does not contain the final result, a "sanitizing" mechanism drives a safe default value (i.e., 0) to "data_o" instead of the value of "data_state_q". In doing so, the mechanism prevents the exposure of intermediate state/results which could be used to break soundness of the cryptographic operation being performed. A real-world example of this weakness and mitigation can be seen in a pull request that was submitted to the OpenTitan Github repository [REF-1469]. (good code)
Example Language: Verilog
01 | module crypto_core_without_leakage
02 | (
03 |   input clk,
04 |   input rst,
05 |   input [127:0] data_i,
06 |   output [127:0] data_o,
07 |   output valid
08 | );
09 |
10 |   localparam int total_rounds = 10;
11 |   logic [3:0] round_id_q;
12 |   logic [127:0] data_state_q, data_state_d;
13 |   logic [127:0] key_state_q, key_state_d;
14 |
15 |   crypto_algo_round u_algo_round (
16 |     .clk (clk),
17 |     .rst (rst),
18 |     .round_i (round_id_q ),
19 |     .key_i (key_state_q ),
20 |     .data_i (data_state_q),
21 |     .key_o (key_state_d ),
22 |     .data_o (data_state_d)
23 |   );
24 |
25 |   always @(posedge clk) begin
26 |     if (rst) begin
27 |       data_state_q <= 0;
28 |       key_state_q <= 0;
29 |       round_id_q <= 0;
30 |     end
31 |     else begin
32 |       case (round_id_q)
33 |         total_rounds: begin
34 |           data_state_q <= 0;
35 |           key_state_q <= 0;
36 |           round_id_q <= 0;
37 |         end
38 |
39 |         default: begin
40 |           data_state_q <= data_state_d;
41 |           key_state_q <= key_state_d;
42 |           round_id_q <= round_id_q + 1;
43 |         end
44 |       endcase
45 |     end
46 |   end
47 |
48 |   assign valid = (round_id_q == total_rounds) ? 1'b1 : 1'b0;
49 |
50 |   assign data_o = (valid) ? data_state_q : 0;
51 |
52 | endmodule
CWE-1423: Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution
Shared microarchitectural predictor state may allow code to influence
transient execution across a hardware boundary, potentially exposing
data that is accessible beyond the boundary over a covert channel.
Many commodity processors have Instruction Set Architecture (ISA) features that protect software components from one another. These features can include memory segmentation, virtual memory, privilege rings, trusted execution environments, and virtual machines, among others. For example, virtual memory provides each process with its own address space, which prevents processes from accessing each other's private data. Many of these features can be used to form hardware-enforced security boundaries between software components.

When separate software components (for example, two processes) share microarchitectural predictor state across a hardware boundary, code in one component may be able to influence microarchitectural predictor behavior in another component. If the predictor can cause transient execution, the shared predictor state may allow an attacker to influence transient execution in a victim, and in a manner that could allow the attacker to infer private data from the victim by monitoring observable discrepancies (CWE-203) in a covert channel [REF-1400].

Predictor state may be shared when the processor transitions from one component to another (for example, when a process makes a system call to enter the kernel). Many commodity processors have features which prevent microarchitectural predictions that occur before a boundary from influencing predictions that occur after the boundary.

Predictor state may also be shared between hardware threads, for example, sibling hardware threads on a processor that supports simultaneous multithreading (SMT). This sharing may be benign if the hardware threads are simultaneously executing in the same software component, or it could expose a weakness if one sibling is a malicious software component, and the other sibling is a victim software component. Processors that share microarchitectural predictors between hardware threads may have features which prevent microarchitectural predictions that occur on one hardware thread from influencing predictions that occur on another hardware thread.

Features that restrict predictor state sharing across transitions or between hardware threads may be always-on, on by default, or may require opt-in from software.
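The last two paragraphs describe restricting predictor influence across contexts. As a minimal illustration of that idea, the following SystemVerilog sketch (hypothetical module and signal names; real predictors are far more elaborate) tags a branch target buffer (BTB) entry with the hardware context that trained it, so a prediction is used only when the current context matches.

(good code)
Example Language: Verilog
// Sketch: context-tagged BTB lookup. An entry trained in one software
// context (entry_ctx) never steers prediction in another context,
// restricting cross-context predictor influence.
module btb_ctx_tagged #(
  parameter TAG_W = 20,
  parameter CTX_W = 8
)(
  input  logic             lookup_valid,
  input  logic [TAG_W-1:0] lookup_tag,     // tag derived from the branch PC
  input  logic [CTX_W-1:0] cur_ctx,        // current hardware context ID
  input  logic             entry_valid,    // one stored entry, for illustration
  input  logic [TAG_W-1:0] entry_tag,
  input  logic [CTX_W-1:0] entry_ctx,      // context that trained this entry
  input  logic [63:0]      entry_target,
  output logic             predict_valid,
  output logic [63:0]      predict_target
);
  // Predict only on a full match of both the PC tag and the training
  // context; a tag-only match from a different context yields no prediction.
  always_comb begin
    predict_valid  = lookup_valid && entry_valid
                     && (entry_tag == lookup_tag)
                     && (entry_ctx == cur_ctx);
    predict_target = predict_valid ? entry_target : '0;
  end
endmodule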
Example 1 Branch Target Injection (BTI) is a vulnerability that can allow an SMT hardware thread to maliciously train the indirect branch predictor state that is shared with its sibling hardware thread. A cross-thread BTI attack requires the attacker to find a vulnerable code sequence within the victim software. For example, the authors of [REF-1415] identified the following code sequence in the Windows library ntdll.dll: (bad code)
Example Language: x86 Assembly
adc edi,dword ptr [ebx+edx+13BE13BDh]
adc dl,byte ptr [edi]
...
indirect_branch_site:
  jmp dword ptr [rsi]  # at this point attacker knows edx, controls edi and ebx
To successfully exploit this code sequence to disclose the victim's private data, the attacker must also be able to find an indirect branch site within the victim, where the attacker controls the values in edi and ebx, and the attacker knows the value in edx as shown above at the indirect branch site. In a proof-of-concept cross-thread BTI attack, the attacker trains the shared indirect branch predictor from the sibling hardware thread so that the victim's indirect branch transiently executes this code sequence, leaking memory at an attacker-controlled address through a cache covert channel.
Example 2 BTI can also allow software in one execution context to maliciously train branch predictor entries that can be used in another context. For example, on some processors user-mode software may be able to train predictor entries that can also be used after transitioning into kernel mode, such as after invoking a system call. This vulnerability does not necessarily require SMT and may instead be performed in synchronous steps, though it does require the attacker to find an exploitable code sequence in the victim's code, for example, in the kernel.

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1421: Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution
A processor event may allow transient operations to access
architecturally restricted data (for example, in another address
space) in a shared microarchitectural structure (for example, a CPU
cache), potentially exposing the data over a covert channel.
Many commodity processors have Instruction Set Architecture (ISA) features that protect software components from one another. These features can include memory segmentation, virtual memory, privilege rings, trusted execution environments, and virtual machines, among others. For example, virtual memory provides each process with its own address space, which prevents processes from accessing each other's private data. Many of these features can be used to form hardware-enforced security boundaries between software components.

Many commodity processors also share microarchitectural resources that cache (temporarily store) data, which may be confidential. These resources may be shared across processor contexts, including across SMT threads, privilege rings, or others.

When transient operations allow access to ISA-protected data in a shared microarchitectural resource, this might violate users' expectations of the ISA feature that is bypassed. For example, if transient operations can access a victim's private data in a shared microarchitectural resource, then the operations' microarchitectural side effects may correspond to the accessed data. If an attacker can trigger these transient operations and observe their side effects through a covert channel [REF-1400], then the attacker may be able to infer the victim's private data. Private data could include sensitive program data, OS/VMM data, page table data (such as memory addresses), system configuration data (see Demonstrative Example 3), or any other data that the attacker does not have the required privileges to access.
Example 1 Some processors may perform access control checks in parallel with memory read/write operations. For example, when a user-mode program attempts to read data from memory, the processor may also need to check whether the memory address is mapped into user space or kernel space. If the processor performs the access concurrently with the check, then the access may be able to transiently read kernel data before the check completes. This race condition is demonstrated in the following code snippet from [REF-1408], with additional annotations: (bad code)
Example Language: x86 Assembly
1 ; rcx = kernel address, rbx = probe array
2 xor rax, rax               # set rax to 0
3 retry:
4 mov al, byte [rcx]         # attempt to read kernel memory
5 shl rax, 0xc               # multiply result by page size (4KB)
6 jz retry                   # if the result is zero, try again
7 mov rbx, qword [rbx + rax] # transmit result over a cache covert channel

Vulnerable processors may return kernel data from a shared microarchitectural resource in line 4, for example, from the processor's L1 data cache. Since this vulnerability involves a race condition, the mov in line 4 may not always return kernel data (that is, whenever the check "wins" the race), in which case this demonstration code re-attempts the access in line 6. The accessed data is multiplied by 4KB, a common page size, to make it easier to observe via a cache covert channel after the transmission in line 7. The use of cache covert channels to observe the side effects of transient execution has been described in [REF-1408].

Example 2 Many commodity processors share microarchitectural fill buffers between sibling hardware threads on simultaneous multithreaded (SMT) processors. Fill buffers can serve as temporary storage for data that passes to and from the processor's caches. Microarchitectural Fill Buffer Data Sampling (MFBDS) is a vulnerability that can allow a hardware thread to access its sibling's private data in a shared fill buffer. The access may be prohibited by the processor's ISA, but MFBDS can allow the access to occur during transient execution, in particular during a faulting operation or an operation that triggers a microcode assist. More information on MFBDS can be found in [REF-1405] and [REF-1409].

Example 3 Some processors may allow access to system registers (for example, system coprocessor registers or model-specific registers) during transient execution. This scenario is depicted in the code snippet below. Under ordinary operating circumstances, code in exception level 0 (EL0) is not permitted to access registers that are restricted to EL1, such as TTBR0_EL1. However, on some processors an earlier mis-prediction can cause the MRS instruction to transiently read the value in an EL1 register. In this example, a conditional branch (line 2) can be mis-predicted as "not taken" while waiting for a slow load (line 1). This allows MRS (line 3) to transiently read the value in the TTBR0_EL1 register. The subsequent memory access (line 6) can allow the restricted register's value to become observable, for example, over a cache covert channel. Code snippet is from [REF-1410]. See also [REF-1411]. (bad code)
Example Language: ARM Assembly
1 LDR X1, [X2]       ; arranged to miss in the cache
2 CBZ X1, over       ; This will be taken
3 MRS X3, TTBR0_EL1;
4 LSL X3, X3, #imm
5 AND X3, X3, #0xFC0
6 LDR X5, [X6,X3]    ; X6 is an EL0 base address
7 over

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1234: Hardware Internal or Debug Modes Allow Override of Locks
Device configuration controls are commonly programmed after a device power reset by a trusted firmware or software module (e.g., BIOS/bootloader) and then locked from any further modification. This is commonly implemented using a trusted lock bit, which, when set, disables writes to a protected set of registers or address regions. The lock protection is intended to prevent modification of certain system configuration (e.g., memory/memory protection unit configuration). If the hardware design supports debug features or internal modes/system states, these may permit modification of the lock protection, allowing access to and modification of the protected configuration information.
Example 1
For example, consider the Locked_register_example module below. This register module supports a lock mode that blocks any writes after Lock is set to 1.
(bad code)
Example Language: Verilog
module Locked_register_example
(
  input [15:0] Data_in,
  input Clk,
  input resetn,
  input write,
  input Lock,
  input scan_mode,
  input debug_unlocked,
  output reg [15:0] Data_out
);

reg lock_status;

always @(posedge Clk or negedge resetn)
  if (~resetn) // Register is reset resetn
    begin
      lock_status <= 1'b0;
    end
  else if (Lock)
    begin
      lock_status <= 1'b1;
    end
  else if (~Lock)
    begin
      lock_status <= lock_status;
    end

always @(posedge Clk or negedge resetn)
  if (~resetn) // Register is reset resetn
    begin
      Data_out <= 16'h0000;
    end
  else if (write & (~lock_status | scan_mode | debug_unlocked)) // Register protected by Lock bit input, overrides supported for scan_mode & debug_unlocked
    begin
      Data_out <= Data_in;
    end
  else if (~write)
    begin
      Data_out <= Data_out;
    end
endmodule

If either the scan_mode or the debug_unlocked modes can be triggered by software, then the lock protection may be bypassed.

(good code)
Either remove the debug and scan mode overrides or protect enabling of these modes so that only trusted and authorized users may enable these modes.
Example 2 The following example code [REF-1375] is taken from the register lock security peripheral of the HACK@DAC'21 buggy OpenPiton SoC. It demonstrates how to lock read or write access to security-critical hardware registers (e.g., crypto keys, system integrity code, etc.). The configuration to lock all the sensitive registers in the SoC is managed through the reglk_mem registers. These reglk_mem registers are reset when the hardware powers up and configured during boot up. Malicious users, even with kernel-level software privilege, do not get access to the sensitive contents that are locked down. Hence, the security of the entire system can potentially be compromised if the register lock configurations are corrupted or if the register locks are disabled. (bad code)
Example Language: Verilog
...
always @(posedge clk_i)
begin
  ...
  if(~(rst_ni && ~jtag_unlock && ~rst_9))
  begin
    for (j=0; j < 6; j=j+1) begin
      reglk_mem[j] <= 'h0;
    end
  end
...

The example code [REF-1375] illustrates an instance of a vulnerable implementation of register locks in the SoC. In this flawed implementation [REF-1375], the reglk_mem registers are also being reset when the system enters debug mode (indicated by the jtag_unlock signal). Consequently, users can simply put the processor in debug mode to access sensitive contents that are supposed to be protected by the register lock feature. This can be mitigated by excluding debug mode signals from the reset logic of security-critical register locks as demonstrated in the following code snippet [REF-1376]. (good code)
Example Language: Verilog
...
always @(posedge clk_i)
begin
  ...
  if(~(rst_ni && ~rst_9))
  begin
    for (j=0; j < 6; j=j+1) begin
      reglk_mem[j] <= 'h0;
    end
  end
...
CWE-1262: Improper Access Control for Register Interface
The product uses memory-mapped I/O registers that act as an interface to hardware functionality from software, but there is improper access control to those registers.
Software commonly accesses peripherals in a System-on-Chip (SoC) or other device through a memory-mapped register interface. Malicious software could tamper with any security-critical hardware data that is accessible directly or indirectly through the register interface, which could lead to a loss of confidentiality and integrity.
Example 1 The register interface provides software access to hardware functionality. This functionality is an attack surface. This attack surface may be used to run untrusted code on the system through the register interface. As an example, cryptographic accelerators require a mechanism for software to select modes of operation and to provide plaintext or ciphertext data to be encrypted or decrypted as well as other functions. This functionality is commonly provided through registers. (bad code)
Cryptographic key material stored in registers inside the cryptographic accelerator can be accessed by software.
(good code)
Key material stored in registers should never be accessible to software. Even if software can provide a key, all read-back paths to software should be disabled.
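The guidance above, that all read-back paths should be disabled, can be sketched in RTL. The following SystemVerilog fragment (hypothetical module and signal names, not any specific accelerator's interface) lets software load a key while hard-wiring the software-visible read path to zero.

(good code)
Example Language: Verilog
// Sketch: write-only key register. Software can load the key, but the
// register interface's read path never exposes it; only the hardware
// datapath to the crypto core can observe the key value.
module key_reg_write_only (
  input  logic         clk,
  input  logic         rst_n,
  input  logic         wr_en,       // software write strobe
  input  logic [127:0] wr_data,
  output logic [127:0] rd_data,     // value returned on software reads
  output logic [127:0] key_to_core  // hardware-only path to the crypto core
);
  logic [127:0] key_q;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)     key_q <= '0;
    else if (wr_en) key_q <= wr_data;
  end

  assign key_to_core = key_q;
  assign rd_data     = '0;  // all software read-back paths disabled
endmodule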
Example 2 The example code is taken from the Control/Status Register (CSR) module inside the processor core of the HACK@DAC'19 buggy CVA6 SoC [REF-1340]. In the RISC-V ISA [REF-1341], the CSR file contains different sets of registers with different privilege levels, e.g., user mode (U), supervisor mode (S), hypervisor mode (H), machine mode (M), and debug mode (D), with different read-write policies: read-only (RO) and read-write (RW). For example, registers of machine mode, which is the highest privilege mode in a RISC-V system, should not be accessible in user, supervisor, or hypervisor modes. (bad code)
Example Language: Verilog
if (csr_we || csr_read) begin
  if ((riscv::priv_lvl_t'(priv_lvl_o & csr_addr.csr_decode.priv_lvl) != csr_addr.csr_decode.priv_lvl) && !(csr_addr.address==riscv::CSR_MEPC)) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
  // check access to debug mode only CSRs
  if (csr_addr_i[11:4] == 8'h7b && !debug_mode_q) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
end

The vulnerable example code allows the machine exception program counter (MEPC) register to be accessed from a user mode program by excluding the MEPC from the access control check. MEPC as per the RISC-V specification can be only written or read by machine mode code. Thus, the attacker in the user mode can run code in machine mode privilege (privilege escalation). To mitigate the issue, fix the privilege check so that it throws an Illegal Instruction Exception for user mode accesses to the MEPC register. [REF-1345] (good code)
Example Language: Verilog
if (csr_we || csr_read) begin
  if ((riscv::priv_lvl_t'(priv_lvl_o & csr_addr.csr_decode.priv_lvl) != csr_addr.csr_decode.priv_lvl)) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
  // check access to debug mode only CSRs
  if (csr_addr_i[11:4] == 8'h7b && !debug_mode_q) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
end

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1260: Improper Handling of Overlap Between Protected Memory Ranges
The product allows address regions to overlap, which can result in the bypassing of intended memory protection.
Isolated memory regions and access control (read/write) policies are used by hardware to protect privileged software. Software components are often allowed to change or remap memory region definitions in order to enable flexible and dynamically changeable memory management by system software.

If a software component running at lower privilege can program a memory address region to overlap with other memory regions used by software running at higher privilege, privilege escalation may be available to attackers. The memory protection unit (MPU) logic can incorrectly handle such an address overlap and allow the lower-privilege software to read or write into the protected memory region, resulting in a privilege escalation attack. An address overlap weakness can also be used to launch a denial-of-service attack on the higher-privilege software memory regions.
Example 1
For example, consider a design with a 16-bit address that has two software privilege levels: Privileged_SW and Non_privileged_SW. To isolate the system memory regions accessible by these two privilege levels, the design supports three memory regions: Region_0, Region_1, and Region_2. Each region is defined by two 32-bit registers: its range and its access policy.
Certain bits of the access policy are defined symbolically as follows:
For any requests from software, an address-protection filter checks the address range and access policies for each of the three regions, and only allows software access if all three filters allow access. Consider the following goals for access control as intended by the designer:
The intention is that Non_privileged_SW cannot modify the memory regions and policies defined by Privileged_SW in Region_0 and Region_1, and thus cannot read or write the memory regions that Privileged_SW is using. (bad code)
Non_privileged_SW can program the Address_range register for Region_2 so that its address overlaps with the ranges defined by Region_0 or Region_1. Using this capability, it is possible for Non_privileged_SW to block any memory region from being accessed by Privileged_SW, i.e., Region_0 and Region_1. This design could be improved in several ways. (good code)
Ensure that software accesses to memory regions are only permitted if all three filters permit access. Additionally, the scheme could define a memory region priority to ensure that Region_2 (the memory region defined by Non_privileged_SW) cannot overlap Region_0 or Region_1 (which are used by Privileged_SW).
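One way to enforce the overlap restriction described above is a combinational guard evaluated whenever Non_privileged_SW attempts to reprogram Region_2's range. The sketch below uses hypothetical module and signal names and the standard closed-interval intersection test; it illustrates the check, not the design's actual protection logic.

(good code)
Example Language: Verilog
// Sketch: reject programming of Region_2 if the requested range would
// intersect either privileged region. 16-bit addresses as in the example.
module region2_overlap_guard (
  input  logic [15:0] r0_start, r0_end,          // Region_0 (privileged)
  input  logic [15:0] r1_start, r1_end,          // Region_1 (privileged)
  input  logic [15:0] r2_start_new, r2_end_new,  // requested Region_2 range
  output logic        r2_write_allowed
);
  // Ranges [a_s, a_e] and [b_s, b_e] overlap iff a_s <= b_e and b_s <= a_e.
  logic overlaps_r0, overlaps_r1;
  assign overlaps_r0 = (r2_start_new <= r0_end) && (r0_start <= r2_end_new);
  assign overlaps_r1 = (r2_start_new <= r1_end) && (r1_start <= r2_end_new);
  assign r2_write_allowed = !(overlaps_r0 || overlaps_r1);
endmodule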
Example 2 The example code below is taken from the IOMMU controller module of the HACK@DAC'19 buggy CVA6 SoC [REF-1338]. The static memory map is composed of a set of Memory-Mapped Input/Output (MMIO) regions covering different IP agents within the SoC. Each region is defined by two 64-bit variables representing the base address and size of the memory region (XXXBase and XXXLength). In this example, we have 12 IP agents, and only 4 of them are called out for illustration purposes in the code snippets. Access to the AES IP MMIO region is considered privileged as it provides access to AES secret key, internal states, or decrypted data. (bad code)
Example Language: Verilog
...
localparam logic[63:0] PLICLength = 64'h03FF_FFFF;
localparam logic[63:0] UARTLength = 64'h0011_1000;
localparam logic[63:0] AESLength  = 64'h0000_1000;
localparam logic[63:0] SPILength  = 64'h0080_0000;
...
typedef enum logic [63:0] {
  ...
  PLICBase = 64'h0C00_0000,
  UARTBase = 64'h1000_0000,
  AESBase  = 64'h1010_0000,
  SPIBase  = 64'h2000_0000,
  ...

The vulnerable code allows the overlap between the protected MMIO region of the AES peripheral and the unprotected UART MMIO region. As a result, unprivileged users can access the protected region of the AES IP. In the given vulnerable example, the UART MMIO region starts at address 64'h1000_0000 and ends at address 64'h1011_1000 (UARTBase is 64'h1000_0000, and the size of the region is provided by the UARTLength of 64'h0011_1000). On the other hand, the AES MMIO region starts at address 64'h1010_0000 and ends at address 64'h1010_1000, which implies an overlap between the two peripherals' memory regions. Thus, any user with access to the UART can read or write the AES MMIO region, e.g., the AES secret key. To mitigate this issue, remove the overlapping address regions by decreasing the size of the UART memory region or adjusting memory bases for all the remaining peripherals. [REF-1339] (good code)
Example Language: Verilog
...
localparam logic[63:0] PLICLength = 64'h03FF_FFFF;
localparam logic[63:0] UARTLength = 64'h0000_1000;
localparam logic[63:0] AESLength  = 64'h0000_1000;
localparam logic[63:0] SPILength  = 64'h0080_0000;
...
typedef enum logic [63:0] {
  ...
  PLICBase = 64'h0C00_0000,
  UARTBase = 64'h1000_0000,
  AESBase  = 64'h1010_0000,
  SPIBase  = 64'h2000_0000,
  ...

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1189: Improper Isolation of Shared Resources on System-on-a-Chip (SoC)
The System-On-a-Chip (SoC) does not properly isolate shared resources between trusted and untrusted agents.
A System-On-a-Chip (SoC) has a lot of functionality, but it may have a limited number of pins or pads. A pin can only perform one function at a time. However, it can be configured to perform multiple different functions. This technique is called pin multiplexing. Similarly, several resources on the chip may be shared to multiplex and support different features or functions. When such resources are shared between trusted and untrusted agents, untrusted agents may be able to access the assets intended to be accessed only by the trusted agents.
Example 1 Consider the following SoC design. The Hardware Root of Trust (HRoT) local SRAM is memory mapped in the core{0-N} address space. The HRoT allows or disallows access to private memory ranges, thus allowing the SRAM to function as a mailbox for communication between untrusted and trusted HRoT partitions.

We assume that the threat is from malicious software in the untrusted domain. We assume this software has access to the core{0-N} memory map and can be running at any privilege level on the untrusted cores. The capability of this threat in this example is communication to and from the mailbox region of SRAM modulated by the hrot_iface. To address this threat, information must not enter or exit the shared region of SRAM through hrot_iface when in secure or privileged mode (see the sketch below).

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
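As a minimal sketch of the filtering idea referenced in Example 1 (hypothetical module, parameter, and signal names), requests arriving over the shared interface are allowed through only when they target the mailbox window, keeping the rest of the HRoT-private SRAM isolated.

(good code)
Example Language: Verilog
// Sketch: hrot_iface-style address filter. Untrusted-core requests reach
// the HRoT SRAM only inside the designated mailbox window.
module hrot_iface_filter #(
  parameter logic [31:0] MBOX_BASE = 32'h4000_0000,  // assumed window base
  parameter logic [31:0] MBOX_SIZE = 32'h0000_1000   // assumed window size
)(
  input  logic        req_valid,
  input  logic [31:0] req_addr,
  output logic        req_allowed
);
  assign req_allowed = req_valid
                     && (req_addr >= MBOX_BASE)
                     && (req_addr <  MBOX_BASE + MBOX_SIZE);
endmodule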
CWE-1231: Improper Prevention of Lock Bit Modification
The product uses a trusted lock bit for restricting access to registers, address regions, or other resources, but the product does not prevent the value of the lock bit from being modified after it has been set.
In integrated circuits and hardware intellectual property (IP) cores, device configuration controls are commonly programmed after a device power reset by a trusted firmware or software module (e.g., BIOS/bootloader) and then locked from any further modification. This behavior is commonly implemented using a trusted lock bit. When set, the lock bit disables writes to a protected set of registers or address regions. Design or coding errors in the implementation of the lock bit protection feature may allow the lock bit to be modified or cleared by software after it has been set. Attackers might be able to unlock the system and features that the bit is intended to protect.
Example 1 Consider the example design below for a digital thermal sensor that detects overheating of the silicon and triggers system shutdown. The system critical temperature limit (CRITICAL_TEMP_LIMIT) and thermal sensor calibration (TEMP_SENSOR_CALIB) data have to be programmed by firmware, and then the register needs to be locked (TEMP_SENSOR_LOCK). (bad code)
Example Language: Other
In this example, note that if the system heats to critical temperature, the response of the system is controlled by the TEMP_HW_SHUTDOWN bit [1], which is not lockable. Thus, the intended security property of the critical temperature sensor cannot be fully protected, since software can misconfigure the TEMP_HW_SHUTDOWN register even after the lock bit is set to disable the shutdown response. (good code)
Example Language: Other
To fix this weakness, one could change the TEMP_HW_SHUTDOWN field to be locked by TEMP_SENSOR_LOCK.
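A minimal sketch of that fix is shown below (hypothetical module and signal names): a sticky lock covers both the calibration fields and the shutdown control, and once set it can be cleared only by reset.

(good code)
Example Language: Verilog
// Sketch: write-once (sticky) lock that also covers the hardware
// shutdown control, so software cannot disable the thermal response
// after the configuration is locked.
module temp_sensor_cfg (
  input  logic       clk,
  input  logic       rst_n,
  input  logic       wr_lock,          // software sets the lock
  input  logic       wr_cfg,           // software write to config fields
  input  logic [7:0] critical_temp_in,
  input  logic       hw_shutdown_in,
  output logic [7:0] critical_temp_q,
  output logic       hw_shutdown_q
);
  logic lock_q;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      lock_q          <= 1'b0;
      critical_temp_q <= 8'hFF;
      hw_shutdown_q   <= 1'b1;   // fail-safe default: shutdown enabled
    end else begin
      if (wr_lock) lock_q <= 1'b1;    // sticky: software cannot clear it
      if (wr_cfg && !lock_q) begin    // lock covers hw_shutdown too
        critical_temp_q <= critical_temp_in;
        hw_shutdown_q   <= hw_shutdown_in;
      end
    end
  end
endmodule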
Example 2 The following example code is a snippet from the register lock security peripheral of the HACK@DAC'21 buggy OpenPiton SoC [REF-1350]. Register locks help prevent malicious use of SoC peripherals' registers. The registers that can potentially leak secret data are locked by register locks. In the vulnerable code, the reglk_mem registers are used for locking information. If one of their bits toggles to 1, the corresponding peripheral's registers will be locked. In the context of the HACK@DAC System-on-Chip (SoC), it is pertinent to note the existence of two distinct categories of reset signals. First, there is a global reset signal denoted as "rst_ni," which possesses the capability to simultaneously reset all peripherals to their respective initial states. Second, we have peripheral-specific reset signals, such as "rst_9," which exclusively reset individual peripherals back to their initial states. The administration of these reset signals is the responsibility of the reset controller module. (bad code)
Example Language: Verilog
always @(posedge clk_i)
begin
  if(~(rst_ni && ~jtag_unlock && ~rst_9))
  begin
    for (j=0; j < 6; j=j+1) begin
      reglk_mem[j] <= 'h0;
    end
  end
  ...
end

In the buggy SoC architecture during HACK@DAC'21, a critical issue arises within the reset controller module. Specifically, the reset controller can inadvertently transmit a peripheral reset signal to the register lock within the user privilege domain. This unintentional action can result in the reset of the register locks, potentially exposing private data from all other peripherals, rendering them accessible and readable. To mitigate the issue, remove the extra reset signal rst_9 from the register lock if condition. [REF-1351] (good code)
Example Language: Verilog
always @(posedge clk_i)
begin
  if(~(rst_ni && ~jtag_unlock))
  begin
    for (j=0; j < 6; j=j+1) begin
      reglk_mem[j] <= 'h0;
    end
  end
  ...
end

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1247: Improper Protection Against Voltage and Clock Glitches
The device does not contain or contains incorrectly implemented circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
A device might support features such as secure boot which are supplemented with hardware and firmware support. This involves establishing a chain of trust, starting with an immutable root of trust by checking the signature of the next stage (culminating with the OS and runtime software) against a golden value before transferring control. The intermediate stages typically set up the system in a secure state by configuring several access control settings. Similarly, security logic for exercising a debug or testing interface may be implemented in hardware, firmware, or both.

A device needs to guard against fault attacks such as voltage glitches and clock glitches that an attacker may employ in an attempt to compromise the system.
Example 1 Below is a representative snippet of C code that is part of the secure-boot flow. A signature of the runtime-firmware image is calculated and compared against a golden value. If the signatures match, the bootloader loads runtime firmware. If there is no match, an error halt occurs. If the underlying hardware executing this code does not contain any circuitry or sensors to detect voltage or clock glitches, an attacker might launch a fault-injection attack right when the signature check is happening (at the location marked with the comment), causing a bypass of the signature-checking process. (bad code)
Example Language: C
...
if (signature_matches) // <- Glitch Here
{
  load_runtime_firmware();
}
else
{
  do_not_load_runtime_firmware();
}
...

After bypassing secure boot, an attacker can gain access to system assets to which the attacker should not have access. (good code)
If the underlying hardware detects a voltage or clock glitch, the information can be used to prevent the glitch from being successful.
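One countermeasure consistent with that statement is redundancy: evaluate the security-critical check through independent logic copies and treat any disagreement as evidence of fault injection. The SystemVerilog sketch below uses hypothetical module and signal names; production designs typically pair such logic with analog voltage and clock monitors that are not modeled here.

(good code)
Example Language: Verilog
// Sketch: duplicated signature comparison. A glitch that corrupts one
// copy but not the other causes a mismatch, which forces the fail-safe
// path instead of allowing boot to proceed.
module sig_check_redundant (
  input  logic [255:0] sig_calc_a,   // signature from datapath copy A
  input  logic [255:0] sig_calc_b,   // signature from datapath copy B
  input  logic [255:0] sig_golden,
  output logic         boot_allowed,
  output logic         glitch_alarm
);
  logic match_a, match_b;
  assign match_a = (sig_calc_a == sig_golden);
  assign match_b = (sig_calc_b == sig_golden);

  assign glitch_alarm = (match_a != match_b);   // copies disagree: fault
  assign boot_allowed = match_a && match_b;     // both checks must pass
endmodule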
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1300: Improper Protection of Physical Side Channels
The device does not contain sufficient protection
mechanisms to prevent physical side channels from exposing
sensitive information due to patterns in physically observable
phenomena such as variations in power consumption,
electromagnetic emissions (EME), or acoustic emissions.
An adversary could monitor and measure physical phenomena to detect patterns and make inferences, even if it is not possible to extract the information in the digital domain. Physical side channels have been well-studied for decades in the context of breaking implementations of cryptographic algorithms or other attacks against security features. These side channels may be easily observed by an adversary with physical access to the device, or using a tool that is in close proximity. If the adversary can monitor hardware operation and correlate its data processing with power, EME, and acoustic measurements, the adversary might be able to recover secret keys and data.
Example 1 Consider a device that checks a passcode to unlock the screen. (bad code)
Example Language: Other
As each character of
the PIN number is entered, a correct character
exhibits one current pulse shape while an
incorrect character exhibits a different current
pulse shape.
This behavior creates a side channel: the PIN used to unlock a cell phone should not reveal any characteristics about itself. An attacker could monitor the pulses using an oscilloscope or other method. Once the first character is correctly guessed (based on the oscilloscope readings), they can then move to the next character, which is much more efficient than the brute-force method of guessing every possible sequence of characters. (good code)
Example Language: Other
Rather than comparing
each character to the correct PIN value as it is
entered, the device could accumulate the PIN in a
register, and do the comparison all at once at the
end. Alternatively, the components for the
comparison could be modified so that the current
pulse shape is the same regardless of the
correctness of the entered
character.
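The accumulate-then-compare mitigation can be sketched in RTL as follows (hypothetical module and signal names; a four-digit PIN is assumed). Because no per-character pass/fail decision is made, there is no per-character current pulse for an attacker to observe.

(good code)
Example Language: Verilog
// Sketch: collect the full PIN, then compare it in a single operation.
module pin_check_all_at_once (
  input  logic        clk,
  input  logic        rst_n,
  input  logic        digit_valid,
  input  logic [3:0]  digit_in,
  input  logic [15:0] pin_stored,    // four BCD digits
  output logic        unlocked
);
  logic [15:0] pin_entered_q;
  logic [2:0]  count_q;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      pin_entered_q <= '0;
      count_q       <= '0;
      unlocked      <= 1'b0;
    end else if (digit_valid && count_q < 4) begin
      pin_entered_q <= {pin_entered_q[11:0], digit_in};  // shift digit in
      count_q       <= count_q + 1;
    end else if (count_q == 4) begin
      // one whole-word comparison, independent of which digits are wrong
      unlocked      <= (pin_entered_q == pin_stored);
      pin_entered_q <= '0;
      count_q       <= '0;
    end
  end
endmodule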
Example 2 Consider the device vulnerability CVE-2021-3011, which affects certain microcontrollers [REF-1221]. The Google Titan Security Key is used for two-factor authentication using cryptographic algorithms. The device uses an internal secret key for this purpose and exchanges information based on this key for the authentication. If this internal secret key and the encryption algorithm were known to an adversary, the key function could be duplicated, allowing the adversary to masquerade as the legitimate user. (bad code)
Example Language: Other
The local method of extracting the secret key consists of plugging the key into a USB port and using electromagnetic (EM) sniffing tools and computers.
(good code)
Example Language: Other
Several solutions could have been considered by the manufacturer. For example, the manufacturer could shield the circuitry in the key or add randomized delays, indirect calculations with random values involved, or randomly ordered calculations to make extraction much more difficult. The manufacturer could use a combination of these techniques.
Example 3 The code snippet provided here is part of the modular exponentiation module found in the HACK@DAC'21 Openpiton System-on-Chip (SoC), specifically within the RSA peripheral [REF-1368]. Modular exponentiation, denoted as "a^b mod n," is a crucial operation in the RSA public/private key encryption. In RSA encryption, where 'c' represents ciphertext, 'm' stands for a message, and 'd' corresponds to the private key, the decryption process is carried out using this modular exponentiation as follows: m = c^d mod n, where 'n' is the result of multiplying two large prime numbers. (bad code)
Example Language: Verilog
...
module mod_exp
...
`UPDATE: begin
  if (exponent_reg != 'd0) begin
    ...
    if (exponent_reg[0])
      result_reg <= result_next;
    base_reg <= base_next;
    exponent_reg <= exponent_next;
    state <= `UPDATE;
...
endmodule

The vulnerable code shows a buggy implementation of binary exponentiation where it updates the result register (result_reg) only when the corresponding exponent bit (exponent_reg[0]) is set to 1. However, when this exponent bit is 0, the output register is not updated. This implementation introduces a physical power side-channel vulnerability within the RSA core, which could expose the private exponent to a determined physical attacker. Such exposure of the private exponent could lead to a complete compromise of the private key.

To mitigate this, the developer can minimize the implementation's dependency on conditions, particularly those reliant on secret keys. In situations where branching is unavoidable, developers can implement masking mechanisms to obfuscate the power consumption patterns exhibited by the module (see good code example). Additionally, certain algorithms, such as the Karatsuba algorithm, can be implemented as illustrative examples of side-channel resistant algorithms, as they necessitate only a limited number of branch conditions [REF-1369]. (good code)
Example Language: Verilog
...
module mod_exp
...
`UPDATE: begin
  if (exponent_reg != 'd0) begin
    ...
    if (exponent_reg[0]) begin
      result_reg <= result_next;
    end else begin
      mask_reg <= result_next;
    end
    base_reg <= base_next;
    exponent_reg <= exponent_next;
    state <= `UPDATE;
...
endmodule

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1256: Improper Restriction of Software Interfaces to Hardware Features
The product provides software-controllable
device functionality for capabilities such as power and
clock management, but it does not properly limit
functionality that can lead to modification of
hardware memory or register bits, or the ability to
observe physical side channels.
It is frequently assumed that physical attacks such as fault injection and side-channel analysis require an attacker to have physical access to the target device. This assumption may be false if the device has improperly secured power management features, or similar features. For mobile devices, minimizing power consumption is critical, but these devices run a wide variety of applications with different performance requirements. Software-controllable mechanisms to dynamically scale device voltage and frequency and monitor power consumption are common features in today's chipsets, but they also enable attackers to mount fault injection and side-channel attacks without having physical access to the device.
Fault injection attacks involve strategic manipulation of bits in a device to achieve a desired effect such as skipping an authentication step, elevating privileges, or altering the output of a cryptographic operation. Manipulation of the device clock and voltage supply is a well-known technique to inject faults and is cheap to implement with physical device access. Poorly protected power management features allow these attacks to be performed from software. Other features, such as the ability to write repeatedly to DRAM at a rapid rate from unprivileged software, can result in bit flips in other memory locations (Rowhammer, [REF-1083]).
Side-channel analysis requires gathering measurement traces of physical quantities such as power consumption. Modern processors often include power-metering capabilities in the hardware itself (e.g., Intel RAPL) which, if not adequately protected, enable attackers to gather the measurements necessary for performing side-channel attacks from software.
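As a minimal illustration of one mitigation direction, the sketch below gates a fine-grained power-telemetry counter behind a privilege check, so that unprivileged software cannot collect the measurement traces needed for software-based side-channel analysis. This is a hypothetical sketch: the module name, port names (priv_lvl_i, energy_cnt_i), and the single-bit privilege model are illustrative assumptions, not taken from any real chipset or from this CWE entry.
// Hypothetical sketch: serve fine-grained energy readings only to
// privileged agents; everyone else reads a constant.
module power_meter_if (
    input  wire        clk_i,
    input  wire        rst_ni,
    input  wire        rd_en_i,      // software read request
    input  wire        priv_lvl_i,   // 1 = privileged (e.g., kernel/firmware)
    input  wire [31:0] energy_cnt_i, // raw energy counter from the sensor
    output reg  [31:0] rdata_o
);
    always @(posedge clk_i or negedge rst_ni) begin
        if (!rst_ni)
            rdata_o <= 32'h0;
        else if (rd_en_i)
            // Unprivileged readers are denied the fine-grained counter,
            // removing the software-visible side-channel measurement.
            rdata_o <= priv_lvl_i ? energy_cnt_i : 32'h0;
    end
endmodule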
Example 1 This example considers the Rowhammer problem [REF-1083]. The Rowhammer issue was caused by a program in a tight loop writing repeatedly to a location to which the program was allowed to write, causing the value of an adjacent memory location to change. (bad code)
Example Language: Other
Continuously writing the same value to the same address causes the value of an adjacent memory location to change.
Preventing the loop required to defeat the Rowhammer exploit is not always possible: (good code)
Example Language: Other
Redesign the RAM devices to reduce inter-cell capacitive coupling, making the Rowhammer exploit impossible.
While the redesign may be possible for new devices, a redesign is not possible in existing devices. There is also the possibility that reducing capacitance with a re-layout would impact the density of the device, resulting in a less capable, more costly device.
Example 2 Suppose a hardware design implements a set of software-accessible registers for scaling clock frequency and voltage but does not control access to these registers. Attackers may cause register and memory changes and race conditions by changing the clock or voltage of the device under their control.
Example 3 Consider the following SoC design. Security-critical settings for scaling clock frequency and voltage are available in a range of registers bounded by [PRIV_END_ADDR : PRIV_START_ADDR] in the tmcu.csr module in the HW Root of Trust. These values are writable based on the lock_bit register in the same module. The lock_bit is only writable by privileged software running on the tmcu. We assume that untrusted software running on any of the Core{0-N} processors has access to the input and output ports of the hrot_iface. If untrusted software can clear the lock_bit or write the clock frequency and voltage registers due to inadequate protection, a fault injection attack could be performed.
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
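To make Example 3's intent concrete, the following hypothetical sketch shows clock/voltage scaling registers that ignore writes once a sticky lock bit is set, with the lock itself settable only by privileged tmcu software. The module and port names are illustrative assumptions; they do not reproduce the actual tmcu.csr implementation.
// Hypothetical sketch: DVFS settings are write-protected by a sticky
// lock bit that only privileged software can set.
module dvfs_csr (
    input  wire       clk_i,
    input  wire       rst_ni,
    input  wire       wr_en_i,
    input  wire       priv_wr_i,  // write issued by privileged tmcu software
    input  wire       set_lock_i, // request to set the sticky lock
    input  wire       sel_vdd_i,  // 1: write voltage setting, 0: clock divider
    input  wire [7:0] wdata_i,
    output reg  [7:0] clk_div_q,  // clock-frequency scaling setting
    output reg  [7:0] vdd_sel_q,  // voltage scaling setting
    output reg        lock_bit_q
);
    always @(posedge clk_i or negedge rst_ni) begin
        if (!rst_ni) begin
            clk_div_q  <= 8'h10;
            vdd_sel_q  <= 8'h10;
            lock_bit_q <= 1'b0;
        end else begin
            // Lock is sticky: it can be set by privileged software but
            // only cleared by a full reset.
            if (priv_wr_i && set_lock_i)
                lock_bit_q <= 1'b1;
            // Scaling registers accept privileged writes only while unlocked,
            // so untrusted cores cannot retune clock or voltage to inject faults.
            if (wr_en_i && priv_wr_i && !lock_bit_q) begin
                if (sel_vdd_i) vdd_sel_q <= wdata_i;
                else           clk_div_q <= wdata_i;
            end
        end
    end
endmodule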
CWE-1244: Internal Asset Exposed to Unsafe Debug Access Level or State
The product uses physical debug or test interfaces with support for multiple access levels, but it assigns the wrong debug access level to an internal asset, providing unintended access to the asset from untrusted debug agents.
Debug authorization can have multiple levels of access, defined such that different system internal assets are accessible based on the current authorized debug level. Other than debugger authentication (e.g., using passwords or challenges), the authorization can also be based on the system state or boot stage. For example, full system debug access might only be allowed early in boot after a system reset to ensure that previous session data is not accessible to the authenticated debugger. If this protection mechanism does not ensure that internal assets have the correct debug access level during each boot stage or change in system state, an attacker could obtain sensitive information from the internal asset using a debugger.
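As a minimal sketch of this idea, the combinational logic below derives the debug access level from both debugger authentication and the current boot stage, so full access is granted only early in boot after reset. All names and the two-bit level encoding are illustrative assumptions.
// Hypothetical sketch: debug access level depends on authentication
// and boot stage, not authentication alone.
module dbg_access_ctrl (
    input  wire       auth_ok_i,    // debugger passed authentication
    input  wire [1:0] boot_stage_i, // 0 = post-reset ROM stage, higher = later
    output wire [1:0] dbg_level_o   // 0 = none, 1 = limited, 2 = full
);
    // Full debug (level 2) is available only during the earliest boot stage;
    // once user software has run, an authenticated debugger is limited to
    // level 1 so it cannot read assets left over from the previous session.
    assign dbg_level_o = !auth_ok_i             ? 2'd0 :
                         (boot_stage_i == 2'd0) ? 2'd2 : 2'd1;
endmodule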
Example 1 The JTAG interface is used to perform debugging and provide CPU core access for developers. JTAG-access protection is implemented as part of the JTAG_SHIELD bit in the hw_digctl_ctrl register. This register has no default value at power up and is set only after the system boots from ROM and control is transferred to the user software. (bad code)
Example Language: Other
This means that since the end user has access to JTAG at system reset and during ROM code execution before control is transferred to user software, a JTAG user can modify the boot flow and subsequently disclose all CPU information, including data-encryption keys. (informative)
The default value of this register bit should be set to 1 to prevent the JTAG from being enabled at system reset.
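A minimal sketch of that fix, assuming a dedicated shield flop: the bit resets to 1 so JTAG is disabled from the very first cycle, and only a write from trusted software can open the interface later. The names and single-bit interface are illustrative, not the actual hw_digctl_ctrl layout.
// Hypothetical sketch: JTAG shield defaults to "disabled" at reset.
module jtag_shield (
    input  wire clk_i,
    input  wire rst_ni,
    input  wire trusted_wr_i,  // write strobe from trusted user software only
    input  wire wdata_i,
    output reg  jtag_shield_q  // 1 = JTAG disabled
);
    always @(posedge clk_i or negedge rst_ni) begin
        if (!rst_ni)
            jtag_shield_q <= 1'b1;  // shielded by default from power-on
        else if (trusted_wr_i)
            jtag_shield_q <= wdata_i;
    end
endmodule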
Example 2 The example code below is taken from the CVA6 processor core of the HACK@DAC'21 buggy OpenPiton SoC. Debug access allows users to access internal hardware registers that are otherwise not exposed for user access or are restricted through access control protocols. Hence, requests to enter debug mode are checked and authorized only if the processor has sufficient privileges. In addition, debug accesses are also locked behind password checkers. Thus, the processor enters debug mode only when the privilege level requirement is met and the correct debug password is provided. The following code [REF-1377] illustrates an instance of a vulnerable implementation of debug mode. The core correctly checks if the debug requests have sufficient privileges and enables the debug_mode_d and debug_mode_q signals. It also correctly checks the debug password and enables the umode_i signal. (bad code)
Example Language: Verilog
module csr_regfile #(
...
    // check that we actually want to enter debug depending on the privilege level we are currently in
    ...
    unique case (priv_lvl_o)
        riscv::PRIV_LVL_M: begin
            debug_mode_d = dcsr_q.ebreakm;
            ...
        riscv::PRIV_LVL_U: begin
            debug_mode_d = dcsr_q.ebreaku;
            ...
    assign priv_lvl_o = (debug_mode_q || umode_i) ? riscv::PRIV_LVL_M : priv_lvl_q;
    ...
    debug_mode_q <= debug_mode_d;
    ...
However, it grants debug access and changes the privilege level, priv_lvl_o, when either of the two checks is satisfied, even if the other is not. Because of this, debug access can be granted by a request with sufficient privileges (i.e., debug_mode_q is enabled) that fails the password check (i.e., umode_i is disabled). This allows an attacker to bypass the debug password check and gain debug access to the core, compromising the security of the processor. A fix for this issue is to change the privilege level of the processor only when both checks are satisfied, i.e., the request has sufficient privileges (debug_mode_q is enabled) and the password check is successful (umode_i is enabled) [REF-1378]. (good code)
Example Language: Verilog
module csr_regfile #(
...
    // check that we actually want to enter debug depending on the privilege level we are currently in
    ...
    unique case (priv_lvl_o)
        riscv::PRIV_LVL_M: begin
            debug_mode_d = dcsr_q.ebreakm;
            ...
        riscv::PRIV_LVL_U: begin
            debug_mode_d = dcsr_q.ebreaku;
            ...
    assign priv_lvl_o = (debug_mode_q && umode_i) ? riscv::PRIV_LVL_M : priv_lvl_q;
    ...
    debug_mode_q <= debug_mode_d;
    ...
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Relationship
CWE-1191 and CWE-1244 both involve physical debug access,
but the weaknesses are different. CWE-1191 is effectively
about missing authorization for a debug interface,
i.e. JTAG. CWE-1244 is about providing internal assets with
the wrong debug access level, exposing the asset to
untrusted debug agents.
CWE-1191: On-Chip Debug and Test Interface With Improper Access Control
The chip does not implement or does not correctly perform access control to check whether users are authorized to access internal registers and test modes through the physical debug/test interface.
A device's internal information may be accessed through a scan chain of interconnected internal registers, usually through a JTAG interface. The JTAG interface provides access to these registers in a serial fashion in the form of a scan chain for the purposes of debugging programs running on a device. Since almost all information contained within a device may be accessed over this interface, device manufacturers typically insert some form of authentication and authorization to prevent unintended use of this sensitive information. This mechanism is implemented in addition to on-chip protections that are already present. If authorization, authentication, or some other form of access control is not implemented or not implemented correctly, a user may be able to bypass on-chip protection mechanisms through the debug interface.
Sometimes, designers choose not to expose the debug pins on the motherboard. Instead, they choose to hide these pins in the intermediate layers of the board. This is primarily done to work around the lack of debug authorization inside the chip. In such a scenario (without debug authorization), when the debug interface is exposed, chip internals are accessible to an attacker.
Example 1 A home, WiFi-router device implements a login prompt which prevents an unauthorized user from issuing any commands on the device until appropriate credentials are provided. The credentials are protected on the device and are checked for strength against attack. (bad code)
Example Language: Other
If the JTAG interface on this device is not hidden by the manufacturer, the interface may be identified using tools such as JTAGulator. If it is hidden but not disabled, it can be exposed by physically wiring to the board. By issuing a "halt" command before the OS starts, the unauthorized user pauses the watchdog timer, preventing the router from restarting once the watchdog timer would otherwise have expired. Having paused the router, an unauthorized user is able to execute code, inspect and modify data in the device, and even extract all of the router's firmware. This allows the user to examine the router and potentially exploit it. JTAG is useful to chip and device manufacturers during design, testing, and production and is included in nearly every product. Without proper authentication and authorization, the interface may allow tampering with a product. (good code)
Example Language: Other
In order to prevent exposing the debugging interface, manufacturers might try to obfuscate the JTAG interface or blow device internal fuses to disable the JTAG interface. Adding authentication and authorization to this interface makes use by unauthorized individuals much more difficult.
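A minimal sketch of such gating, under the assumption of a one-time-programmable disable fuse and a session authentication flag: the scan output and register-update path are both blocked unless the fuse is intact and the debugger has authenticated. All names are illustrative.
// Hypothetical sketch: TAP output and update path gated by a disable
// fuse and a runtime authentication result.
module jtag_gate (
    input  wire tdo_core_i,     // TDO from the internal scan chain
    input  wire fuse_disable_i, // blown in production parts to kill JTAG
    input  wire auth_ok_i,      // debugger authenticated this session
    output wire tdo_o,
    output wire update_en_o     // allow updates to internal registers
);
    wire enabled = ~fuse_disable_i & auth_ok_i;
    assign tdo_o       = enabled ? tdo_core_i : 1'b0;
    assign update_en_o = enabled;
endmodule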
Example 2 The following example code is a snippet from the JTAG wrapper module in the RISC-V debug module of the HACK@DAC'21 Openpiton SoC [REF-1355]. To make sure that the JTAG is accessed securely, the developers have included a primary authentication mechanism based on a password. The developers employed a Finite State Machine (FSM) to implement this authentication. When a user intends to read from or write to the JTAG module, they must input a password. In the subsequent state of the FSM module, the entered password undergoes Hash-based Message Authentication Code (HMAC) calculation using an internal HMAC submodule. Once the HMAC for the entered password is computed by the HMAC submodule, the FSM transitions to the next state, where it compares the computed HMAC with the expected HMAC for the password. If the computed HMAC matches the expected HMAC, the FSM grants the user permission to perform read or write operations on the JTAG module. [REF-1352] (bad code)
Example Language: Verilog
...
PassChkValid: begin
    ...
    if(hashValid) begin
        if(exp_hash == pass_hash) begin
            pass_check = 1'b1;
        end else begin
            pass_check = 1'b0;
        end
        state_d = Idle;
    end else begin
        state_d = PassChkValid;
    end
end
However, this vulnerable part of the code does not limit the number of consecutive wrong password attempts. This omission poses a significant security risk, allowing attackers to carry out brute-force attacks without restriction: an attacker can repeatedly guess different passwords until they gain unauthorized access to the JTAG module. This enables various malicious activities, such as unauthorized reads from or writes to the debug module interface. To mitigate this vulnerability, developers need to restrict the number of consecutive incorrect password attempts allowed by the JTAG module, which can be achieved by incorporating a mechanism that temporarily locks the module after a certain number of failed attempts. [REF-1353][REF-1354] (good code)
Example Language: Verilog
...
case (state_q)
    Idle: begin
        ...
        else if ( (dm::dtm_op_e'(dmi.op) == dm::DTM_PASS) && (miss_pass_check_cnt_q != 2'b11) ) begin
            state_d = Write;
            pass_mode = 1'b1;
        end
        ...
    end
    ...
    PassChkValid: begin
        ...
        if(hashValid) begin
            if(exp_hash == pass_hash) begin
                pass_check = 1'b1;
            end else begin
                pass_check = 1'b0;
                miss_pass_check_cnt_d = miss_pass_check_cnt_q + 1;
            end
            state_d = Idle;
        end else begin
            state_d = PassChkValid;
        end
    end
Example 3 The example code below is taken from the JTAG access control mechanism of the HACK@DAC'21 buggy OpenPiton SoC [REF-1364]. Access to JTAG allows users to access sensitive information in the system. Hence, access to JTAG is controlled using cryptographic authentication of the users. In this example (see the vulnerable code source), the password checker uses HMAC-SHA256 for authentication. It takes a 512-bit secret message from the user, hashes it using HMAC, and compares its output with the expected output to determine the authenticity of the user. (bad code)
Example Language: Verilog
...
logic [31-1:0] data_d, data_q;
...
logic [512-1:0] pass_data;
...
Write: begin
    ...
    if (pass_mode) begin
        pass_data = { {60{8'h00}}, data_d};
        state_d = PassChk;
        pass_mode = 1'b0;
    end
    ...
end
...
The vulnerable code shows an incorrect implementation of the HMAC authentication: it uses only the least significant 32 bits of the secret message for the authentication (the remaining 480 bits are hard-coded as zeros). As a result, the system is susceptible to brute-force attacks on the access control mechanism of JTAG, where the attacker only needs to determine 32 bits of the secret message instead of 512 bits. To mitigate this issue, remove the zero padding and use all 512 bits of the secret message for HMAC authentication [REF-1365]. (good code)
Example Language: Verilog
...
logic [512-1:0] data_d, data_q;
...
logic [512-1:0] pass_data;
...
Write: begin
    ...
    if (pass_mode) begin
        pass_data = data_d;
        state_d = PassChk;
        pass_mode = 1'b0;
    end
    ...
end
...
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Relationship
CWE-1191 and CWE-1244 both involve physical debug access,
but the weaknesses are different. CWE-1191 is effectively
about missing authorization for a debug interface,
i.e. JTAG. CWE-1244 is about providing internal assets with
the wrong debug access level, exposing the asset to
untrusted debug agents.
CWE-1233: Security-Sensitive Hardware Controls with Missing Lock Bit Protection
The product uses a register lock bit protection mechanism, but it does not ensure that the lock bit prevents modification of system registers or controls that perform changes to important hardware system configuration.
Integrated circuits and hardware intellectual properties (IPs) might provide device configuration controls that need to be programmed after device power reset by a trusted firmware or software module, commonly set by BIOS/bootloader. After reset, there can be an expectation that the controls cannot be used to perform any further modification. This behavior is commonly implemented using a trusted lock bit, which can be set to disable writes to a protected set of registers or address regions. The lock protection is intended to prevent modification of certain system configuration (e.g., memory/memory protection unit configuration). However, if the lock bit does not effectively write-protect all system registers or controls that could modify the protected system configuration, then an adversary may be able to use software to access the registers/controls and modify the protected hardware configuration.
Example 1 Consider the example design below for a digital thermal sensor that detects overheating of the silicon and triggers system shutdown. The system critical temperature limit (CRITICAL_TEMP_LIMIT) and thermal sensor calibration (TEMP_SENSOR_CALIB) data have to be programmed by the firmware. (bad code)
Example Language: Other
In this example, note that only the CRITICAL_TEMP_LIMIT register is protected by the TEMP_SENSOR_LOCK bit, while the security design intent is to protect any modification of the critical temperature detection and response. The response of the system, if the system heats to a critical temperature, is controlled by the TEMP_HW_SHUTDOWN bit [1], which is not lockable. Also, the TEMP_SENSOR_CALIB register is not protected by the lock bit. By modifying the temperature sensor calibration, the conversion of the sensor data to degrees centigrade can be changed such that the current temperature will never be detected to exceed the lock-protected critical temperature value. Similarly, by modifying the TEMP_HW_SHUTDOWN.Enable bit, the system's shutdown response to the current temperature exceeding the critical temperature can be disabled. (good code)
Example Language: Other
Change TEMP_HW_SHUTDOWN and TEMP_SENSOR_CALIB controls to be locked by TEMP_SENSOR_LOCK.
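The sketch below illustrates that fix under illustrative assumptions about the register map: a single sticky TEMP_SENSOR_LOCK bit write-protects the limit, calibration, and shutdown-enable controls alike. The module name, address layout, and reset values are hypothetical.
// Hypothetical sketch: one lock bit protects every register that
// influences thermal detection and response.
module temp_sensor_csr (
    input  wire        clk_i,
    input  wire        rst_ni,
    input  wire        wr_en_i,
    input  wire [1:0]  addr_i,   // 0: limit, 1: calib, 2: shutdown, 3: lock
    input  wire [15:0] wdata_i,
    output reg  [15:0] critical_temp_limit_q,
    output reg  [15:0] temp_sensor_calib_q,
    output reg         temp_hw_shutdown_en_q,
    output reg         temp_sensor_lock_q
);
    always @(posedge clk_i or negedge rst_ni) begin
        if (!rst_ni) begin
            critical_temp_limit_q <= 16'd100;
            temp_sensor_calib_q   <= 16'd0;
            temp_hw_shutdown_en_q <= 1'b1;   // shutdown response on by default
            temp_sensor_lock_q    <= 1'b0;
        end else if (wr_en_i) begin
            case (addr_i)
                // Every security-relevant control honors the same lock bit.
                2'd0: if (!temp_sensor_lock_q) critical_temp_limit_q <= wdata_i;
                2'd1: if (!temp_sensor_lock_q) temp_sensor_calib_q   <= wdata_i;
                2'd2: if (!temp_sensor_lock_q) temp_hw_shutdown_en_q <= wdata_i[0];
                2'd3: temp_sensor_lock_q <= temp_sensor_lock_q | wdata_i[0]; // sticky
            endcase
        end
    end
endmodule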
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-226: Sensitive Information in Resource Not Removed Before Reuse
The product releases a resource such as memory or a file so that it can be made available for reuse, but it does not clear or "zeroize" the information contained in the resource before the product performs a critical state transition or makes the resource available for reuse by other entities.
When resources are released, they can be made available for reuse. For example, after memory is de-allocated, an operating system may make the memory available to another process, or disk space may be reallocated when a file is deleted. As removing information requires time and additional resources, operating systems do not usually clear the previously written information.
Even when the resource is reused by the same process, this weakness can arise when new data is not as large as the old data, which leaves portions of the old data still available. Equivalent errors can occur in other situations where the length of data is variable but the associated data structure is not. If memory is not cleared after use, the information may be read by less trustworthy parties when the memory is reallocated. This weakness can apply in hardware, such as when a device or system switches between power, sleep, or debug states during normal operation, or when execution changes to different users or privilege levels.
Example 1 This example shows how an attacker can take advantage of an incorrect state transition.
Suppose a device is transitioning from state A to state B. During state A, it can read certain private keys from the hidden fuses that are only accessible in state A but not in state B. The device reads the keys, performs operations using those keys, then transitions to state B, where those private keys should no longer be accessible. (bad code)
Example Language: Other
During the transition from A to B, the device does not scrub the memory. After the transition to state B, even though the private keys are no longer accessible directly from the fuses in state B, they can be accessed indirectly by reading the memory that contains the private keys. (good code)
Example Language: Other
For transition from state A to state B, remove information which should not be available once the transition is complete.
Example 2 The following code calls realloc() on a buffer containing sensitive data: (bad code)
Example Language: C
cleartext_buffer = get_secret();
...
cleartext_buffer = realloc(cleartext_buffer, 1024);
...
scrub_memory(cleartext_buffer, 1024);
There is an attempt to scrub the sensitive data from memory, but realloc() is used, so it could return a pointer to a different part of memory. The memory that was originally allocated for cleartext_buffer could still contain an uncleared copy of the data.
Example 3 The following example code is excerpted from the AES wrapper/interface, aes0_wrapper, module of one of the AES engines (AES0) in the Hack@DAC'21 buggy OpenPiton System-on-Chip (SoC). Note that this SoC contains three distinct AES engines. Within this wrapper module, four 32-bit registers are utilized to store the message intended for encryption, referred to as p_c[i]. Using the AXI Lite interface, these registers are filled with the 128-bit message to be encrypted. (bad code)
Example Language: Verilog
module aes0_wrapper #(...)(...);
...
always @(posedge clk_i)
    begin
        if(~(rst_ni && ~rst_1)) //clear p_c[i] at reset
            begin
                start <= 0;
                p_c[0] <= 0;
                p_c[1] <= 0;
                p_c[2] <= 0;
                p_c[3] <= 0;
                ...
            end
        else if(en && we)
            case(address[8:3])
                0:
                    start <= reglk_ctrl_i[1] ? start : wdata[0];
                1:
                    p_c[3] <= reglk_ctrl_i[3] ? p_c[3] : wdata[31:0];
                2:
                    p_c[2] <= reglk_ctrl_i[3] ? p_c[2] : wdata[31:0];
                3:
                    p_c[1] <= reglk_ctrl_i[3] ? p_c[1] : wdata[31:0];
                4:
                    p_c[0] <= reglk_ctrl_i[3] ? p_c[0] : wdata[31:0];
                ...
            endcase
    end // always @ (posedge wb_clk_i)
endmodule
The above code snippet [REF-1402] illustrates an instance of a vulnerable implementation of the AES wrapper module, where the p_c[i] registers are cleared at reset. Otherwise, the p_c[i] registers either maintain their old values (if reglk_ctrl_i[3] is true) or get filled through the AXI signal wdata. Note that the p_c[i] registers can be read through the AXI Lite interface (not shown in the snippet). However, the p_c[i] registers are never cleared after their use, once the AES engine has completed the encryption of the message. In a multi-user or multi-process environment, not clearing the registers may allow an attacker process to access data left behind by a victim process, leading to data leakage or unintentional information disclosure. To fix this issue, it is essential to ensure that these internal registers are cleared in a timely manner after their use, i.e., once the encryption process is complete. This is illustrated below by monitoring the assertion of the ciphertext valid signal, ct_valid [REF-1403]. (good code)
Example Language: Verilog
module aes0_wrapper #(...)(...);
...
always @(posedge clk_i)
    begin
        if(~(rst_ni && ~rst_1)) //clear p_c[i] at reset
            ...
        else if(ct_valid) //encryption process complete, clear p_c[i]
            begin
                p_c[0] <= 0;
                p_c[1] <= 0;
                p_c[2] <= 0;
                p_c[3] <= 0;
            end
        else if(en && we)
            case(address[8:3])
                ...
            endcase
    end // always @ (posedge wb_clk_i)
endmodule
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Relationship
There is a close association between CWE-226 and CWE-212. The difference is partially that of perspective. CWE-226 is geared towards the final stage of the resource lifecycle, in which the resource is deleted, eliminated, expired, or otherwise released for reuse. Technically, this involves a transfer to a different control sphere, in which the original contents of the resource are no longer relevant. CWE-212, however, is intended for sensitive data in resources that are intentionally shared with others, so they are still active. This distinction is useful from the perspective of the CWE research view (CWE-1000).
Research Gap
This is frequently found for network packets, but it can also exist in local memory allocation, files, etc.
Maintenance
This entry needs modification to clarify the differences with CWE-212. The description also combines two problems that are distinct from the CWE research perspective: the inadvertent transfer of information to another sphere, and improper initialization/shutdown. Some of the associated taxonomy mappings reflect these different uses.
CWE-1272: Sensitive Information Uncleared Before Debug/Power State Transition
The product performs a power or debug state transition, but it does not clear sensitive information that should no longer be accessible due to changes to information access restrictions.
A device or system frequently employs many power and sleep states during its normal operation (e.g., normal power, additional power, low power, hibernate, deep sleep, etc.). A device also may be operating within a debug condition. State transitions can happen from one power or debug state to another. If there is information available in the previous state which should not be available in the next state and is not properly removed before the transition into the next state, sensitive information may leak from the system.
Example 1 This example shows how an attacker can take advantage of an incorrect state transition.
Suppose a device is transitioning from state A to state B. During state A, it can read certain private keys from the hidden fuses that are only accessible in state A but not in state B. The device reads the keys, performs operations using those keys, then transitions to state B, where those private keys should no longer be accessible. (bad code)
Example Language: Other
During the transition from A to B, the device does not scrub the memory. After the transition to state B, even though the private keys are no longer accessible directly from the fuses in state B, they can be accessed indirectly by reading the memory that contains the private keys. (good code)
Example Language: Other
For transition from state A to state B, remove information which should not be available once the transition is complete.
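As a minimal sketch of that good-code guidance, the FSM below zeroizes its working key register in the same cycle that it leaves state A, so the secret never survives into state B. The state encoding, key width, and port names are illustrative assumptions, not taken from any real design.
// Hypothetical sketch: scrub key material during the A -> B transition.
module key_scrub_fsm (
    input  wire         clk_i,
    input  wire         rst_ni,
    input  wire         go_state_b_i, // request transition from A to B
    input  wire [127:0] fuse_key_i,   // key readable from fuses only in state A
    output reg          state_q,      // 0 = state A, 1 = state B
    output reg  [127:0] key_reg_q     // working copy of the key
);
    always @(posedge clk_i or negedge rst_ni) begin
        if (!rst_ni) begin
            state_q   <= 1'b0;
            key_reg_q <= 128'h0;
        end else if (state_q == 1'b0) begin
            if (go_state_b_i) begin
                key_reg_q <= 128'h0; // zeroize before the transition completes
                state_q   <= 1'b1;
            end else begin
                key_reg_q <= fuse_key_i; // key available for use in state A
            end
        end
        // In state B the key register stays zero; no path reloads it.
    end
endmodule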
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.