CWE VIEW: Hardware Design
This view organizes weaknesses around concepts that are frequently used or encountered in hardware design. Accordingly, this view can align closely with the perspectives of designers, manufacturers, educators, and assessment vendors. It provides a variety of categories that are intended to simplify navigation, browsing, and mapping.
The following graph shows the tree-like relationships between weaknesses that exist at different levels of abstraction. At the highest level, categories and pillars exist to group weaknesses. Categories (which are not technically weaknesses) are special CWE entries used to group weaknesses that share a common characteristic. Pillars are weaknesses that are described in the most abstract fashion. Below these top-level entries are weaknesses at varying levels of abstraction. Classes are still very abstract, typically independent of any specific language or technology. Base-level weaknesses are used to present a more specific type of weakness. A variant is a weakness that is described at a very low level of detail, typically limited to a specific language or technology. A chain is a set of weaknesses that must be reachable consecutively in order to produce an exploitable vulnerability, while a composite is a set of weaknesses that must all be present simultaneously in order to produce an exploitable vulnerability.
1194 - Hardware Design
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns)
Weaknesses in this category are root-caused to defects that arise in the semiconductor-manufacturing process or during the life cycle and supply chain.
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns) > 1059 (Insufficient Technical Documentation)
The product does not contain sufficient technical or engineering documentation (whether on paper or in electronic form) that contains descriptions of all the relevant software/hardware elements of the product, such as its usage, structure, architectural components, interfaces, design, implementation, configuration, operation, etc.
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns) > 1248 (Semiconductor Defects in Hardware Logic with Security-Sensitive Implications)
The security-sensitive hardware module contains semiconductor defects.
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns) > 1266 (Improper Scrubbing of Sensitive Data from Decommissioned Device)
The product does not properly provide a capability for the product administrator to remove sensitive data at the time the product is decommissioned. A scrubbing capability could be missing, insufficient, or incorrect.
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns) > 1269 (Product Released in Non-Release Configuration)
The product released to market is released in pre-production or manufacturing configuration.
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns) > 1273 (Device Unlock Credential Sharing)
The credentials necessary for unlocking a device are shared across multiple parties and may expose sensitive information.
1194 (Hardware Design) > 1195 (Manufacturing and Life Cycle Management Concerns) > 1297 (Unprotected Confidential Information on Device is Accessible by OSAT Vendors)
The product does not adequately protect confidential information on the device from being accessed by Outsourced Semiconductor Assembly and Test (OSAT) vendors.
1194 (Hardware Design) > 1196 (Security Flow Issues)
Weaknesses in this category are related to improper design of full-system security flows, including but not limited to secure boot, secure update, and hardware-device attestation.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1190 (DMA Device Enabled Too Early in Boot Phase)
The product enables a Direct Memory Access (DMA) capable device before the security configuration settings are established, which allows an attacker to extract data from or gain privileges on the product.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1193 (Power-On of Untrusted Execution Core Before Enabling Fabric Access Control)
The product enables components that contain untrusted firmware before memory and fabric access controls have been enabled.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1264 (Hardware Logic with Insecure De-Synchronization between Control and Data Channels)
The hardware logic for error handling and security checks can incorrectly forward data before the security check is complete.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1274 (Improper Access Control for Volatile Memory Containing Boot Code)
The product conducts a secure-boot process that transfers bootloader code from Non-Volatile Memory (NVM) into Volatile Memory (VM), but it does not have sufficient access control or other protections for the Volatile Memory.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1283 (Mutable Attestation or Measurement Reporting Data)
The register contents used for attestation or measurement reporting data to verify boot flow are modifiable by an adversary.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1310 (Missing Ability to Patch ROM Code)
Missing an ability to patch ROM code may leave a System or System-on-Chip (SoC) in a vulnerable state.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1326 (Missing Immutable Root of Trust in Hardware)
A missing immutable root of trust in the hardware results in the ability to bypass secure boot or execute untrusted or adversarial boot code.
1194 (Hardware Design) > 1196 (Security Flow Issues) > 1328 (Security Version Number Mutable to Older Versions)
Security-version number in hardware is mutable, resulting in the ability to downgrade (roll-back) the boot firmware to vulnerable code versions.
1194 (Hardware Design) > 1197 (Integration Issues)
Weaknesses in this category are those that arise due to integration of multiple hardware Intellectual Property (IP) cores, from System-on-a-Chip (SoC) subsystem interactions, or from hardware platform subsystem interactions.
1194 (Hardware Design) > 1197 (Integration Issues) > 1276 (Hardware Child Block Incorrectly Connected to Parent System)
Signals between a hardware IP and the parent system design are incorrectly connected, causing security risks.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues)
Weaknesses in this category are related to features and mechanisms providing hardware-based isolation and access control (e.g., identity, policy, locking control) of sensitive shared hardware resources such as registers and fuses.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 276 (Incorrect Default Permissions)
During installation, installed file permissions are set to allow anyone to modify those files.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 441 (Unintended Proxy or Intermediary ('Confused Deputy'))
The product receives a request, message, or directive from an upstream component, but the product does not sufficiently preserve the original source of the request before forwarding the request to an external actor that is outside of the product's control sphere. This causes the product to appear to be the source of the request, leading it to act as a proxy or other intermediary between the upstream component and the external actor. (alternate term: Confused Deputy)
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1189 (Improper Isolation of Shared Resources on System-on-a-Chip (SoC))
The System-On-a-Chip (SoC) does not properly isolate shared resources between trusted and untrusted agents.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1192 (Improper Identifier for IP Block used in System-On-Chip (SOC))
The System-on-Chip (SoC) does not have unique, immutable identifiers for each of its components.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1220 (Insufficient Granularity of Access Control)
The product implements access controls via a policy or other feature with the intention to disable or restrict accesses (reads and/or writes) to assets in a system from untrusted agents. However, implemented access controls lack required granularity, which renders the control policy too broad because it allows accesses from unauthorized agents to the security-sensitive assets.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1222 (Insufficient Granularity of Address Regions Protected by Register Locks)
The product defines a large address region protected from modification by the same register lock control bit. This results in a conflict between the functional requirement that some addresses need to be writable by software during operation and the security requirement that the system configuration lock bit must be set during the boot process.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1242 (Inclusion of Undocumented Features or Chicken Bits)
The device includes chicken bits or undocumented features that can create entry points for unauthorized actors.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1260 (Improper Handling of Overlap Between Protected Memory Ranges)
The product allows address regions to overlap, which can result in the bypassing of intended memory protection.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1262 (Improper Access Control for Register Interface)
The product uses memory-mapped I/O registers that act as an interface to hardware functionality from software, but there is improper access control to those registers.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1267 (Policy Uses Obsolete Encoding)
The product uses an obsolete encoding mechanism to implement access controls.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1268 (Policy Privileges are not Assigned Consistently Between Control and Data Agents)
The product's hardware-enforced access control for a particular resource improperly accounts for privilege discrepancies between control and write policies.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1280 (Access Control Check Implemented After Asset is Accessed)
A product's hardware-based access control check occurs after the asset has been accessed.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1294 (Insecure Security Identifier Mechanism)
The System-on-Chip (SoC) implements a Security Identifier mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Identifiers are not correctly implemented.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1294 (Insecure Security Identifier Mechanism) > 1259 (Improper Restriction of Security Token Assignment)
The System-On-A-Chip (SoC) implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens are improperly protected.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1294 (Insecure Security Identifier Mechanism) > 1270 (Generation of Incorrect Security Tokens)
The product implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens generated in the system are incorrect.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1294 (Insecure Security Identifier Mechanism) > 1290 (Incorrect Decoding of Security Identifiers)
The product implements a decoding mechanism to decode certain bus-transaction signals to security identifiers. If the decoding is implemented incorrectly, then untrusted agents can now gain unauthorized access to the asset.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1294 (Insecure Security Identifier Mechanism) > 1292 (Incorrect Conversion of Security Identifiers)
The product implements a conversion mechanism to map certain bus-transaction signals to security identifiers. However, if the conversion is incorrectly implemented, untrusted agents can gain unauthorized access to the asset.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1299 (Missing Protection Mechanism for Alternate Hardware Interface)
The lack of protections on alternate paths to access control-protected assets (such as unprotected shadow registers and other external facing unguarded interfaces) allows an attacker to bypass existing protections to the asset that are only performed against the primary path.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1302 (Missing Source Identifier in Entity Transactions on a System-On-Chip (SOC))
The product implements a security identifier mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. A transaction is sent without a security identifier.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1303 (Non-Transparent Sharing of Microarchitectural Resources)
Hardware structures shared across execution contexts (e.g., caches and branch predictors) can violate the expected architecture isolation between contexts.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1314 (Missing Write Protection for Parametric Data Values)
The device does not write-protect the parametric data values for sensors that scale the sensor value, allowing untrusted software to manipulate the apparent result and potentially damage hardware or cause operational failure.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1318 (Missing Support for Security Features in On-chip Fabrics or Buses)
On-chip fabrics or buses either do not support or are not configured to support privilege separation or other security features, such as access control.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1334 (Unauthorized Error Injection Can Degrade Hardware Redundancy)
An unauthorized agent can inject errors into a redundant block to deprive the system of redundancy or put the system in a degraded operating mode.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1420 (Exposure of Sensitive Information during Transient Execution)
A processor event or prediction may allow incorrect operations (or correct operations with incorrect data) to execute transiently, potentially exposing data over a covert channel.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1421 (Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution)
A processor event may allow transient operations to access architecturally restricted data (for example, in another address space) in a shared microarchitectural structure (for example, a CPU cache), potentially exposing the data over a covert channel.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1422 (Exposure of Sensitive Information caused by Incorrect Data Forwarding during Transient Execution)
A processor event or prediction may allow incorrect or stale data to be forwarded to transient operations, potentially exposing data over a covert channel.
1194 (Hardware Design) > 1198 (Privilege Separation and Access Control Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1423 (Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution)
Shared microarchitectural predictor state may allow code to influence transient execution across a hardware boundary, potentially exposing data that is accessible beyond the boundary over a covert channel.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns)
Weaknesses in this category are related to hardware-circuit design and logic (e.g., CMOS transistors, finite state machines, and registers) as well as issues related to hardware description languages such as System Verilog and VHDL.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1209 (Failure to Disable Reserved Bits)
The reserved bits in a hardware design are not disabled prior to production. Typically, reserved bits are used for future capabilities and should not support any functional logic in the design. However, designers might covertly use these bits to debug or further develop new capabilities in production hardware. Adversaries with access to these bits will write to them in hopes of compromising hardware state.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1221 (Incorrect Register Defaults or Module Parameters)
Hardware description language code incorrectly defines register defaults or hardware Intellectual Property (IP) parameters to insecure values.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1223 (Race Condition for Write-Once Attributes)
A write-once register in hardware design is programmable by an untrusted software component earlier than the trusted software component, resulting in a race condition issue.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1224 (Improper Restriction of Write-Once Bit Fields)
The hardware design control register "sticky bits" or write-once bit fields are improperly implemented, such that they can be reprogrammed by software.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1231 (Improper Prevention of Lock Bit Modification)
The product uses a trusted lock bit for restricting access to registers, address regions, or other resources, but the product does not prevent the value of the lock bit from being modified after it has been set.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1232 (Improper Lock Behavior After Power State Transition)
Register lock bit protection disables changes to system configuration once the bit is set. Some of the protected registers or lock bits become programmable after power state transitions (e.g., entry into and wake from low-power sleep modes), causing the system configuration to be changeable.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1233 (Security-Sensitive Hardware Controls with Missing Lock Bit Protection)
The product uses a register lock bit protection mechanism, but it does not ensure that the lock bit prevents modification of system registers or controls that perform changes to important hardware system configuration.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1234 (Hardware Internal or Debug Modes Allow Override of Locks)
System configuration protection may be bypassed during debug mode.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1245 (Improper Finite State Machines (FSMs) in Hardware Logic)
Faulty finite state machines (FSMs) in the hardware logic allow an attacker to put the system in an undefined state, to cause a denial of service (DoS) or gain privileges on the victim's system.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1250 (Improper Preservation of Consistency Between Independent Representations of Shared State)
The product has or supports multiple distributed components or sub-systems that are each required to keep their own local copy of shared data - such as state or cache - but the product does not ensure that all local copies remain consistent with each other.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1253 (Incorrect Selection of Fuse Values)
The logic level used to set a system to a secure state relies on a fuse being unblown. An attacker can set the system to an insecure state merely by blowing the fuse.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1254 (Incorrect Comparison Logic Granularity)
The product's comparison logic is performed over a series of steps rather than across the entire string in one operation. If there is a comparison logic failure on one of these steps, the operation may be vulnerable to a timing attack that can result in the interception of the process for nefarious purposes.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1261 (Improper Handling of Single Event Upsets)
The hardware logic does not effectively handle when single-event upsets (SEUs) occur.
1194 (Hardware Design) > 1199 (General Circuit and Logic Design Concerns) > 1298 (Hardware Logic Contains Race Conditions)
A race condition in the hardware logic results in undermining security guarantees of the system.
1194 (Hardware Design) > 1201 (Core and Compute Issues)
Weaknesses in this category are typically associated with CPUs, Graphics, Vision, AI, FPGA, and microcontrollers.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1252 (CPU Hardware Not Configured to Support Exclusivity of Write and Execute Operations)
The CPU is not configured to provide hardware support for exclusivity of write and execute operations on memory. This allows an attacker to execute data from all of memory.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1281 (Sequence of Processor Instructions Leads to Unexpected Behavior)
Specific combinations of processor instructions lead to undesirable behavior such as locking the processor until a hard reset is performed.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1342 (Information Exposure through Microarchitectural State after Transient Execution)
The processor does not properly clear microarchitectural state after incorrect microcode assists or speculative execution, resulting in transient execution.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1420 (Exposure of Sensitive Information during Transient Execution)
A processor event or prediction may allow incorrect operations (or correct operations with incorrect data) to execute transiently, potentially exposing data over a covert channel.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1421 (Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution)
A processor event may allow transient operations to access architecturally restricted data (for example, in another address space) in a shared microarchitectural structure (for example, a CPU cache), potentially exposing the data over a covert channel.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1422 (Exposure of Sensitive Information caused by Incorrect Data Forwarding during Transient Execution)
A processor event or prediction may allow incorrect or stale data to be forwarded to transient operations, potentially exposing data over a covert channel.
1194 (Hardware Design) > 1201 (Core and Compute Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1423 (Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution)
Shared microarchitectural predictor state may allow code to influence transient execution across a hardware boundary, potentially exposing data that is accessible beyond the boundary over a covert channel.
1194 (Hardware Design) > 1202 (Memory and Storage Issues)
Weaknesses in this category are typically associated with memory (e.g., DRAM, SRAM) and storage technologies (e.g., NAND Flash, OTP, EEPROM, and eMMC).
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 226 (Sensitive Information in Resource Not Removed Before Reuse)
The product releases a resource such as memory or a file so that it can be made available for reuse, but it does not clear or "zeroize" the information contained in the resource before the product performs a critical state transition or makes the resource available for reuse by other entities.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 226 (Sensitive Information in Resource Not Removed Before Reuse) > 1239 (Improper Zeroization of Hardware Register)
The hardware product does not properly clear sensitive information from built-in registers when the user of the hardware block changes.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 226 (Sensitive Information in Resource Not Removed Before Reuse) > 1342 (Information Exposure through Microarchitectural State after Transient Execution)
The processor does not properly clear microarchitectural state after incorrect microcode assists or speculative execution, resulting in transient execution.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1246 (Improper Write Handling in Limited-write Non-Volatile Memories)
The product does not implement or incorrectly implements wear leveling operations in limited-write non-volatile memories.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1251 (Mirrored Regions with Different Values)
The product's architecture mirrors regions without ensuring that their contents always stay in sync.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1257 (Improper Access Control Applied to Mirrored or Aliased Memory Regions)
Aliased or mirrored memory regions in hardware designs may have inconsistent read/write permissions enforced by the hardware. A possible result is that an untrusted agent is blocked from accessing a memory region but is not blocked from accessing the corresponding aliased memory region.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1282 (Assumed-Immutable Data is Stored in Writable Memory)
Immutable data, such as a first-stage bootloader, device identifiers, and "write-once" configuration settings are stored in writable memory that can be re-programmed or updated in the field.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1420 (Exposure of Sensitive Information during Transient Execution)
A processor event or prediction may allow incorrect operations (or correct operations with incorrect data) to execute transiently, potentially exposing data over a covert channel.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1421 (Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution)
A processor event may allow transient operations to access architecturally restricted data (for example, in another address space) in a shared microarchitectural structure (for example, a CPU cache), potentially exposing the data over a covert channel.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1422 (Exposure of Sensitive Information caused by Incorrect Data Forwarding during Transient Execution)
A processor event or prediction may allow incorrect or stale data to be forwarded to transient operations, potentially exposing data over a covert channel.
1194 (Hardware Design) > 1202 (Memory and Storage Issues) > 1420 (Exposure of Sensitive Information during Transient Execution) > 1423 (Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution)
Shared microarchitectural predictor state may allow code to influence transient execution across a hardware boundary, potentially exposing data that is accessible beyond the boundary over a covert channel.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems)
Weaknesses in this category are related to hardware security problems that apply to peripheral devices, IO interfaces, on-chip interconnects, network-on-chip (NoC), and buses. For example, this category includes issues related to design of hardware interconnect and/or protocols such as PCIe, USB, SMBUS, general-purpose IO pins, and user-input peripherals such as mouse and keyboard.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems) > 1311 (Improper Translation of Security Attributes by Fabric Bridge)
The bridge incorrectly translates security attributes from either trusted to untrusted or from untrusted to trusted when converting from one fabric protocol to another.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems) > 1312 (Missing Protection for Mirrored Regions in On-Chip Fabric Firewall)
The firewall in an on-chip fabric protects the main addressed region, but it does not protect any mirrored memory or memory-mapped-IO (MMIO) regions.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems) > 1315 (Improper Setting of Bus Controlling Capability in Fabric End-point)
The bus controller enables bits in the fabric end-point to allow responder devices to control transactions on the fabric.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems) > 1316 (Fabric-Address Map Allows Programming of Unwarranted Overlaps of Protected and Unprotected Ranges)
The address map of the on-chip fabric has protected and unprotected regions overlapping, allowing an attacker to bypass access control to the overlapping portion of the protected region.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems) > 1317 (Improper Access Control in Fabric Bridge)
The product uses a fabric bridge for transactions between two Intellectual Property (IP) blocks, but the bridge does not properly perform the expected privilege, identity, or other access control checks between those IP blocks.
1194 (Hardware Design) > 1203 (Peripherals, On-chip Fabric, and Interface/IO Problems) > 1331 (Improper Isolation of Shared Resources in Network On Chip (NoC))
The Network On Chip (NoC) does not isolate or incorrectly isolates its on-chip-fabric and internal resources such that they are shared between trusted and untrusted agents, creating timing channels.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues)
Weaknesses in this category are related to hardware implementations of cryptographic protocols and other hardware-security primitives such as physical unclonable functions (PUFs) and random number generators (RNGs).
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 203 (Observable Discrepancy)
The product behaves differently or sends different responses under different circumstances in a way that is observable to an unauthorized actor, which exposes security-relevant information about the state of the product, such as whether a particular operation was successful or not. (alternate term: Side Channel Attack)
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 203 (Observable Discrepancy) > 1300 (Improper Protection of Physical Side Channels)
The device does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 325 (Missing Cryptographic Step)
The product does not implement a required step in a cryptographic algorithm, resulting in weaker encryption than advertised by the algorithm.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 1240 (Use of a Cryptographic Primitive with a Risky Implementation)
To fulfill the need for a cryptographic primitive, the product implements a cryptographic algorithm using a non-standard, unproven, or disallowed/non-compliant cryptographic implementation.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 1241 (Use of Predictable Algorithm in Random Number Generator)
The device uses an algorithm that is predictable and generates a pseudo-random number.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 1279 (Cryptographic Operations are run Before Supporting Units are Ready)
Performing cryptographic operations without ensuring that the supporting inputs are ready to supply valid data may compromise the cryptographic result.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 1351 (Improper Handling of Hardware Behavior in Exceptionally Cold Environments)
A hardware device, or the firmware running on it, is missing or has incorrect protection features to maintain goals of security primitives when the device is cooled below standard operating temperatures.
1194 (Hardware Design) > 1205 (Security Primitives and Cryptography Issues) > 1431 (Driving Intermediate Cryptographic State/Results to Hardware Module Outputs)
The product uses a hardware module implementing a cryptographic algorithm that writes sensitive information about the intermediate state or results of its cryptographic operations via one of its output wires (typically the output port containing the final result).
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns)
Weaknesses in this category are related to system power, voltage, current, temperature, clocks, system state saving/restoring, and resets at the platform and SoC level.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1232 (Improper Lock Behavior After Power State Transition)
Register lock bit protection disables changes to system configuration once the bit is set. Some of the protected registers or lock bits become programmable after power state transitions (e.g., entry into and wake from low-power sleep modes), causing the system configuration to be changeable.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1247 (Improper Protection Against Voltage and Clock Glitches)
The device does not contain or contains incorrectly implemented circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1248 (Semiconductor Defects in Hardware Logic with Security-Sensitive Implications)
The security-sensitive hardware module contains semiconductor defects.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1255 (Comparison Logic is Vulnerable to Power Side-Channel Attacks)
A device's real time power consumption may be monitored during security token evaluation and the information gleaned may be used to determine the value of the reference token.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1256 (Improper Restriction of Software Interfaces to Hardware Features)
The product provides software-controllable device functionality for capabilities such as power and clock management, but it does not properly limit functionality that can lead to modification of hardware memory or register bits, or the ability to observe physical side channels.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1271 (Uninitialized Value on Reset for Registers Holding Security Settings)
Security-critical logic is not set to a known value on reset.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1304 (Improperly Preserved Integrity of Hardware Configuration State During a Power Save/Restore Operation)
The product performs a power save/restore operation, but it does not ensure that the integrity of the configuration state is maintained and/or verified between the beginning and ending of the operation.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1314 (Missing Write Protection for Parametric Data Values)
The device does not write-protect the parametric data values for sensors that scale the sensor value, allowing untrusted software to manipulate the apparent result and potentially damage hardware or cause operational failure.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1320 (Improper Protection for Outbound Error Messages and Alert Signals)
Untrusted agents can disable alerts about signal conditions exceeding limits or the response mechanism that handles such alerts.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1332 (Improper Handling of Faults that Lead to Instruction Skips)
The device is missing or incorrectly implements circuitry or sensors that detect and mitigate the skipping of security-critical CPU instructions when they occur.
1194 (Hardware Design) > 1206 (Power, Clock, Thermal, and Reset Concerns) > 1338 (Improper Protections Against Hardware Overheating)
A hardware device is missing or has inadequate protection features to prevent overheating.
1194 (Hardware Design) > 1207 (Debug and Test Problems)
Weaknesses in this category are related to hardware debug and test interfaces such as JTAG and scan chain.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1191 (On-Chip Debug and Test Interface With Improper Access Control)
The chip does not implement or does not correctly perform access control to check whether users are authorized to access internal registers and test modes through the physical debug/test interface.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1234 (Hardware Internal or Debug Modes Allow Override of Locks)
System configuration protection may be bypassed during debug mode.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1243 (Sensitive Non-Volatile Information Not Protected During Debug)
Access to security-sensitive information stored in fuses is not limited during debug.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1244 (Internal Asset Exposed to Unsafe Debug Access Level or State)
The product uses physical debug or test interfaces with support for multiple access levels, but it assigns the wrong debug access level to an internal asset, providing unintended access to the asset from untrusted debug agents.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1258 (Exposure of Sensitive System Information Due to Uncleared Debug Information)
The hardware does not fully clear security-sensitive values, such as keys and intermediate values in cryptographic operations, when debug mode is entered.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1272 (Sensitive Information Uncleared Before Debug/Power State Transition)
The product performs a power or debug state transition, but it does not clear sensitive information that should no longer be accessible due to changes to information access restrictions.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1291 (Public Key Re-Use for Signing both Debug and Production Code)
The same public key is used for signing both debug and production code.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1295 (Debug Messages Revealing Unnecessary Information)
The product fails to adequately prevent the revealing of unnecessary and potentially sensitive system information within debugging messages.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1296 (Incorrect Chaining or Granularity of Debug Components)
The product's debug components contain incorrect chaining or granularity of debug components.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1313 (Hardware Allows Activation of Test or Debug Logic at Runtime)
During runtime, the hardware allows for test or debug logic (feature) to be activated, which allows for changing the state of the hardware. This feature can alter the intended behavior of the system and allow for alteration and leakage of sensitive data by an adversary.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 1323 (Improper Management of Sensitive Trace Data)
Trace data collected from several sources on the System-on-Chip (SoC) is stored in unprotected locations or transported to untrusted agents.
1194 (Hardware Design) > 1207 (Debug and Test Problems) > 319 (Cleartext Transmission of Sensitive Information)
The product transmits sensitive or security-critical data in cleartext in a communication channel that can be sniffed by unauthorized actors.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems)
Weaknesses in this category can arise in multiple areas of hardware design or can apply to a wide cross-section of components.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 440 (Expected Behavior Violation)
A feature, API, or function does not perform according to its specification.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1053 (Missing Documentation for Design)
The product does not have documentation that represents how it is designed.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1059 (Insufficient Technical Documentation)
The product does not contain sufficient technical or engineering documentation (whether on paper or in electronic form) that contains descriptions of all the relevant software/hardware elements of the product, such as its usage, structure, architectural components, interfaces, design, implementation, configuration, operation, etc.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1263 (Improper Physical Access Control)
The product is designed with access restricted to certain information, but it does not sufficiently protect against an unauthorized actor with physical access to these areas.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1277 (Firmware Not Updateable)
The product does not provide its users with the ability to update or patch its firmware to address any vulnerabilities or weaknesses that may be present.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1301 (Insufficient or Incomplete Data Removal within Hardware Component)
The product's data removal process does not completely delete all data and potentially sensitive information within hardware components.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1301 (Insufficient or Incomplete Data Removal within Hardware Component) > 1330 (Remanent Data Readable after Memory Erase)
Confidential information stored in memory circuits is readable or recoverable after being cleared or erased.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1329 (Reliance on Component That is Not Updateable)
The product contains a component that cannot be updated or patched in order to remove vulnerabilities or significant bugs.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1357 (Reliance on Insufficiently Trustworthy Component)
The product is built from multiple separate components, but it uses a component that is not sufficiently trusted to meet expectations for security, reliability, updateability, and maintainability.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1357 (Reliance on Insufficiently Trustworthy Component) > 1329 (Reliance on Component That is Not Updateable)
The product contains a component that cannot be updated or patched in order to remove vulnerabilities or significant bugs.
1194 (Hardware Design) > 1208 (Cross-Cutting Problems) > 1429 (Missing Security-Relevant Feedback for Unexecuted Operations in Hardware Interface)
The product has a hardware interface that silently discards operations in situations for which feedback would be security-relevant, such as the timely detection of failures or attacks.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns)
Weaknesses in this category are related to concerns of physical access.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1384 (Improper Handling of Physical or Environmental Conditions)
The product does not properly handle unexpected physical or environmental conditions that occur naturally or are artificially induced.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1319 (Improper Protection against Electromagnetic Fault Injection (EM-FI))
The device is susceptible to electromagnetic fault injection attacks, causing device internal information to be compromised or security mechanisms to be bypassed.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1247 (Improper Protection Against Voltage and Clock Glitches)
The device does not contain or contains incorrectly implemented circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1261 (Improper Handling of Single Event Upsets)
The hardware logic does not effectively handle when single-event upsets (SEUs) occur.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1332 (Improper Handling of Faults that Lead to Instruction Skips)
The device is missing or incorrectly implements circuitry or sensors that detect and mitigate the skipping of security-critical CPU instructions when they occur.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1351 (Improper Handling of Hardware Behavior in Exceptionally Cold Environments)
A hardware device, or the firmware running on it, is missing or has incorrect protection features to maintain goals of security primitives when the device is cooled below standard operating temperatures.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1278 (Missing Protection Against Hardware Reverse Engineering Using Integrated Circuit (IC) Imaging Techniques)
Information stored in hardware may be recovered by an attacker with the capability to capture and analyze images of the integrated circuit using techniques such as scanning electron microscopy.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1255 (Comparison Logic is Vulnerable to Power Side-Channel Attacks)
A device's real time power consumption may be monitored during security token evaluation and the information gleaned may be used to determine the value of the reference token.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1300 (Improper Protection of Physical Side Channels)
The device does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.
1194 (Hardware Design) > 1388 (Physical Access Issues and Concerns) > 1248 (Semiconductor Defects in Hardware Logic with Security-Sensitive Implications)
The security-sensitive hardware module contains semiconductor defects.
Other
The top-level categories in this view represent commonly understood areas/terms within hardware design, and are meant to aid the user in identifying potential related weaknesses. It is possible for the same weakness to exist within multiple different categories.
Other
This view attempts to present weaknesses in a simple and intuitive way. As such, it targets a single level of abstraction. It is important to realize that not every CWE will be represented in this view. High-level class weaknesses and low-level variant weaknesses are mostly ignored. However, by exploring the weaknesses that are included, and following the defined relationships, one can find these higher- and lower-level weaknesses.
View Components
CWE-1280: Access Control Check Implemented After Asset is Accessed
A product's hardware-based access control check occurs after the asset has been accessed.
The product implements a hardware-based access control check. The asset should be accessible only after the check is successful. If, however, this operation is not atomic and the asset is accessed before the check is complete, the security of the system may be compromised.
Example 1: Assume that the module foo_bar implements a protected register. The register content is the asset. Only transactions made by user id (indicated by signal usr_id) 0x4 are allowed to modify the register contents. The signal grant_access is used to provide access. (bad code)
Example Language: Verilog
module foo_bar(data_out, usr_id, data_in, clk, rst_n);
  output reg [7:0] data_out;
  input wire [2:0] usr_id;
  input wire [7:0] data_in;
  input wire clk, rst_n;
  wire grant_access;
  always @ (posedge clk or negedge rst_n)
  begin
    if (!rst_n)
      data_out = 0;
    else
      data_out = (grant_access) ? data_in : data_out;
  end
  assign grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;
endmodule
This code uses Verilog blocking assignments for data_out and grant_access. Therefore, these assignments happen sequentially (i.e., data_out is updated to the new value first, and grant_access is updated the next cycle) and not in parallel. As a result, the asset data_out can be modified even before the access control check is complete and the grant_access signal is set. Since grant_access does not have a reset value, it will be meta-stable and will randomly settle to either 0 or 1. Flipping the order of the assignments to grant_access and data_out solves the problem. The corrected snippet is shown below. (good code)
Example Language: Verilog
always @ (posedge clk or negedge rst_n)
begin
  if (!rst_n)
    data_out = 0;
  else
  begin
    assign grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;
    data_out = (grant_access) ? data_in : data_out;
  end
end
endmodule
CWE-1282: Assumed-Immutable Data is Stored in Writable Memory
Immutable data, such as a first-stage bootloader, device identifiers, and "write-once" configuration settings are stored in writable memory that can be re-programmed or updated in the field.
Security services such as secure boot, authentication of code and data, and device attestation all require assets such as the first stage bootloader, public keys, golden hash digests, etc. which are implicitly trusted. Storing these assets in read-only memory (ROM), fuses, or one-time programmable (OTP) memory provides strong integrity guarantees and provides a root of trust for securing the rest of the system. Security is lost if assets assumed to be immutable can be modified.
Example 1: Cryptographic hash functions are commonly used to create unique fixed-length digests used to ensure the integrity of code and keys. A golden digest is stored on the device and compared to the digest computed from the data to be verified. If the digests match, the data has not been maliciously modified. If an attacker can modify the golden digest they then have the ability to store arbitrary data that passes the verification check. Hash digests used to verify public keys and early stage boot code should be immutable, with the strongest protection offered by hardware immutability.
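To make the golden-digest comparison concrete, the following sketch shows the same check in software. It is a minimal illustration, not part of the CWE entry: the class name, the SHA-256 choice, and the zeroed placeholder digest are assumptions, and on a real device the golden value would be held in ROM, fuses, or OTP memory rather than mutable storage. (informative)
Example Language: Java
import java.security.MessageDigest;

public class GoldenDigestCheck {
  // Hypothetical golden digest. On real hardware this value is burned into
  // ROM/OTP so it cannot be re-programmed in the field; the all-zero
  // placeholder here is for illustration only.
  private static final byte[] GOLDEN_DIGEST = new byte[32];

  // Returns true only if the image hashes to the golden value.
  public static boolean verifyImage(byte[] image) throws Exception {
    byte[] computed = MessageDigest.getInstance("SHA-256").digest(image);
    // MessageDigest.isEqual compares the digests in constant time.
    return MessageDigest.isEqual(GOLDEN_DIGEST, computed);
  }
}
If an attacker can overwrite the storage backing GOLDEN_DIGEST, the check passes for arbitrary images, which is precisely the failure this weakness describes.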
CWE-319: Cleartext Transmission of Sensitive Information
The product transmits sensitive or security-critical data in cleartext in a communication channel that can be sniffed by unauthorized actors.
Example 1: The following code attempts to establish a connection to a site to communicate sensitive information. (bad code)
Example Language: Java
try {
  URL u = new URL("http://www.secret.example.org/");
  HttpURLConnection hu = (HttpURLConnection) u.openConnection();
  hu.setRequestMethod("PUT");
  hu.connect();
  OutputStream os = hu.getOutputStream();
  hu.disconnect();
}
catch (IOException e) {
  //...
}
Though a connection is successfully made, the connection is unencrypted, and it is possible that all sensitive data sent to or received from the server will be read by unintended actors.
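A minimal corrected variant of this fragment, assuming the same hypothetical endpoint, simply switches the connection to TLS; the wrapping class, method name, and payload parameter are illustrative additions to make the sketch self-contained. (informative)
Example Language: Java
import java.io.IOException;
import java.io.OutputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SecureUpload {
  public static void send(byte[] payload) {
    try {
      // An https:// URL encrypts the data in transit.
      URL u = new URL("https://www.secret.example.org/");
      HttpsURLConnection hu = (HttpsURLConnection) u.openConnection();
      hu.setRequestMethod("PUT");
      hu.setDoOutput(true); // required when sending a request body
      hu.connect();
      try (OutputStream os = hu.getOutputStream()) {
        os.write(payload);
      }
      hu.disconnect();
    } catch (IOException e) {
      // handle or log the failure
    }
  }
}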
Example Language: Shell
az storage account show -g {ResourceGroupName} -n {StorageAccountName}
The JSON result might be: (bad code)
Example Language: JSON
{
    "name": "{StorageAccountName}",
    "enableHttpsTrafficOnly": false,
    "type": "Microsoft.Storage/storageAccounts"
}
The enableHttpsTrafficOnly value is set to false because the default setting for Secure transfer is Disabled. This allows cloud storage resources to successfully connect and transfer data without the use of encryption (e.g., HTTP, SMB 2.1, SMB 3.0, etc.). Azure's storage accounts can be configured to accept requests only from secure connections made over HTTPS. The secure transfer setting can be enabled using Azure's Portal (GUI) or programmatically by setting the enableHttpsTrafficOnly property to True on the storage account, such as: (good code)
Example Language: Shell
az storage account update -g {ResourceGroupName} -n {StorageAccountName} --https-only true
The change can be confirmed from the result by verifying that the enableHttpsTrafficOnly value is true: (good code)
Example Language: JSON
{
    "name": "{StorageAccountName}",
    "enableHttpsTrafficOnly": true,
    "type": "Microsoft.Storage/storageAccounts"
}
Note: secure transfer can also be enabled through Azure's Portal instead of the command line.
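At the application level, the counterpart fix to the cleartext Java example above is to use an encrypted, authenticated channel. The following C sketch uses the OpenSSL 1.1+ client API; the host name, request contents, and abbreviated error handling are illustrative only. (informative)
Example Language: C
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <openssl/ssl.h>

/* Send a request over TLS instead of cleartext HTTP. Error handling is
 * abbreviated; a production client must check every return value. */
int send_over_tls(const char *host, const char *req, size_t len)
{
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "443", &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    int connected = (fd >= 0) && (connect(fd, res->ai_addr, res->ai_addrlen) == 0);
    freeaddrinfo(res);
    if (!connected)
        return -1;

    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); /* authenticate the server */
    SSL_CTX_set_default_verify_paths(ctx);          /* use the system trust store */

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    SSL_set_tlsext_host_name(ssl, host);            /* SNI for virtual hosting */
    SSL_set1_host(ssl, host);                       /* verify certificate name */

    int ok = (SSL_connect(ssl) == 1) && (SSL_write(ssl, req, (int)len) > 0);

    SSL_shutdown(ssl);
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    close(fd);
    return ok ? 0 : -1;
}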
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Other
Applicable communication channels are not limited to software products. They include hardware-specific technologies such as internal hardware networks and external debug channels that support remote JTAG debugging. When mitigations are not applied to combat adversaries within the product's threat model, this weakness significantly lowers the difficulty of exploitation by such adversaries.
Maintenance
The Taxonomy_Mappings to ISA/IEC 62443 were added in CWE 4.10 but are still under review. These draft mappings were performed by members of the "Mapping CWE to 62443" subgroup of the CWE-CAPEC ICS/OT Special Interest Group (SIG), and their work is incomplete as of CWE 4.10. The mappings are included to facilitate discussion and review by the broader ICS/OT community and are likely to change in future CWE versions.
CWE-1255: Comparison Logic is Vulnerable to Power Side-Channel Attacks
A device's real-time power consumption may be monitored during security token evaluation, and the information gleaned may be used to determine the value of the reference token.
The power consumed by a device may be instrumented and monitored in real time. If the algorithm for evaluating security tokens is not sufficiently robust, the power consumption may vary depending on how the entered token compares against the reference value. Further, if retries are unlimited, the power difference between a "good" entry and a "bad" entry may be observed and used to determine whether each entry itself is correct, thereby allowing unauthorized parties to calculate the reference value.
Example 1 Consider an example hardware module that checks a user-provided password (or PIN) to grant access to a user. The user-provided password is compared against a stored value byte-by-byte. (bad code)
Example Language: C
static nonvolatile password_tries = NUM_RETRIES;
do
  while (password_tries == 0) ; // Hang here if no more password tries
  while (true)
    password_ok = 0;
    for (i = 0; i < NUM_PW_DIGITS; i++)
      if (GetPasswordByte() == stored_password[i])
        password_ok |= 1; // Power consumption is different here
      else
        password_ok |= 0; // than from here
    end
    if (password_ok > 0)
      password_tries = NUM_RETRIES;
      break_to_Ok_to_proceed // Password OK
    password_tries--;
Since the algorithm uses a different number of 1's and 0's for password validation, a different amount of power is consumed for the good byte versus the bad byte comparison. Using this information, an attacker may be able to guess the correct password for that byte-by-byte iteration with several repeated attempts by stopping the password evaluation before it completes. Among various options for mitigating the string comparison is obscuring the power consumption by having opposing bit flips during bit operations. Note that in this example, the initial change of the bit values could still provide a power indication depending upon the hardware itself. This possibility needs to be measured for verification. (good code)
Example Language: C
static nonvolatile password_tries = NUM_RETRIES;
do
  while (password_tries == 0) ; // Hang here if no more password tries
  while (true)
    password_tries--; // Put retry code here to catch partial retries
    password_ok = 0;
    for (i = 0; i < NUM_PW_DIGITS; i++)
      if (GetPasswordByte() == stored_password[i])
        password_ok |= 0x10; // Power consumption here
      else
        password_ok |= 0x01; // is now the same here
    end
    if ((password_ok & 1) == 0)
      password_tries = NUM_RETRIES;
      break_to_Ok_to_proceed // Password OK
Example 2 This code demonstrates the transfer of a secret key using a Serial-In/Serial-Out (SISO) shift register. It is easy to extract the secret using simple power analysis, as each shift gives data on a single bit of the key. (bad code)
Example Language: Verilog
module siso(clk,rst,a,q);
input a;
input clk,rst;
output q;
reg q;
always@(posedge clk,posedge rst)
begin
  if(rst==1'b1)
    q<=1'b0;
  else
    q<=a;
end
endmodule
This code demonstrates the transfer of a secret key using a Parallel-In/Parallel-Out (PIPO) register. In a parallel transfer, the observable power draw is confounded by multiple bits of the key at once, not just one. (good code)
Example Language: Verilog
module pipo(clk,rst,a,q);
input clk,rst;
input[3:0]a;
output[3:0]q;
reg[3:0]q;
always@(posedge clk,posedge rst)
begin
  if (rst==1'b1)
    q<=4'b0000;
  else
    q<=a;
end
endmodule
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
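The same principle carries over to firmware that performs the comparison in software: examine every byte and accumulate the differences, so that, to a first approximation, neither timing nor power depends on where the first mismatch occurs. The following is a sketch only; real implementations must also guard against compilers optimizing the loop back into an early exit. (informative)
Example Language: C
#include <stddef.h>
#include <stdint.h>

/* Compare two buffers without a data-dependent branch or early exit. */
int constant_time_equals(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]); /* every byte is always examined */
    return diff == 0;
}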
CWE CATEGORY: Core and Compute Issues
Weaknesses in this category are typically associated with CPUs, Graphics, Vision, AI, FPGA, and microcontrollers.
CWE-1252: CPU Hardware Not Configured to Support Exclusivity of Write and Execute Operations
The CPU is not configured to provide hardware support for exclusivity of write and execute operations on memory. This allows an attacker to execute data from all of memory.
CPUs provide a special bit that supports exclusivity of write and execute operations. This bit is used to segregate areas of memory into either code (instructions, which can be executed) or data (which should not be executed). In this way, if a user can write to a region of memory, the user cannot execute from that region, and vice versa. This exclusivity, provided by a special hardware bit, is leveraged by the operating system to protect executable space. While this bit is available in most modern processors by default, in some CPUs the exclusivity is implemented via a memory-protection unit (MPU) or memory-management unit (MMU), in which memory regions can be carved out with exact read, write, and execute permissions. However, if the CPU does not have an MMU/MPU, then there is no write exclusivity. Without configuring exclusivity of operations via segregated areas of memory, an attacker may be able to inject malicious code into memory and later execute it.
Example 1 The MCS51 Microcontroller (based on the 8051) does not have a special bit to support write exclusivity, nor does it have MMU/MPU support. The Cortex-M CPU has an optional MPU that supports up to 8 regions. (bad code)
Example Language: Other
The optional MPU is not configured.
If the MPU is not configured, then an attacker will be able to inject malicious data into memory and execute it.
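For designs that do include an MPU, configuring it is what establishes write/execute exclusivity. The following C sketch shows the general shape of marking a RAM region eXecute-Never on an ARMv7-M (Cortex-M) MPU; the region base, size, and region number are illustrative, and a real driver should follow the core's reference manual. (informative)
Example Language: C
#include <stdint.h>

/* ARMv7-M MPU registers; simplified sketch, not a drop-in driver. */
#define MPU_CTRL (*(volatile uint32_t *)0xE000ED94u)
#define MPU_RNR  (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR (*(volatile uint32_t *)0xE000EDA0u)

#define RASR_ENABLE     (1u << 0)
#define RASR_SIZE_64KB  (15u << 1)  /* region size = 2^(SIZE+1) bytes */
#define RASR_AP_RW      (3u << 24)  /* privileged and unprivileged read/write */
#define RASR_XN         (1u << 28)  /* eXecute Never */
#define CTRL_ENABLE     (1u << 0)
#define CTRL_PRIVDEFENA (1u << 2)   /* default map for privileged code */

/* Mark a hypothetical 64KB RAM region at 0x20000000 writable but not
 * executable, so injected data cannot be run (W^X). */
void mpu_make_ram_nx(void)
{
    MPU_RNR  = 0;           /* select region 0 */
    MPU_RBAR = 0x20000000u; /* size-aligned region base */
    MPU_RASR = RASR_XN | RASR_AP_RW | RASR_SIZE_64KB | RASR_ENABLE;
    MPU_CTRL = CTRL_ENABLE | CTRL_PRIVDEFENA;
    __asm__ volatile ("dsb\n\tisb"); /* ensure the new mapping takes effect */
}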
CWE CATEGORY: Cross-Cutting Problems
Weaknesses in this category can arise in multiple areas of hardware design or can apply to a wide cross-section of components.
CWE-1279: Cryptographic Operations are run Before Supporting Units are Ready
Performing cryptographic operations without ensuring that the supporting inputs are ready to supply valid data may compromise the cryptographic result.
Many cryptographic hardware units depend upon other hardware units to supply information to them to produce a securely encrypted result. For example, a cryptographic unit that depends on an external random-number-generator (RNG) unit for entropy must wait until the RNG unit is producing random numbers. If a cryptographic unit retrieves a private encryption key from a fuse unit, the fuse unit must be up and running before a key may be supplied.
Example 1 The following pseudocode illustrates the weak encryption resulting from the use of a pseudo-random-number generator output. (bad code)
Example Language: Pseudocode
If random_number_generator_self_test_passed() == TRUE
  then Seed = get_random_number_from_RNG()
  else Seed = hardcoded_number
In the example above, a check that the RNG is ready is performed first. If the check fails, the RNG is ignored and a hardcoded value is used instead. The hardcoded value severely weakens the encrypted output. (good code)
Example Language: Pseudocode
If random_number_generator_self_test_passed() == TRUE
  then Seed = get_random_number_from_RNG()
  else enter_error_state()
CWE CATEGORY: Debug and Test Problems
Weaknesses in this category are related to hardware debug and test interfaces such as JTAG and scan chain.
CWE-1295: Debug Messages Revealing Unnecessary Information
The product fails to adequately prevent the revealing of unnecessary and potentially sensitive system information within debugging messages.
Debug messages are messages that help troubleshoot an issue by revealing the internal state of the system. For example, debug data in a design can be exposed through internal memory array dumps or boot logs through interfaces like UART via TAP commands, scan chain, etc. Thus, the more information contained in a debug message, the easier it is to debug. However, there is also the risk of revealing information that could help an attacker decipher a vulnerability or gain a better understanding of the system. This extra information could lower the "security by obscurity" factor. While "security by obscurity" alone is insufficient, it can help as part of defense-in-depth.
Example 1 This example shows how an attacker can take advantage of unnecessary information in debug messages. Scenario 1: Suppose that, in response to a Test Access Port (TAP) chaining request, the debug message reveals the current TAP hierarchy (the full topology) in addition to the success/failure message. Scenario 2: In response to a password-filling request, the debug message, instead of a simple Granted/Denied response, prints an elaborate message: "The user-entered password does not match the actual password stored in <directory name>." In both scenarios, the user is able to gather additional, unauthorized information about the system from the debug messages. The solution is to ensure that debug messages do not reveal additional details. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
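To make the second scenario above concrete, the following sketch contrasts a leaky debug message with a terse one; the logging helper, file path, and message text are hypothetical. (informative)
Example Language: C
#include <stdio.h>
#include <string.h>

/* Hypothetical debug hook; real firmware would gate this on a debug build. */
static void debug_log(const char *msg) { fprintf(stderr, "DBG: %s\n", msg); }

static int check_password(const char *entered, const char *stored)
{
    if (strcmp(entered, stored) != 0) {
        /* Bad: reveals where the reference credential is stored, e.g.
         * debug_log("mismatch vs. credential in /mnt/secure/cred.bin"); */
        debug_log("authentication: denied"); /* Good: outcome only */
        return 0;
    }
    debug_log("authentication: granted");
    return 1;
}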
CWE-1273: Device Unlock Credential Sharing
The credentials necessary for unlocking a device are shared across multiple parties and may expose sensitive information.
"Unlocking a device" often means activating certain unadvertised debug and manufacturer-specific capabilities of a device using sensitive credentials. Unlocking a device might be necessary for the purpose of troubleshooting device problems. For example, suppose a device contains the ability to dump the content of the full system memory by disabling the memory-protection mechanisms. Since this is a highly security-sensitive capability, this capability is "locked" in the production part. Unless the device gets unlocked by supplying the proper credentials, the debug capabilities are not available. For cases where the chip designer, chip manufacturer (fabricator), and manufacturing and assembly testers are all employed by the same company, the risk of compromise of the credentials is greatly reduced. However, the risk is greater when the chip designer is employed by one company, the chip manufacturer is employed by another company (a foundry), and the assemblers and testers are employed by yet a third company. Since these different companies will need to perform various tests on the device to verify correct device function, they all need to share the unlock key. Unfortunately, the level of secrecy and policy might be quite different at each company, greatly increasing the risk of sensitive credentials being compromised. ![]()
Example 1 This example shows how an attacker can take advantage of compromised credentials. (bad code)
Example Language: Other
Suppose a semiconductor chipmaker, "C", uses the foundry "F" for fabricating its chips. Now, F has many other customers in addition to C, and some of the other customers are much smaller companies. F has dedicated teams for each of its customers, but somehow it mixes up the unlock credentials and sends the unlock credentials of C to the wrong team. This other team does not take adequate precautions to protect the credentials that have nothing to do with them, and eventually the unlock credentials of C get leaked.
When the credentials of multiple organizations are stored together, exposure to third parties occurs frequently. (good code)
Example Language: Other
Vertical integration of a production company is one effective method of protecting sensitive credentials. Where vertical integration is not possible, strict access control and need-to-know are methods which can be implemented to reduce these risks.
Maintenance
This entry is still under development and will continue to see updates and content improvements.
CWE-1190: DMA Device Enabled Too Early in Boot Phase
The product enables a Direct Memory Access (DMA) capable device before the security configuration settings are established, which allows an attacker to extract data from or gain privileges on the product.
DMA is included in a number of devices because it allows data transfer between the computer and the connected device, using direct hardware access to read or write main memory without any OS interaction. An attacker could exploit this to access secrets. Several virtualization-based mitigations have been introduced to thwart DMA attacks; these are usually configured and set up during boot time. However, certain IPs that are powered up before boot is complete (known as early-boot IPs) may be DMA capable. Such IPs, if not trusted, could launch DMA attacks and gain access to assets that should otherwise be protected.
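The defensive ordering can be summarized in boot code: establish and lock DMA protections before any bus-mastering device is turned on. All function names in this sketch are hypothetical placeholders for platform-specific operations. (informative)
Example Language: C
/* Placeholder prototypes for platform-specific operations. */
void configure_iommu_windows(void);    /* define the memory a device may touch  */
void lock_dma_protection_config(void); /* prevent later tampering with settings */
void enable_early_boot_dma_ips(void);  /* power up DMA-capable early-boot IPs   */

/* Hypothetical boot-time ordering: protections first, DMA devices second. */
void platform_early_init(void)
{
    configure_iommu_windows();
    lock_dma_protection_config();
    enable_early_boot_dma_ips(); /* only now is bus mastering allowed */
}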
CWE-1431: Driving Intermediate Cryptographic State/Results to Hardware Module Outputs
The product uses a hardware module implementing a cryptographic algorithm that writes sensitive information about the intermediate state or results of its cryptographic operations via one of its output wires (typically the output port containing the final result).
Example 1 The following SystemVerilog code is a crypto module that takes input data and encrypts it by processing the data through multiple encryption rounds. Note: this example is derived from [REF-1469]. (bad code)
Example Language: Verilog
01 | module crypto_core_with_leakage
02 | (
03 |   input clk,
04 |   input rst,
05 |   input [127:0] data_i,
06 |   output [127:0] data_o,
07 |   output valid
08 | );
09 |
10 | localparam int total_rounds = 10;
11 | logic [3:0] round_id_q;
12 | logic [127:0] data_state_q, data_state_d;
13 | logic [127:0] key_state_q, key_state_d;
14 |
15 | crypto_algo_round u_algo_round (
16 |   .clk (clk),
17 |   .rst (rst),
18 |   .round_i (round_id_q ),
19 |   .key_i (key_state_q ),
20 |   .data_i (data_state_q),
21 |   .key_o (key_state_d ),
22 |   .data_o (data_state_d)
23 | );
24 |
25 | always @(posedge clk) begin
26 |   if (rst) begin
27 |     data_state_q <= 0;
28 |     key_state_q <= 0;
29 |     round_id_q <= 0;
30 |   end
31 |   else begin
32 |     case (round_id_q)
33 |       total_rounds: begin
34 |         data_state_q <= 0;
35 |         key_state_q <= 0;
36 |         round_id_q <= 0;
37 |       end
38 |
39 |       default: begin
40 |         data_state_q <= data_state_d;
41 |         key_state_q <= key_state_d;
42 |         round_id_q <= round_id_q + 1;
43 |       end
44 |     endcase
45 |   end
46 | end
47 |
48 | assign valid = (round_id_q == total_rounds) ? 1'b1 : 1'b0;
49 |
50 | assign data_o = data_state_q;
51 |
52 | endmodule
In line 50 above, data_state_q is assigned to data_o. Since data_state_q contains intermediate state/results, this allows an attacker to obtain these results through data_o.
In line 50 of the fixed logic below, while "data_state_q" does not contain the final result, a "sanitizing" mechanism drives a safe default value (i.e., 0) to "data_o" instead of the value of "data_state_q". In doing so, the mechanism prevents the exposure of intermediate state/results which could be used to break soundness of the cryptographic operation being performed. A real-world example of this weakness and mitigation can be seen in a pull request that was submitted to the OpenTitan Github repository [REF-1469]. (good code)
Example Language: Verilog
01 | module crypto_core_without_leakage
02 | (
03 |   input clk,
04 |   input rst,
05 |   input [127:0] data_i,
06 |   output [127:0] data_o,
07 |   output valid
08 | );
09 |
10 | localparam int total_rounds = 10;
11 | logic [3:0] round_id_q;
12 | logic [127:0] data_state_q, data_state_d;
13 | logic [127:0] key_state_q, key_state_d;
14 |
15 | crypto_algo_round u_algo_round (
16 |   .clk (clk),
17 |   .rst (rst),
18 |   .round_i (round_id_q ),
19 |   .key_i (key_state_q ),
20 |   .data_i (data_state_q),
21 |   .key_o (key_state_d ),
22 |   .data_o (data_state_d)
23 | );
24 |
25 | always @(posedge clk) begin
26 |   if (rst) begin
27 |     data_state_q <= 0;
28 |     key_state_q <= 0;
29 |     round_id_q <= 0;
30 |   end
31 |   else begin
32 |     case (round_id_q)
33 |       total_rounds: begin
34 |         data_state_q <= 0;
35 |         key_state_q <= 0;
36 |         round_id_q <= 0;
37 |       end
38 |
39 |       default: begin
40 |         data_state_q <= data_state_d;
41 |         key_state_q <= key_state_d;
42 |         round_id_q <= round_id_q + 1;
43 |       end
44 |     endcase
45 |   end
46 | end
47 |
48 | assign valid = (round_id_q == total_rounds) ? 1'b1 : 1'b0;
49 |
50 | assign data_o = (valid) ? data_state_q : 0;
51 |
52 | endmodule
CWE-440: Expected Behavior Violation
A feature, API, or function does not perform according to its specification.
Example 1 The provided code is extracted from the Control and Status Register (CSR) module, csr_regfile, within the Hack@DAC'21 OpenPiton System-on-Chip (SoC). This module is designed to implement CSR registers in accordance with the RISC-V specification. The mie (machine interrupt enable) register is a 64-bit register [REF-1384] whose bits correspond to different interrupt sources. As the name suggests, mie is a machine-level register that determines which interrupts are enabled. Note that in the example below the mie_q and mie_d registers represent the conceptual mie register in the RISC-V specification: the mie_d register is the value to be stored in the mie register, while the mie_q register holds the current value of the mie register [REF-1385].
The mideleg (machine interrupt delegation) register, also 64 bits wide, enables the delegation of specific interrupt sources from machine privilege mode to lower privilege levels. By setting specific bits in the mideleg register, the handling of certain interrupts can be delegated to lower privilege levels without engaging machine-level privilege mode. For example, in supervisor mode, the mie register is limited to a specific register called the sie (supervisor interrupt enable) register. If delegated, an interrupt becomes visible in the sip (supervisor interrupt pending) register and can be enabled or blocked using the sie register. If no delegation occurs, the related bits in sip and sie are set to zero. The sie register value is computed from the current value of the mie register, i.e., mie_q, and the mideleg register. (bad code)
Example Language: Verilog
module csr_regfile #(...)(...);
  ...
  // ---------------------------
  // CSR Write and update logic
  // ---------------------------
  ...
  if (csr_we) begin
    unique case (csr_addr.address)
      ...
      riscv::CSR_SIE: begin
        // the mideleg makes sure only delegate-able registers
        // (and therefore also only implemented registers) are written
        mie_d = (mie_q & ~mideleg_q) | (csr_wdata & mideleg_q) | utval_q;
      end
      ...
    endcase
  end
  ...
endmodule
The above code snippet illustrates an instance of a vulnerable implementation of the sie register update logic, where users can tamper with the mie_d register value through the utval (user trap value) register. This behavior violates the RISC-V specification. The code shows that the value of utval, among other signals, is used in updating the mie_d value within the sie update logic. While utval is a register accessible to users, it should not influence or compromise the integrity of sie. Through manipulation of the utval register, it becomes feasible to manipulate the sie register's value. This opens the door for potential attacks, as an adversary can gain control over or corrupt the sie value. Consequently, such manipulation empowers an attacker to enable or disable critical supervisor-level interrupts, resulting in various security risks such as privilege escalation or denial-of-service attacks. A fix for this issue is to remove utval from the right-hand side of the assignment. That is, the value of mie_d should be updated as shown in the good code example [REF-1386]. (good code)
Example Language: Verilog
module csr_regfile #(...)(...);
  ...
  // ---------------------------
  // CSR Write and update logic
  // ---------------------------
  ...
  if (csr_we) begin
    unique case (csr_addr.address)
      ...
      riscv::CSR_SIE: begin
        // the mideleg makes sure only delegate-able registers
        // (and therefore also only implemented registers) are written
        mie_d = (mie_q & ~mideleg_q) | (csr_wdata & mideleg_q);
      end
      ...
    endcase
  end
  ...
endmodule
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Theoretical
The behavior of an application that is not consistent with the expectations of the developer may lead to incorrect use of the software.
CWE-1422: Exposure of Sensitive Information caused by Incorrect Data Forwarding during Transient Execution
A processor event or prediction may allow incorrect or stale data to be forwarded to transient operations, potentially exposing data over a covert channel.
Software may use a variety of techniques to preserve the confidentiality of private data that is accessible within the current processor context. For example, the memory safety and type safety properties of some high-level programming languages help to prevent software written in those languages from exposing private data. As a second example, software sandboxes may co-locate multiple users' software within a single process. The processor's Instruction Set Architecture (ISA) may permit one user's software to access another user's data (because the software shares the same address space), but the sandbox prevents these accesses by using software techniques such as bounds checking.
If incorrect or stale data can be forwarded (for example, from a cache) to transient operations, then the operations' microarchitectural side effects may correspond to the data. If an attacker can trigger these transient operations and observe their side effects through a covert channel, then the attacker may be able to infer the data. For example, an attacker process may induce transient execution in a victim process that causes the victim to inadvertently access and then expose its private data via a covert channel. In the software sandbox example, an attacker sandbox may induce transient execution in its own code, allowing it to transiently access and expose data in a victim sandbox that shares the same address space.
Consequently, weaknesses that arise from incorrect/stale data forwarding might violate users' expectations of software-based memory safety and isolation techniques. If the data forwarding behavior is not properly documented by the hardware vendor, this might violate the software vendor's expectation of how the hardware should behave.
Example 1 Faulting loads in a victim domain may trigger incorrect transient forwarding, which leaves secret-dependent traces in the microarchitectural state. Consider this code sequence example from [REF-1391]. (bad code)
Example Language: C
void call_victim(size_t untrusted_arg) {
    *arg_copy = untrusted_arg;
    array[**trusted_ptr * 4096];
}
A processor with this weakness will store the value of untrusted_arg (which may be provided by an attacker) to the stack, which is trusted memory. Additionally, this store operation will save this value in some microarchitectural buffer, for example, the store buffer. In this code sequence, trusted_ptr is dereferenced while the attacker forces a page fault. The faulting load causes the processor to mis-speculate by forwarding untrusted_arg as the (transient) load result. The processor then uses untrusted_arg for the pointer dereference. After the fault has been handled and the load has been re-issued with the correct argument, secret-dependent information stored at the address of trusted_ptr remains in microarchitectural state and can be extracted by an attacker using a vulnerable code sequence.
Example 2 Some processors try to predict when a store will forward data to a subsequent load, even when the address of the store or the load is not yet known. For example, on Intel processors this feature is called a Fast Store Forwarding Predictor [REF-1392], and on AMD processors the feature is called Predictive Store Forwarding [REF-1393]. A misprediction can cause incorrect or stale data to be forwarded from a store to a load, as illustrated in the following code snippet from [REF-1393]: (bad code)
Example Language: C
void fn(int idx) {
    unsigned char v;
    idx_array[0] = 4096;
    v = array[idx_array[idx] * (idx)];
}
In this example, assume that the parameter idx can only be 0 or 1, and assume that idx_array initially contains all 0s. Observe that the assignment to v in line 4 will be array[0], regardless of whether idx=0 or idx=1. Now suppose that an attacker repeatedly invokes fn with idx=0 to train the store forwarding predictor to predict that the store in line 3 will forward the data 4096 to the load idx_array[idx] in line 4. Then, when the attacker invokes fn with idx=1, the predictor may cause idx_array[idx] to transiently produce the incorrect value 4096, and therefore v will transiently be assigned the value array[4096], which otherwise would not have been accessible in line 4. Although this toy example is benign (it doesn't transmit array[4096] over a covert channel), an attacker may be able to use similar techniques to craft and train malicious code sequences to, for example, read data beyond a software sandbox boundary. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1423: Exposure of Sensitive Information caused by Shared Microarchitectural Predictor State that Influences Transient Execution
Shared microarchitectural predictor state may allow code to influence transient execution across a hardware boundary, potentially exposing data that is accessible beyond the boundary over a covert channel.
Many commodity processors have Instruction Set Architecture (ISA) features that protect software components from one another. These features can include memory segmentation, virtual memory, privilege rings, trusted execution environments, and virtual machines, among others. For example, virtual memory provides each process with its own address space, which prevents processes from accessing each other's private data. Many of these features can be used to form hardware-enforced security boundaries between software components.
When separate software components (for example, two processes) share microarchitectural predictor state across a hardware boundary, code in one component may be able to influence microarchitectural predictor behavior in another component. If the predictor can cause transient execution, the shared predictor state may allow an attacker to influence transient execution in a victim, in a manner that could allow the attacker to infer private data from the victim by monitoring observable discrepancies (CWE-203) in a covert channel [REF-1400].
Predictor state may be shared when the processor transitions from one component to another (for example, when a process makes a system call to enter the kernel). Many commodity processors have features which prevent microarchitectural predictions that occur before a boundary from influencing predictions that occur after the boundary.
Predictor state may also be shared between hardware threads, for example, sibling hardware threads on a processor that supports simultaneous multithreading (SMT). This sharing may be benign if the hardware threads are simultaneously executing in the same software component, or it could expose a weakness if one sibling is a malicious software component and the other sibling is a victim software component. Processors that share microarchitectural predictors between hardware threads may have features which prevent microarchitectural predictions that occur on one hardware thread from influencing predictions that occur on another hardware thread.
Features that restrict predictor state sharing across transitions or between hardware threads may be always-on, on by default, or may require opt-in from software.
Example 1 Branch Target Injection (BTI) is a vulnerability that can allow an SMT hardware thread to maliciously train the indirect branch predictor state that is shared with its sibling hardware thread. A cross-thread BTI attack requires the attacker to find a vulnerable code sequence within the victim software. For example, the authors of [REF-1415] identified the following code sequence in the Windows library ntdll.dll: (bad code)
Example Language: x86 Assembly
adc edi,dword ptr [ebx+edx+13BE13BDh]
adc dl,byte ptr [edi]
...
indirect_branch_site:
jmp dword ptr [rsi]  # at this point attacker knows edx, controls edi and ebx
To successfully exploit this code sequence to disclose the victim's private data, the attacker must also be able to find an indirect branch site within the victim, where the attacker controls the values in edi and ebx, and the attacker knows the value in edx, as shown above at the indirect branch site. In outline, a proof-of-concept cross-thread BTI attack trains the shared indirect branch predictor from the attacker's sibling hardware thread so that the victim's branch at indirect_branch_site transiently jumps to the vulnerable code sequence, whose secret-dependent accesses can then be observed over a covert channel.
Example 2 BTI can also allow software in one execution context to maliciously train branch predictor entries that can be used in another context. For example, on some processors user-mode software may be able to train predictor entries that can also be used after transitioning into kernel mode, such as after invoking a system call. This vulnerability does not necessarily require SMT and may instead be performed in synchronous steps, though it does require the attacker to find an exploitable code sequence in the victim's code, for example, in the kernel. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1420: Exposure of Sensitive Information during Transient Execution
A processor event or prediction may allow incorrect operations (or correct operations with incorrect data) to execute transiently, potentially exposing data over a covert channel.
When operations execute but do not commit to the processor's architectural state, this is commonly referred to as transient execution. This behavior can occur when the processor mis-predicts an outcome (such as a branch target), or when a processor event (such as an exception or microcode assist) is handled after younger operations have already executed. Operations that execute transiently may exhibit observable discrepancies (CWE-203) in covert channels [REF-1400] such as data caches. Observable discrepancies of this kind can be detected and analyzed using timing or power analysis techniques, which may allow an attacker to infer information about the operations that executed transiently. For example, the attacker may be able to infer confidential data that was accessed or used by those operations.
Transient execution weaknesses may be exploited using one of two methods. In the first method, the attacker generates a code sequence that exposes data through a covert channel when it is executed transiently (the attacker must also be able to trigger transient execution). Some transient execution weaknesses can only expose data that is accessible within the attacker's processor context. For example, an attacker executing code in a software sandbox may be able to use a transient execution weakness to expose data within the same address space, but outside of the attacker's sandbox. Other transient execution weaknesses can expose data that is architecturally inaccessible, that is, data protected by hardware-enforced boundaries such as page tables or privilege rings. These weaknesses are the subject of CWE-1421.
In the second exploitation method, the attacker first identifies a code sequence in a victim program that, when executed transiently, can expose data that is architecturally accessible within the victim's processor context. For instance, the attacker may search the victim program for code sequences that resemble a bounds-check bypass sequence (see Demonstrative Example 1). If the attacker can trigger a mis-prediction of the conditional branch and influence the index of the out-of-bounds array access, then the attacker may be able to infer the value of out-of-bounds data by monitoring observable discrepancies in a covert channel.
Example 1 Secure programs perform bounds checking before accessing an array if the source of the array index is provided by an untrusted source such as user input. In the code below, data from array1 will not be accessed if x is out of bounds. The following code snippet is from [REF-1415]: (bad code)
Example Language: C
if (x < array1_size)
    y = array2[array1[x] * 4096];
However, if this code executes on a processor that performs conditional branch prediction the outcome of the if statement could be mis-predicted and the access on the next line will occur with a value of x that can point to an out-of-bounds location (within the program's memory). Even though the processor does not commit the architectural effects of the mis-predicted branch, the memory accesses alter data cache state, which is not rolled back after the branch is resolved. The cache state can reveal array1[x] thereby providing a mechanism to recover the data value located at address array1 + x. Example 2 Some managed runtimes or just-in-time (JIT) compilers may overwrite recently executed code with new code. When the instruction pointer enters the new code, the processor may inadvertently execute the stale code that had been overwritten. This can happen, for instance, when the processor issues a store that overwrites a sequence of code, but the processor fetches and executes the (stale) code before the store updates memory. Similar to the first example, the processor does not commit the stale code's architectural effects, though microarchitectural side effects can persist. Hence, confidential information accessed or used by the stale code may be inferred via an observable discrepancy in a covert channel. This vulnerability is described in more detail in [REF-1427]. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
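One widely deployed software mitigation for the bounds-check-bypass pattern in Example 1 is to clamp the index with a branchless mask, in the spirit of the Linux kernel's array_index_nospec() helper, so that even a transiently mis-predicted bounds check yields an in-bounds address. The following is a sketch that reuses the names from the snippet above. (informative)
Example Language: C
#include <stddef.h>
#include <stdint.h>

/* All-ones mask when x < size, zero otherwise, computed without a branch. */
static inline size_t clamp_index(size_t x, size_t size)
{
    size_t mask = (size_t)0 - (size_t)(x < size);
    return x & mask; /* an out-of-bounds x collapses to index 0 */
}

uint8_t lookup(const uint8_t *array1, size_t array1_size,
               const uint8_t *array2, size_t x)
{
    uint8_t y = 0;
    if (x < array1_size)
        y = array2[array1[clamp_index(x, array1_size)] * 4096];
    return y;
}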
CWE-1421: Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution
A processor event may allow transient operations to access architecturally restricted data (for example, in another address space) in a shared microarchitectural structure (for example, a CPU cache), potentially exposing the data over a covert channel.
Many commodity processors have Instruction Set Architecture (ISA) features that protect software components from one another. These features can include memory segmentation, virtual memory, privilege rings, trusted execution environments, and virtual machines, among others. For example, virtual memory provides each process with its own address space, which prevents processes from accessing each other's private data. Many of these features can be used to form hardware-enforced security boundaries between software components.
Many commodity processors also share microarchitectural resources that cache (temporarily store) data, which may be confidential. These resources may be shared across processor contexts, including across SMT threads, privilege rings, or others.
When transient operations allow access to ISA-protected data in a shared microarchitectural resource, this might violate users' expectations of the ISA feature that is bypassed. For example, if transient operations can access a victim's private data in a shared microarchitectural resource, then the operations' microarchitectural side effects may correspond to the accessed data. If an attacker can trigger these transient operations and observe their side effects through a covert channel [REF-1400], then the attacker may be able to infer the victim's private data. Private data could include sensitive program data, OS/VMM data, page table data (such as memory addresses), system configuration data (see Demonstrative Example 3), or any other data that the attacker does not have the required privileges to access.
Example 1 Some processors may perform access control checks in parallel with memory read/write operations. For example, when a user-mode program attempts to read data from memory, the processor may also need to check whether the memory address is mapped into user space or kernel space. If the processor performs the access concurrently with the check, then the access may be able to transiently read kernel data before the check completes. This race condition is demonstrated in the following code snippet from [REF-1408], with additional annotations: (bad code)
Example Language: x86 Assembly
1 ; rcx = kernel address, rbx = probe array
2 xor rax, rax               # set rax to 0
3 retry:
4 mov al, byte [rcx]         # attempt to read kernel memory
5 shl rax, 0xc               # multiply result by page size (4KB)
6 jz retry                   # if the result is zero, try again
7 mov rbx, qword [rbx + rax] # transmit result over a cache covert channel
Vulnerable processors may return kernel data from a shared microarchitectural resource in line 4, for example, from the processor's L1 data cache. Since this vulnerability involves a race condition, the mov in line 4 may not always return kernel data (that is, whenever the check "wins" the race), in which case this demonstration code re-attempts the access in line 6. The accessed data is multiplied by 4KB, a common page size, to make it easier to observe via a cache covert channel after the transmission in line 7. The use of cache covert channels to observe the side effects of transient execution has been described in [REF-1408].
Example 2 Many commodity processors share microarchitectural fill buffers between sibling hardware threads on simultaneous multithreaded (SMT) processors. Fill buffers can serve as temporary storage for data that passes to and from the processor's caches. Microarchitectural Fill Buffer Data Sampling (MFBDS) is a vulnerability that can allow a hardware thread to access its sibling's private data in a shared fill buffer. The access may be prohibited by the processor's ISA, but MFBDS can allow the access to occur during transient execution, in particular during a faulting operation or an operation that triggers a microcode assist. More information on MFBDS can be found in [REF-1405] and [REF-1409].
Example 3 Some processors may allow access to system registers (for example, system coprocessor registers or model-specific registers) during transient execution. This scenario is depicted in the code snippet below. Under ordinary operating circumstances, code in exception level 0 (EL0) is not permitted to access registers that are restricted to EL1, such as TTBR0_EL1. However, on some processors an earlier mis-prediction can cause the MRS instruction to transiently read the value in an EL1 register. In this example, a conditional branch (line 2) can be mis-predicted as "not taken" while waiting for a slow load (line 1). This allows MRS (line 3) to transiently read the value in the TTBR0_EL1 register. The subsequent memory access (line 6) can allow the restricted register's value to become observable, for example, over a cache covert channel. The code snippet is from [REF-1410]. See also [REF-1411]. (bad code)
Example Language: ARM Assembly
1 LDR X1, [X2]       ; arranged to miss in the cache
2 CBZ X1, over       ; This will be taken
3 MRS X3, TTBR0_EL1
4 LSL X3, X3, #imm
5 AND X3, X3, #0xFC0
6 LDR X5, [X6,X3]    ; X6 is an EL0 base address
7 over
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1258: Exposure of Sensitive System Information Due to Uncleared Debug Information
The hardware does not fully clear security-sensitive values, such as keys and intermediate values in cryptographic operations, when debug mode is entered.
Security-sensitive values, such as keys and intermediate steps of cryptographic operations, are stored in temporary registers in the hardware. If these values are not cleared when debug mode is entered, they may be accessed by a debugger, allowing sensitive information to be accessible to untrusted parties.
Example 1 A cryptographic core in a System-On-a-Chip (SoC) is used for cryptographic acceleration and implements several cryptographic operations (e.g., computation of AES encryption and decryption, SHA-256, HMAC, etc.). The keys for these operations or the intermediate values are stored in registers internal to the cryptographic core. These internal registers are in the Memory Mapped Input Output (MMIO) space and are blocked from access by software and other untrusted agents on the SoC. These registers are accessible through the debug and test interface. (bad code)
Example Language: Other
In the above scenario, registers that store keys and intermediate values of cryptographic operations are not cleared when the system enters debug mode. An untrusted actor running a debugger may read the contents of these registers and gain access to secret keys and other sensitive cryptographic information.
(good code)
Example Language: Other
Whenever the chip enters debug mode, all registers containing security-sensitive data are cleared, rendering them unreadable.
Example 2
The following code example is extracted from the AES wrapper module, aes1_wrapper, of the Hack@DAC'21 buggy OpenPiton System-on-Chip (SoC). Within this wrapper module are four memory-mapped registers: core_key, core_key0, core_key1, and core_key2. The core_key0, core_key1, and core_key2 registers hold encryption/decryption keys, and the core_key register selects a key and sends it to the underlying AES module to execute encryption/decryption operations.
Debug mode in processors and SoCs facilitates design debugging by granting access to internal signal/register values, including physical pin values of peripherals/core, fabric bus data transactions, and inter-peripheral registers. Debug mode allows users to gather detailed, low-level information about the design to diagnose potential issues. While debug mode is beneficial for diagnosing processors or SoCs, it also introduces a new attack surface. For instance, an attacker who gains access to debug mode could potentially read any content transmitted through the fabric bus or access encryption/decryption keys stored in cryptographic peripherals. Therefore, it is crucial to clear the contents of secret registers upon entering debug mode.
In the flawed code below, when debug_mode_i is activated, the register core_key0 is set to zero to prevent AES key leakage during debugging. However, this protective measure is not applied to the core_key1 register [REF-1435], leaving its contents uncleared during debug mode. This oversight enables a debugger to access sensitive information. Failing to clear sensitive data during debug mode may lead to unauthorized access to secret keys and compromise system security. (bad code)
Example Language: Verilog
module aes1_wrapper #(
  ...
  assign core_key0 = debug_mode_i ? 'b0 : {
    key_reg0[7],
    key_reg0[6],
    key_reg0[5],
    key_reg0[4],
    key_reg0[3],
    key_reg0[2],
    key_reg0[1],
    key_reg0[0]};
  assign core_key1 = {
    key_reg1[7],
    key_reg1[6],
    key_reg1[5],
    key_reg1[4],
    key_reg1[3],
    key_reg1[2],
    key_reg1[1],
    key_reg1[0]};
  ...
endmodule
To address the issue, it is essential to ensure that the register is cleared and zeroized after activating debug mode on the SoC. In the correct implementation illustrated in the good code below, the core_keyx registers are set to zero when debug mode is activated [REF-1436]. (good code)
Example Language: Verilog
module aes1_wrapper #(
  ...
  assign core_key0 = debug_mode_i ? 'b0 : {
    key_reg0[7],
    key_reg0[6],
    key_reg0[5],
    key_reg0[4],
    key_reg0[3],
    key_reg0[2],
    key_reg0[1],
    key_reg0[0]};
  assign core_key1 = debug_mode_i ? 'b0 : {
    key_reg1[7],
    key_reg1[6],
    key_reg1[5],
    key_reg1[4],
    key_reg1[3],
    key_reg1[2],
    key_reg1[1],
    key_reg1[0]};
  ...
endmodule
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1316: Fabric-Address Map Allows Programming of Unwarranted Overlaps of Protected and Unprotected Ranges
The address map of the on-chip fabric has protected and unprotected regions overlapping, allowing an attacker to bypass access control to the overlapping portion of the protected region.
Various ranges can be defined in the system address map, either in memory or in Memory-Mapped-IO (MMIO) space. These ranges are usually defined using special range registers that contain information such as base address and size. Address decoding is the process of determining the range for which an incoming transaction is destined. To ensure isolation, ranges containing secret data are access-control protected.

Occasionally, these ranges could overlap. The overlap could either be intentional (e.g., due to a limited number of range registers or a limited choice of range sizes) or unintentional (e.g., introduced by errors). Some hardware designs allow dynamic remapping of address ranges assigned to peripheral MMIO ranges. In such designs, intentional address overlaps can be created through misconfiguration by malicious software. When protected and unprotected ranges overlap, an attacker could send a transaction and potentially compromise the protections in place, violating the principle of least privilege.
Example 1 An on-chip fabric supports a 64KB address space that is memory-mapped. The fabric has two range registers that support creation of two protected ranges with specific size constraints: 4KB, 8KB, 16KB, or 32KB. Assets that belong to user A require 4KB, and those of user B require 20KB. Registers and other assets that are not security-sensitive require 40KB. One range register is configured to program 4KB to protect user A's assets. Since a 20KB range cannot be created with the given size constraints, the range register for user B's assets is configured as 32KB. The rest of the address space is left open. As a result, part of the untrusted, open address space overlaps with user B's range. (bad code) The fabric does not support least privilege, and an attacker can send a transaction to the overlapping region to tamper with user B's data. (good code) Since range B only requires 20KB but is allotted 32KB, there is 12KB of reserved space. Overlapping this reserved region of user B's range, where there are no assets, with the untrusted space will prevent an attacker from tampering with user B's data. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
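As an illustrative aside (not part of the original CWE entry), the minimal Verilog sketch below shows one way a fabric could reject programming of a range register whose new window would overlap an existing unprotected window; all module and signal names are hypothetical. (informative sketch)
Example Language: Verilog
module range_overlap_check #(parameter AW = 16) (
  input  wire [AW-1:0] new_base,  new_limit,   // range being programmed
  input  wire [AW-1:0] open_base, open_limit,  // existing unprotected range
  output wire          overlap_err             // assert to reject the write
);
  // Two ranges [base, limit] overlap unless one ends before the other begins.
  assign overlap_err = ~((new_limit < open_base) | (open_limit < new_base));
endmodule
Programming logic could check overlap_err against every already-configured range before committing a new range-register value.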
CWE-1209: Failure to Disable Reserved Bits
The reserved bits in a hardware design are not disabled prior to production. Typically, reserved bits are used for future capabilities and should not support any functional logic in the design. However, designers might covertly use these bits to debug or further develop new capabilities in production hardware. Adversaries with access to these bits will write to them in hopes of compromising hardware state.
Reserved bits are labeled as such so they can be allocated for a later purpose. They are not intended to do anything in the current design. However, designers might want to use these bits to debug or control/configure a future capability to help minimize time to market (TTM). If the logic being controlled by these bits is still enabled in production, an adversary could use the logic to induce unwanted/unsupported behavior in the hardware.
Example 1 Assume a hardware Intellectual Property (IP) block has address space 0x0-0x0F for its configuration registers, with the last one labeled reserved (i.e., 0x0F). Inside the Finite State Machine (FSM), the code is as follows: (bad code)
Example Language: Verilog
reg gpio_out = 0; // gpio should remain low for normal operation
case (register_address)
  4'b1111 : //0x0F
  begin
    gpio_out = 1;
  end
endcase
An adversary may perform writes to the reserved address space in hopes of changing the behavior of the hardware. In the code above, the GPIO pin should remain low for normal operation. However, it can be asserted by accessing the reserved address space (0x0F). This may be a concern if the GPIO state is being used as an indicator of health (e.g., if asserted, the hardware may respond by shutting down or resetting the system, which may not be the correct action for the system to perform).

In the code below, the condition "register_address = 0x0F" is commented out, and a default is provided that will catch any values of register_address not explicitly accounted for and take no action with regards to gpio_out. This means that an attacker who is able to write 0x0F to register_address will not enable any undocumented "features" in the process. (good code)
Example Language: Verilog
reg gpio_out = 0; // gpio should remain low for normal operation
case (register_address)
  //4'b1111 : //0x0F
  default : gpio_out = gpio_out;
endcase
CWE-1277: Firmware Not Updateable
The product does not provide its users with the ability to update or patch its firmware to address any vulnerabilities or weaknesses that may be present.

Without the ability to patch or update firmware, consumers will be left vulnerable to exploitation of any known vulnerabilities, or any vulnerabilities that are discovered in the future. This can expose consumers to permanent risk throughout the entire lifetime of the device, which could be years or decades. Some external protective measures and mitigations might be employed to aid in preventing or reducing the risk of malicious attack, but the root weakness cannot be corrected.
Example 1 A refrigerator has an Internet interface for the official purpose of alerting the manufacturer when the refrigerator detects a fault. Because the device is attached to the Internet, the refrigerator is a target for hackers who may wish to use the device for other, potentially more nefarious, purposes. (bad code)
Example Language: Other
The refrigerator has no means of patching; once hacked, it becomes a persistent source of email spam.
(good code)
Example Language: Other
The device automatically patches itself, providing considerably more protection against being hacked.
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Terminology
The "firmware" term does not have a single commonly-shared definition, so there may be variations in how this CWE entry is interpreted during mapping.
CWE CATEGORY: General Circuit and Logic Design Concerns
Weaknesses in this category are related to hardware-circuit design and logic (e.g., CMOS transistors, finite state machines, and registers) as well as issues related to hardware description languages such as System Verilog and VHDL.
CWE-1270: Generation of Incorrect Security Tokens
The product implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens generated in the system are incorrect.
Systems-on-a-Chip (SoCs) (integrated circuits and hardware engines) implement Security Tokens to differentiate and identify actions originated from various agents. These actions could be "read", "write", "program", "reset", "fetch", "compute", etc. Security Tokens are generated and assigned to every agent on the SoC that is either capable of generating an action or receiving an action from another agent. Every agent could be assigned a unique Security Token based on its trust level or privileges. Incorrectly generated Security Tokens could result in the same token being used for multiple agents or multiple tokens being used for the same agent. This condition could result in a Denial-of-Service (DoS) or the execution of an action that could in turn result in privilege escalation or unintended access.
Example 1 Consider a system with a register for storing an AES key for encryption or decryption. The key is 128 bits long, implemented as a set of four 32-bit registers. The key registers are assets, and a register, AES_KEY_ACCESS_POLICY, is defined to provide the necessary access controls. The access-policy register defines which agents, identified by Security Token, may access the AES-key registers. Each bit in this 32-bit register is used to define a Security Token, so a maximum of 32 Security Tokens can be allowed access to the AES-key registers. When a bit is set ("1"), it allows actions from the agent whose identity matches that bit number; when a bit is clear ("0"), the action is disallowed for the corresponding agent. Assume the system has two agents, a Main-controller and an Aux-controller, whose Security Tokens are "1" and "2" respectively.
An agent with Security Token "1" has access to the AES_ENC_DEC_KEY_0 through AES_ENC_DEC_KEY_3 registers; i.e., the AES-key access policy allows access to the AES-key registers if the Security Token is "1". (bad code)
Example Language: Other
The SoC incorrectly generates Security Token "1" for every agent. In other words, both Main-controller and Aux-controller are assigned Security Token "1".
Both agents have access to the AES-key registers. (good code)
Example Language: Other
The SoC should correctly generate Security Tokens, assigning "1" to the Main-controller and "2" to the Aux-controller.
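As an illustrative aside (not part of the original CWE entry), the sketch below shows how such a one-hot access-policy register might be consulted; all names are hypothetical. Note that this weakness concerns token generation, so even a correct checker like this is defeated when two agents are issued the same token. (informative sketch)
Example Language: Verilog
// Hypothetical checker: bit N of the policy register grants access to the
// agent presenting Security Token N.
module key_access_check (
  input  wire [31:0] aes_key_access_policy, // one grant bit per token
  input  wire [4:0]  agent_token,           // token presented with the request
  output wire        access_allowed
);
  assign access_allowed = aes_key_access_policy[agent_token];
endmodule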
CWE-1313: Hardware Allows Activation of Test or Debug Logic at Runtime
During runtime, the hardware allows test or debug logic (features) to be activated, which allows for changing the state of the hardware. Such a feature can alter the intended behavior of the system and allow for alteration and leakage of sensitive data by an adversary.
An adversary can take advantage of test or debug logic that is made accessible through the hardware during normal operation to modify the intended behavior of the system. For example, an accessible test/debug mode may allow read/write access to any system data. Using error injection (a common test/debug feature) during a transmit/receive operation on a bus, data may be modified to produce an unintended message. Similarly, confidentiality could be compromised by such features allowing access to secrets.
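As an illustrative aside (not part of the original CWE entry), the sketch below shows one common structural defense: test/debug features are gated behind a lifecycle fuse and an explicit authentication step, so they cannot simply be switched on at runtime. All names are hypothetical. (informative sketch)
Example Language: Verilog
module debug_gate (
  input  wire lifecycle_debug_fuse, // blown only on non-production parts
  input  wire debug_auth_passed,    // e.g., result of a challenge-response unlock
  input  wire error_inject_req,
  input  wire dbg_read_req,
  output wire error_inject_en,
  output wire dbg_read_en
);
  // Debug features are usable only when both conditions hold.
  wire debug_allowed = lifecycle_debug_fuse & debug_auth_passed;
  assign error_inject_en = error_inject_req & debug_allowed;
  assign dbg_read_en     = dbg_read_req     & debug_allowed;
endmodule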
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1276: Hardware Child Block Incorrectly Connected to Parent System
Signals between a hardware IP and the parent system design are incorrectly connected, causing security risks.
Individual hardware IP blocks must communicate with the parent system in order for the product to function correctly and as intended. Connections that are implemented incorrectly may not cause any apparent functional issues but may still introduce security issues. For example, if the IP should only be reset by a system-wide hard reset, but the reset input is instead connected to a software-triggered debug mode reset (which is also asserted during a hard reset), the integrity of data inside the IP can be violated.
Example 1 Many SoCs use hardware to partition system resources between trusted and untrusted entities. One example of this concept is the Arm TrustZone, in which the processor and all security-aware IP attempt to isolate resources based on the status of a privilege bit. This privilege bit is part of the input interface in all TrustZone-aware IP. If this privilege bit is accidentally grounded or left unconnected when the IP is instantiated, privilege escalation of all input data may occur. (bad code)
Example Language: Verilog
// IP definition
module tz_peripheral(clk, reset, data_in, data_in_security_level, ...);
  input clk, reset;
  input [31:0] data_in;
  input data_in_security_level;
  ...
endmodule
// Instantiation of IP in a parent system
module soc(...)
  ...
  tz_peripheral u_tz_peripheral(
    .clk(clk),
    .rst(rst),
    .data_in(rdata),
    // Copy-and-paste error or typo grounds data_in_security_level
    // (in this example 0=secure, 1=non-secure), effectively promoting all data to "secure"
    .data_in_security_level(1'b0),
    ...
  );
  ...
endmodule
In the Verilog code below, the security level input to the TrustZone-aware peripheral is correctly driven by an appropriate signal instead of being grounded. (good code)
Example Language: Verilog
// Instantiation of IP in a parent system
module soc(...)
  ...
  tz_peripheral u_tz_peripheral(
    .clk(clk),
    .rst(rst),
    .data_in(rdata),
    // This port is no longer grounded, but instead driven by the appropriate signal
    .data_in_security_level(rdata_security_level),
    ...
  );
  ...
endmodule
Example 2 Here is a code snippet from the Ariane core module in the HACK@DAC'21 OpenPiton SoC [REF-1362]. To ensure full functional correctness, developers connect the ports by name. However, in some cases developers forget to connect some of these ports to the desired signals in the parent module. These mistakes can lead to incorrect functional behavior or, in some cases, introduce security vulnerabilities. (bad code)
Example Language: Verilog
...
csr_regfile #(
  ...
) csr_regfile_i (
  .flush_o    ( flush_csr_ctrl ),
  .halt_csr_o ( halt_csr_ctrl ),
  ...
  .irq_i      (),
  .time_irq_i (),
  .*
);
...
In the above example from HACK@DAC'21, since the interrupt signals are not properly connected, the CSR module will fail to send notifications in the event of interrupts. Consequently, critical information in CSR registers that should be flushed or modified in response to an interrupt won't be updated. These vulnerabilities can potentially result in information leakage across various privilege levels. To address the aforementioned vulnerability, developers must follow a two-step approach. First, they should ensure that all module signals are properly connected. This can often be facilitated using automated tools; many simulators and sanitizer tools issue warnings when a signal remains unconnected or floats. Second, it is imperative to validate that the signals connected to a module align with the specification. In the provided example, the developer should establish the correct connection of interrupt signals from the parent module (Ariane core) to the child module (csr_regfile) [REF-1363]. (good code)
Example Language: Verilog
...
csr_regfile #(
  ...
) csr_regfile_i (
  .flush_o    ( flush_csr_ctrl ),
  .halt_csr_o ( halt_csr_ctrl ),
  ...
  .irq_i      (irq_i),
  .time_irq_i (time_irq_i),
  .*
);
...
CWE-1234: Hardware Internal or Debug Modes Allow Override of Locks
Device configuration controls are commonly programmed after a device power reset by a trusted firmware or software module (e.g., BIOS/bootloader) and then locked from any further modification. This is commonly implemented using a trusted lock bit which, when set, disables writes to a protected set of registers or address regions. The lock protection is intended to prevent modification of certain system configuration (e.g., memory/memory protection unit configuration). If the hardware design supports debug features or internal modes/system states that can override the lock protection, then protected configuration information may be accessed and modified.
Example 1
Consider the Locked_register_example module below. This register module supports a lock mode that blocks any writes after Lock is set to 1.
(bad code)
Example Language: Verilog
module Locked_register_example
(
  input [15:0] Data_in,
  input Clk,
  input resetn,
  input write,
  input Lock,
  input scan_mode,
  input debug_unlocked,
  output reg [15:0] Data_out
);

reg lock_status;

always @(posedge Clk or negedge resetn)
  if (~resetn) // Register is reset resetn
  begin
    lock_status <= 1'b0;
  end
  else if (Lock)
  begin
    lock_status <= 1'b1;
  end
  else if (~Lock)
  begin
    lock_status <= lock_status;
  end

always @(posedge Clk or negedge resetn)
  if (~resetn) // Register is reset resetn
  begin
    Data_out <= 16'h0000;
  end
  // Register protected by Lock bit input; overrides supported for scan_mode & debug_unlocked
  else if (write & (~lock_status | scan_mode | debug_unlocked))
  begin
    Data_out <= Data_in;
  end
  else if (~write)
  begin
    Data_out <= Data_out;
  end
endmodule
If either the scan_mode or the debug_unlocked modes can be triggered by software, then the lock protection may be bypassed. (good code)
Either remove the debug and scan mode overrides or protect enabling of these modes so that only trusted and authorized users may enable these modes.
Example 2 The following example code [REF-1375] is taken from the register lock security peripheral of the HACK@DAC'21 buggy OpenPiton SoC. It demonstrates how to lock read or write access to security-critical hardware registers (e.g., crypto keys, system integrity code, etc.). The configuration to lock all the sensitive registers in the SoC is managed through the reglk_mem registers. These reglk_mem registers are reset when the hardware powers up and configured during boot up. Malicious users, even with kernel-level software privilege, do not get access to the sensitive contents that are locked down. Hence, the security of the entire system can potentially be compromised if the register lock configurations are corrupted or if the register locks are disabled. (bad code)
Example Language: Verilog
...
always @(posedge clk_i)
begin
  ...
  if (~(rst_ni && ~jtag_unlock && ~rst_9))
  begin
    for (j = 0; j < 6; j = j + 1) begin
      reglk_mem[j] <= 'h0;
    end
  end
...
The example code [REF-1375] illustrates an instance of a vulnerable implementation of register locks in the SoC. In this flawed implementation [REF-1375], the reglk_mem registers are also reset when the system enters debug mode (indicated by the jtag_unlock signal). Consequently, users can simply put the processor in debug mode to access sensitive contents that are supposed to be protected by the register lock feature. This can be mitigated by excluding debug mode signals from the reset logic of security-critical register locks, as demonstrated in the following code snippet [REF-1376]. (good code)
Example Language: Verilog
...
always @(posedge clk_i)
begin
  ...
  if (~(rst_ni && ~rst_9))
  begin
    for (j = 0; j < 6; j = j + 1) begin
      reglk_mem[j] <= 'h0;
    end
  end
...
CWE-1298: Hardware Logic Contains Race Conditions
A race condition in the hardware logic results in undermining the security guarantees of the system.
A race condition in logic circuits typically occurs when a logic gate gets inputs from signals that have traversed different paths while originating from the same source. Such inputs to the gate can change at slightly different times in response to a change in the source signal. This results in a timing error or a glitch (temporary or permanent) that causes the output to change to an unwanted state before settling back to the desired state. If such timing errors occur in access control logic or finite state machines that are implemented in security-sensitive flows, an attacker might exploit them to circumvent existing protections.
Example 1 The code below shows a 2x1 multiplexor using logic gates. Though the code shown below results in the minimum gate solution, it is disjoint and causes glitches. (bad code)
Example Language: Verilog
// 2x1 Multiplexor using logic-gates
module glitchEx(
  input  wire in0, in1, sel,
  output wire z
);

wire not_sel;
wire and_out1, and_out2;

assign not_sel  = ~sel;
assign and_out1 = not_sel & in0;
assign and_out2 = sel & in1;

// Buggy line of code:
assign z = and_out1 | and_out2; // glitch in signal z

endmodule
The buggy line of code, commented above, results in signal 'z' periodically changing to an unwanted state. Thus, any logic that references signal 'z' may access it at a time when it is in this unwanted state. This line should be replaced with the line shown below in the good code snippet, which results in signal 'z' remaining in a continuous, known state. A reference for the above code, along with waveforms for simulation, can be found in the references below. (good code)
Example Language: Verilog
assign z = and_out1 | and_out2 | (in0 & in1); // redundant consensus term guards against the glitch
This line of code removes the glitch in signal z.

Example 2 The example code is taken from the DMA (Direct Memory Access) module of the buggy OpenPiton SoC of HACK@DAC'21. The DMA contains a finite-state machine (FSM) for accessing permissions using the physical memory protection (PMP) unit. PMP secures regions of physical memory against unauthorized access. It allows an operating system or a hypervisor to define a series of physical memory regions and then set permissions for those regions, such as read, write, and execute permissions. When a user tries to access a protected memory area (e.g., through DMA), PMP checks the access of a PMP address (e.g., pmpaddr_i) against its configuration (pmpcfg_i). If the access violates the defined permissions (e.g., CTRL_ABORT), the PMP can trigger a fault or an interrupt. This access check is implemented in the parametrized pmp module in the code snippet below. The code assumes that the state of the pmpaddr_i and pmpcfg_i signals will not change during the different DMA states (i.e., CTRL_IDLE to CTRL_DONE) while processing a DMA request (via dma_ctrl_reg). The DMA state machine is implemented using a case statement (not shown in the code snippet). (bad code)
Example Language: Verilog
module dma #(...)(...);
  ...
  input [7:0] [16-1:0] pmpcfg_i;
  input logic [16-1:0][53:0] pmpaddr_i;
  ...
  //// Save the input command
  always @(posedge clk_i or negedge rst_ni)
  begin: save_inputs
    if (!rst_ni)
    begin
      ...
    end
    else
    begin
      if (dma_ctrl_reg == CTRL_IDLE || dma_ctrl_reg == CTRL_DONE)
      begin
        ...
      end
    end
  end // save_inputs
  ...
  // Load/store PMP check
  pmp #(
    .XLEN       ( 64 ),
    .PMP_LEN    ( 54 ),
    .NR_ENTRIES ( 16 )
  ) i_pmp_data (
    .addr_i        ( pmp_addr_reg ),
    .priv_lvl_i    ( riscv::PRIV_LVL_U ),
    .access_type_i ( pmp_access_type_reg ),
    // Configuration
    .conf_addr_i   ( pmpaddr_i ),
    .conf_i        ( pmpcfg_i ),
    .allow_o       ( pmp_data_allow )
  );
  ...
endmodule
However, the above code [REF-1394] allows the values of pmpaddr_i and pmpcfg_i to be changed through the DMA's input ports. This causes a race condition and enables attackers to access sensitive addresses that the configuration is not associated with. Attackers can initiate the DMA access process (CTRL_IDLE) using pmpcfg_i for a non-privileged PMP address (pmpaddr_i). Then, during the loading state (CTRL_LOAD), attackers can replace the non-privileged address in pmpaddr_i with a privileged address without the requisite authorized access configuration.

To fix this issue (see [REF-1395]), the values of the pmpaddr_i and pmpcfg_i signals should be stored in local registers (pmpaddr_reg and pmpcfg_reg) at the start of the DMA access process, and the pmp module should reference those registers instead of the input signals directly. The values of the registers can only be updated at the start (CTRL_IDLE) or the end (CTRL_DONE) of the DMA access process, which prevents attackers from changing the PMP address in the middle of the DMA access process. (good code)
Example Language: Verilog
module dma #(...)(...);
  ...
  input [7:0] [16-1:0] pmpcfg_i;
  input logic [16-1:0][53:0] pmpaddr_i;
  ...
  reg [7:0] [16-1:0] pmpcfg_reg;
  reg [16-1:0][53:0] pmpaddr_reg;
  ...
  //// Save the input command
  always @(posedge clk_i or negedge rst_ni)
  begin: save_inputs
    if (!rst_ni)
    begin
      ...
      pmpaddr_reg <= 'b0;
      pmpcfg_reg  <= 'b0;
    end
    else
    begin
      if (dma_ctrl_reg == CTRL_IDLE || dma_ctrl_reg == CTRL_DONE)
      begin
        ...
        pmpaddr_reg <= pmpaddr_i;
        pmpcfg_reg  <= pmpcfg_i;
      end
    end
  end // save_inputs
  ...
  // Load/store PMP check
  pmp #(
    .XLEN       ( 64 ),
    .PMP_LEN    ( 54 ),
    .NR_ENTRIES ( 16 )
  ) i_pmp_data (
    .addr_i        ( pmp_addr_reg ),
    // we intend to apply the filter on DMA always, so choose the least privilege
    .priv_lvl_i    ( riscv::PRIV_LVL_U ),
    .access_type_i ( pmp_access_type_reg ),
    // Configuration
    .conf_addr_i   ( pmpaddr_reg ),
    .conf_i        ( pmpcfg_reg ),
    .allow_o       ( pmp_data_allow )
  );
  ...
endmodule
CWE-1264: Hardware Logic with Insecure De-Synchronization between Control and Data Channels
The hardware logic for error handling and security checks can incorrectly forward data before the security check is complete.
Many high-performance on-chip bus protocols and processor data-paths employ separate channels for control and data to increase parallelism and maximize throughput. Bugs in the hardware logic that handles errors and security checks can make it possible for data to be forwarded before the completion of the security checks. If the data can propagate to a location in the hardware observable to an attacker, loss of data confidentiality can occur.

'Meltdown' is a concrete example of how de-synchronization between data and permissions-checking logic can violate confidentiality requirements. Data loaded from a page marked as privileged was returned to the CPU regardless of the current privilege level, for performance reasons. The assumption was that the CPU could later remove all traces of this data during its handling of the illegal memory access exception, but this assumption was proven false, as traces of the secret data were not removed from the microarchitectural state.
Example 1 There are several standard on-chip bus protocols used in modern SoCs to allow communication between components. There are a wide variety of commercially available hardware IP implementing the interconnect logic for these protocols. A bus connects components which initiate/request communications such as processors and DMA controllers (bus masters) with peripherals which respond to requests. In a typical system, the privilege level or security designation of the bus master along with the intended functionality of each peripheral determine the security policy specifying which specific bus masters can access specific peripherals. This security policy (commonly referred to as a bus firewall) can be enforced using separate IP/logic from the actual interconnect responsible for the data routing. (bad code)
Example Language: Other
The firewall and data routing logic become de-synchronized due to a hardware logic bug, allowing components that should not be allowed to communicate to share data. For example, consider an SoC with two processors. One is used as a root of trust and can access a cryptographic key storage peripheral. The other processor (the application CPU) may run potentially untrusted code and should not access the key store. If the application CPU can issue a read request to the key store that is not blocked due to de-synchronization of the data routing and the bus firewall, disclosure of cryptographic keys is possible.
(good code)
Example Language: Other
All data is correctly buffered inside the interconnect until the firewall has determined that the endpoint is allowed to receive the data.
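As an illustrative aside (not part of the original CWE entry), the following minimal Verilog sketch shows one way an interconnect could hold a transaction until the firewall verdict arrives, so data is never forwarded ahead of the check; all module and signal names are hypothetical. (informative sketch)
Example Language: Verilog
module fw_sync #(parameter W = 32) (
  input  wire         clk, rst_n,
  input  wire         req_valid,
  input  wire [W-1:0] req_data,
  input  wire         fw_done, fw_allow, // firewall verdict
  output reg          fwd_valid,
  output reg  [W-1:0] fwd_data
);
  reg         pending;
  reg [W-1:0] buf_q;
  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      pending <= 1'b0;
      fwd_valid <= 1'b0;
    end else begin
      fwd_valid <= 1'b0;
      if (req_valid && !pending) begin
        buf_q   <= req_data;  // buffer the transaction; do not forward yet
        pending <= 1'b1;      // wait for the firewall
      end else if (pending && fw_done) begin
        pending <= 1'b0;      // drop the request unless explicitly allowed
        if (fw_allow) begin
          fwd_data  <= buf_q; // forward only after an explicit allow
          fwd_valid <= 1'b1;
        end
      end
    end
  end
endmodule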
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Maintenance
As of CWE 4.9, members of the CWE Hardware SIG are closely analyzing this entry and others to improve CWE's coverage of transient execution weaknesses, which include issues related to Spectre, Meltdown, and other attacks. Additional investigation may include other weaknesses related to microarchitectural state. As a result, this entry might change significantly in CWE 4.10.
CWE-1257: Improper Access Control Applied to Mirrored or Aliased Memory Regions
Aliased or mirrored memory regions in hardware designs may have inconsistent read/write permissions enforced by the hardware. A possible result is that an untrusted agent is blocked from accessing a memory region but is not blocked from accessing the corresponding aliased memory region.
Hardware product designs often need to implement memory protection features that enable privileged software to define isolated memory regions and access control (read/write) policies. Isolated memory regions can be defined on different memory spaces in a design (e.g., system physical address, virtual address, memory-mapped IO). Each memory cell should be mapped and assigned a system address that the core software can use to read/write to that memory.

It is possible to map the same memory cell to multiple system addresses such that a read/write to any of the aliased system addresses is decoded to the same memory cell. This is commonly done in hardware designs for redundancy and to simplify address decoding logic. If one of the memory regions is corrupted or faulty, the hardware can switch to using the data in the mirrored memory region. Memory aliases can also be created in the system address map if the address decoder unit ignores higher-order address bits when mapping a smaller address region into the full system address.

A common security weakness in such memory mapping is that the aliased memory regions have different read/write access protections enforced by the hardware, such that an untrusted agent is blocked from accessing a memory address but is not blocked from accessing the corresponding aliased memory address. Such inconsistency can then be used to bypass the access protection of the primary memory block and read or modify the protected memory. An untrusted agent could also possibly create memory aliases in the system address map for malicious purposes if it is able to change the mapping of an address region or modify memory region sizes.
Example 1
In a System-on-a-Chip (SoC) design, the system fabric uses 16-bit addresses. An IP unit (Unit_A) has 4 kilobytes of internal memory, which is mapped into a 16-kilobyte address range in the system fabric address map.
To protect the register controls in Unit_A, unprivileged software is blocked from accessing addresses between 0x0000 - 0x0FFF. The address decoder of Unit_A masks off the higher-order address bits and decodes only the lower 12 bits for computing the offset into the 4-kilobyte internal memory space. (bad code)
Example Language: Other
In this design, the aliased memory address ranges are:
0x0000 - 0x0FFF
0x1000 - 0x1FFF
0x2000 - 0x2FFF
0x3000 - 0x3FFF
The same register can be accessed using four different addresses: 0x0000, 0x1000, 0x2000, 0x3000. The system address filter only blocks access to the range 0x0000 - 0x0FFF and does not block access to the aliased addresses in the 0x1000 - 0x3FFF range. Thus, untrusted software can leverage the aliased memory addresses to bypass the memory protection. (good code)
Example Language: Other
In this design, the aliased memory addresses (0x1000 - 0x3FFF) could be blocked from all system software access, since they are not used by software. Alternatively, the MPU logic can be changed to apply the memory protection policies to the full address range mapped to Unit_A (0x0000 - 0x3FFF).
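As an illustrative aside (not part of the original CWE entry), the minimal Verilog sketch below applies the second remedy: the access policy covers the full 16 KB window mapped to Unit_A, so every alias of a protected register is filtered. All names are hypothetical. (informative sketch)
Example Language: Verilog
module unit_a_filter (
  input  wire [15:0] addr,
  input  wire        write,
  input  wire        privileged,
  output wire [11:0] offset,  // decoder keeps only the low 12 bits
  output wire        wr_en
);
  assign offset = addr[11:0];
  // Apply the access policy to the FULL window mapped to Unit_A
  // (0x0000-0x3FFF), not just 0x0000-0x0FFF, so aliases are covered.
  wire in_unit_a = (addr[15:14] == 2'b00);
  assign wr_en = in_unit_a & write & privileged;
endmodule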
CWE-1262: Improper Access Control for Register Interface
The product uses memory-mapped I/O registers that act as an interface to hardware functionality from software, but there is improper access control to those registers.
Software commonly accesses peripherals in a System-on-Chip (SoC) or other device through a memory-mapped register interface. Malicious software could tamper with any security-critical hardware data that is accessible directly or indirectly through the register interface, which could lead to a loss of confidentiality and integrity.
Example 1 The register interface provides software access to hardware functionality. This functionality is an attack surface. This attack surface may be used to run untrusted code on the system through the register interface. As an example, cryptographic accelerators require a mechanism for software to select modes of operation and to provide plaintext or ciphertext data to be encrypted or decrypted as well as other functions. This functionality is commonly provided through registers. (bad code)
Cryptographic key material stored in registers inside the cryptographic accelerator can be accessed by software.
(good code)
Key material stored in registers should never be accessible to software. Even if software can provide a key, all read-back paths to software should be disabled.
Example 2 The example code is taken from the Control/Status Register (CSR) module inside the processor core of the HACK@DAC'19 buggy CVA6 SoC [REF-1340]. In the RISC-V ISA [REF-1341], the CSR file contains different sets of registers with different privilege levels, e.g., user mode (U), supervisor mode (S), hypervisor mode (H), machine mode (M), and debug mode (D), and with different read-write policies: read-only (RO) and read-write (RW). For example, registers of machine mode, which is the highest privilege mode in a RISC-V system, should not be accessible in user, supervisor, or hypervisor modes. (bad code)
Example Language: Verilog
if (csr_we || csr_read) begin
  if ((riscv::priv_lvl_t'(priv_lvl_o & csr_addr.csr_decode.priv_lvl) != csr_addr.csr_decode.priv_lvl)
      && !(csr_addr.address == riscv::CSR_MEPC)) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
  // check access to debug mode only CSRs
  if (csr_addr_i[11:4] == 8'h7b && !debug_mode_q) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
end
The vulnerable example code allows the machine exception program counter (MEPC) register to be accessed from a user mode program by excluding the MEPC from the access control check. Per the RISC-V specification, MEPC can be written or read only by machine mode code. Thus, an attacker in user mode can run code with machine mode privilege (privilege escalation). To mitigate the issue, fix the privilege check so that it throws an Illegal Instruction Exception for user mode accesses to the MEPC register. [REF-1345] (good code)
Example Language: Verilog
if (csr_we || csr_read) begin
  if ((riscv::priv_lvl_t'(priv_lvl_o & csr_addr.csr_decode.priv_lvl) != csr_addr.csr_decode.priv_lvl)) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
  // check access to debug mode only CSRs
  if (csr_addr_i[11:4] == 8'h7b && !debug_mode_q) begin
    csr_exception_o.cause = riscv::ILLEGAL_INSTR;
    csr_exception_o.valid = 1'b1;
  end
end
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1274: Improper Access Control for Volatile Memory Containing Boot Code
The product conducts a secure-boot process that transfers bootloader code from Non-Volatile Memory (NVM) into Volatile Memory (VM), but it does not have sufficient access control or other protections for the Volatile Memory.
Adversaries could bypass the secure-boot process and execute their own untrusted, malicious boot code.

As a part of a secure-boot process, the read-only-memory (ROM) code for a System-on-Chip (SoC) or other system fetches bootloader code from Non-Volatile Memory (NVM) and stores the code in Volatile Memory (VM), such as dynamic random-access memory (DRAM) or static random-access memory (SRAM). The NVM is usually external to the SoC, while the VM is internal to the SoC. As the code is transferred from NVM to VM, it is authenticated by the SoC's ROM code. If the volatile-memory-region protections or access controls are insufficient to prevent modifications from an adversary or untrusted agent, the secure boot may be bypassed or replaced with the execution of an adversary's code.
Example 1 A typical SoC secure-boot flow includes fetching the next piece of code (i.e., the bootloader) from NVM (e.g., serial peripheral interface (SPI) flash) and transferring it to DRAM/SRAM volatile internal memory, which is more efficient. (bad code)
Example Language: Other
The volatile-memory protections or access controls are insufficient.
The memory from where the boot loader executes can be modified by an adversary. (good code)
Example Language: Other
A good architecture should define appropriate protections or access controls to prevent modification by an adversary or untrusted agent, once the bootloader is authenticated.
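As an illustrative aside (not part of the original CWE entry), a minimal Verilog sketch of one such protection: a sticky lock bit, set by ROM code once the bootloader image has been authenticated, that blocks further writes to the boot region until the next reset. All names are hypothetical. (informative sketch)
Example Language: Verilog
module boot_ram_lock (
  input  wire clk, rst_n,
  input  wire auth_done,              // ROM asserts after signature verification
  input  wire wr_req,
  input  wire wr_addr_in_boot_region,
  output wire wr_en
);
  reg locked;
  // Sticky lock: once authentication completes, it stays set until reset.
  always @(posedge clk or negedge rst_n)
    if (!rst_n)         locked <= 1'b0;
    else if (auth_done) locked <= 1'b1;
  // Drop writes that target the boot region after the image is locked.
  assign wr_en = wr_req & ~(locked & wr_addr_in_boot_region);
endmodule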
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1317: Improper Access Control in Fabric Bridge
The product uses a fabric bridge for transactions between two Intellectual Property (IP) blocks, but the bridge does not properly perform the expected privilege, identity, or other access control checks between those IP blocks.
In hardware designs, different IP blocks are connected through interconnect-bus fabrics (e.g., AHB and OCP). Within a System-on-Chip (SoC), the IP block subsystems could be using different bus protocols. In such a case, the IP blocks are linked to the central bus (and to other IP blocks) through a fabric bridge. Bridges are used as bus-interconnect-routing modules that link different protocols or separate segments of the overall SoC interconnect.

For overall system security, it is important that the access-control privileges associated with any fabric transaction are consistently maintained and applied, even when they are routed or translated by a fabric bridge. A bridge that is connected to a fabric without security features forwards transactions to the slave without checking the privilege level of the master, resulting in a weakness in SoC access-control security. The same weakness occurs if a bridge does not check the hardware identity of the transaction received from the slave interface of the bridge.
Example 1 This example is from CVE-2019-6260 [REF-1138]. The iLPC2AHB bridge connects a CPU (with multiple privilege levels, such as user, super user, debug, etc.) over an AHB interface to an LPC bus. Several peripherals are connected to the LPC bus. The bridge is expected to check the privilege level of the transactions initiated in the core before forwarding them to the peripherals on the LPC bus. The bridge does not implement the checks and allows reads and writes from all privilege levels. To address this, designers should implement hardware-based checks that are either hardcoded to block untrusted agents from accessing secure peripherals or implement firmware flows that configure the bridge to block untrusted agents from making arbitrary reads or writes.

Example 2 The example code below is taken from the AES and core local interrupt (CLINT) peripherals of the HACK@DAC'21 buggy OpenPiton SoC. Access to all the peripherals for a given privilege level of the processor is controlled by an access control module in the SoC. This ensures that malicious users with insufficient privileges do not get access to sensitive data, such as the AES keys used by the operating system to encrypt and decrypt information. The security of the entire system would be compromised if these access controls were incorrectly enforced. The access controls are enforced through the interconnect-bus fabrics, where access requests with insufficient access control permissions are rejected. (bad code)
Example Language: Verilog
...
module aes0_wrapper #(...)(...);
  ...
  input logic acct_ctrl_i;
  ...
  axi_lite_interface #(...
  ) axi_lite_interface_i (
    ...
    .en_o ( en_acct ),
    ...
  );
  assign en = en_acct && acct_ctrl_i;
  ...
endmodule
...
module clint #(...)(...);
  ...
  axi_lite_interface #(...
  ) axi_lite_interface_i (
    ...
    .en_o ( en ),
    ...
  );
  ...
endmodule
The previous code snippet [REF-1382] illustrates an instance of a vulnerable implementation of access control for the CLINT peripheral (see module clint). It also shows a correct implementation of access control for the AES peripheral (see module aes0_wrapper) [REF-1381]. An enable signal (en_o) from the fabric's AXI interface (present in both modules) is used to determine if an access request is made to the peripheral. In the case of the AES peripheral, this en_o signal is first received in a temporary signal en_acct. Then, the access request is enabled (by asserting the en signal) only if the request has sufficient access permissions (i.e., the acct_ctrl_i signal is enabled). However, in the case of the CLINT peripheral, the enable signal en_o from the AXI interface is directly used to enable accesses. As a result, users with insufficient access permissions also get full access to the CLINT peripheral. To fix this, enable access requests to CLINT [REF-1383] only if the user has sufficient access, as indicated by the acct_ctrl_i signal ANDed (&&) with en_acct. (good code)
Example Language: Verilog
module clint #(...
) (
  ...
  input logic acct_ctrl_i,
  ...
);
  logic en, en_acct;
  ...
  axi_lite_interface #(...
  ) axi_lite_interface_i (
    ...
    .en_o ( en_acct ),
    ...
  );
  ...
  assign en = en_acct && acct_ctrl_i;
endmodule
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1245: Improper Finite State Machines (FSMs) in Hardware Logic
Faulty finite state machines (FSMs) in the hardware logic allow an attacker to put the system in an undefined state, to cause a denial of service (DoS) or gain privileges on the victim's system.
The functionality and security of the system heavily depend on the implementation of FSMs. FSMs can be used to indicate the current security state of the system, and many secure data operations and data transfers rely on the state reported by the FSM. Faulty FSM designs that do not account for all states, either through undefined states (left as don't-cares) or through incorrect implementation, might allow an attacker to drive the system into an unstable state from which the system cannot recover without a reset, thus causing a DoS. Depending on what the FSM is used for, an attacker might also gain additional privileges to launch further attacks and compromise the security guarantees.
Example 1 The Finite State Machine (FSM) shown in the "bad" code snippet below assigns the output ("out") based on the value of state, which is determined from the user-provided input ("user_input"). (bad code)
Example Language: Verilog
module fsm_1(out, user_input, clk, rst_n);
  input [2:0] user_input;
  input clk, rst_n;
  output reg [2:0] out;
  reg [1:0] state;

  always @ (posedge clk or negedge rst_n)
  begin
    if (!rst_n)
      state = 2'h0;
    else
      case (user_input)
        3'h0,
        3'h1,
        3'h2,
        3'h3 : state = 2'h3;
        3'h4 : state = 2'h2;
        3'h5 : state = 2'h1;
      endcase
    out <= {1'h1, state};
  end
endmodule
The case statement does not include a default to handle the scenario when the user provides inputs of 3'h6 or 3'h7. Those inputs push the system into an undefined state and might cause a crash (denial of service) or other unanticipated outcomes. Adding a default statement to handle undefined inputs mitigates this issue, as shown in the "good" code snippet below. (good code)
Example Language: Verilog
case (user_input)
  3'h0,
  3'h1,
  3'h2,
  3'h3 : state = 2'h3;
  3'h4 : state = 2'h2;
  3'h5 : state = 2'h1;
  default : state = 2'h0;
endcase
CWE-1332: Improper Handling of Faults that Lead to Instruction Skips
The device is missing or incorrectly implements circuitry or sensors that detect and mitigate the skipping of security-critical CPU instructions when they occur.
The operating conditions of hardware may change in ways that cause unexpected behavior to occur, including the skipping of security-critical CPU instructions. Generally, this can occur due to electrical disturbances or when the device operates outside of its expected conditions.

In practice, application code may contain conditional branches that are security-sensitive (e.g., accepting or rejecting a user-provided password). These conditional branches are typically implemented by a single conditional branch instruction in the program binary which, if skipped, may effectively flip the branch condition, i.e., cause the wrong security-sensitive branch to be taken. This affects processes such as firmware authentication, password verification, and other security-sensitive decision points. Attackers can use fault injection techniques to alter the operating conditions of hardware so that security-critical instructions are skipped more frequently or more reliably than they would be in a "natural" setting.
Example 1 A smart card contains authentication credentials that are used as authorization to enter a building. The credentials are only accessible when a correct PIN is presented to the card. (bad code)
Example Language: Other
The card emits the credentials when a voltage anomaly is injected into the power line to the device at a particular time after providing an incorrect PIN to the card, causing the internal program to accept the incorrect PIN.
There are several ways this weakness could be fixed. (good code)
Example Language: Other
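As an illustrative aside (not part of the original CWE entry), one frequently used hardware-level countermeasure is redundant evaluation of the security-critical comparison, so that a single skipped or faulted evaluation can neither grant access nor go unnoticed. A minimal Verilog sketch with hypothetical names follows; in practice, tool-specific keep/dont_touch attributes would be needed to stop synthesis from merging the duplicate logic. (informative sketch)
Example Language: Verilog
module pin_check #(parameter W = 16) (
  input  wire [W-1:0] pin_in,
  input  wire [W-1:0] pin_ref,
  output wire         unlock,
  output wire         fault_alarm
);
  // Evaluate the comparison twice in complementary forms; the copies
  // must always agree, so a single fault raises an alarm instead of
  // silently granting access.
  wire match_a =  (pin_in == pin_ref);
  wire match_b = ~(pin_in != pin_ref);
  assign fault_alarm = match_a ^ match_b;
  assign unlock      = match_a & match_b & ~fault_alarm;
endmodule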
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1351: Improper Handling of Hardware Behavior in Exceptionally Cold Environments
A hardware device, or the firmware running on it, is missing or has incorrect protection features to maintain the goals of security primitives when the device is cooled below standard operating temperatures.
The hardware designer may improperly anticipate hardware behavior when exposed to exceptionally cold conditions. As a result, they may introduce a weakness by not accounting for the modified behavior of critical components in extreme environments. An example of a change in behavior is that power loss won't clear/reset any volatile state when the device is cooled below standard operating temperatures. This may result in a weakness when the starting state of the volatile memory is relied upon for a security decision. For example, a Physical Unclonable Function (PUF) may be supplied as a security primitive to improve confidentiality, authenticity, and integrity guarantees. However, when the PUF is paired with DRAM, SRAM, or another temperature-sensitive entropy source, the system designer may introduce a weakness by failing to account for the chosen entropy source's behavior at exceptionally low temperatures. In the case of DRAM and SRAM, when power is cycled at low temperatures, the device will not contain the bitwise biasing caused by inconsistencies in manufacturing and will instead contain the data from the previous boot. Should the PUF primitive be used in a cryptographic construction that does not account for full adversary control of PUF seed data, a weakness will arise. This weakness does not cover "Cold Boot Attacks," wherein RAM or other external storage is super-cooled and read externally by an attacker.
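One way a designer might account for this behavior is to gate PUF seeding on a die-temperature reading, so a super-cooled power-up state is never trusted as entropy. The C sketch below is a minimal illustration under that assumption; the platform hooks (read_die_temperature_mC, read_sram_puf_response) and the threshold are hypothetical, not part of any particular product.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical platform hooks -- names are illustrative only. */
extern int32_t read_die_temperature_mC(void);          /* millidegrees C */
extern void    read_sram_puf_response(uint8_t *out, uint32_t len);

#define MIN_PUF_TEMP_mC 0   /* assumed lower bound for trustworthy PUF behavior */

/* Refuse to derive key material from the SRAM PUF when the die is below
 * the temperature at which its power-up state is trustworthy, since
 * super-cooled SRAM can retain the previous boot's contents. */
bool derive_puf_seed(uint8_t *seed, uint32_t len)
{
    if (read_die_temperature_mC() < MIN_PUF_TEMP_mC) {
        return false;  /* caller falls back to another entropy source */
    }
    read_sram_puf_response(seed, len);
    return true;
}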
CWE-1260: Improper Handling of Overlap Between Protected Memory Ranges
The product allows address regions to overlap, which can result in the bypassing of intended memory protection.
Isolated memory regions and access control (read/write) policies are used by hardware to protect privileged software. Software components are often allowed to change or remap memory region definitions in order to enable flexible and dynamically changeable memory management by system software. If a software component running at lower privilege can program a memory address region to overlap with other memory regions used by software running at higher privilege, privilege escalation may be available to attackers. The memory protection unit (MPU) logic can incorrectly handle such an address overlap and allow the lower-privilege software to read or write into the protected memory region, resulting in a privilege-escalation attack. An address-overlap weakness can also be used to launch a denial-of-service attack on the higher-privilege software's memory regions.
Example 1
For example, consider a design with a 16-bit address space that has two software privilege levels: Privileged_SW and Non_privileged_SW. To isolate the system memory regions accessible by these two privilege levels, the design supports three memory regions: Region_0, Region_1, and Region_2. Each region is defined by two 32-bit registers: one for its address range and one for its access policy.
Certain bits of the access policy are defined symbolically as follows:
For any requests from software, an address-protection filter checks the address range and access policies for each of the three regions, and only allows software access if all three filters allow access. Consider the following goals for access control as intended by the designer:
The intention is that Non_privileged_SW cannot modify memory region and policies defined by Privileged_SW in Region_0 and Region_1. Thus, it cannot read or write the memory regions that Privileged_SW is using. (bad code)
Non_privileged_SW can program the Address_range register for Region_2 so that its address overlaps with the ranges defined by Region_0 or Region_1. Using this capability, it is possible for Non_privileged_SW to block any memory region from being accessed by Privileged_SW, i.e., Region_0 and Region_1. This design could be improved in several ways. (good code)
Ensure that software accesses to memory regions are only permitted if all three filters permit access. Additionally, the scheme could define a memory region priority to ensure that Region_2 (the memory region defined by Non_privileged_SW) cannot overlap Region_0 or Region_1 (which are used by Privileged_SW).
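The overlap check itself can be sketched in a few lines of C, under the assumption that trusted logic or firmware validates a requested Region_2 range against the privileged regions before committing the register write (all names here are illustrative, not from the design above):

#include <stdint.h>
#include <stdbool.h>

struct region { uint16_t start; uint16_t end; };  /* inclusive 16-bit range */

/* Two ranges overlap unless one ends before the other begins. */
static bool overlaps(struct region a, struct region b)
{
    return !(a.end < b.start || b.end < a.start);
}

/* Hypothetical check applied before accepting a new Region_2 definition
 * from lower-privilege software: reject any range that intersects the
 * privileged Region_0 or Region_1. */
bool program_region2(struct region requested,
                     struct region region0, struct region region1)
{
    if (overlaps(requested, region0) || overlaps(requested, region1)) {
        return false;  /* refuse the write; keep the old definition */
    }
    /* ...commit the requested range to the Region_2 range register... */
    return true;
}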
Example 2 The example code below is taken from the IOMMU controller module of the HACK@DAC'19 buggy CVA6 SoC [REF-1338]. The static memory map is composed of a set of Memory-Mapped Input/Output (MMIO) regions covering different IP agents within the SoC. Each region is defined by two 64-bit variables representing the base address and size of the memory region (XXXBase and XXXLength). In this example, there are 12 IP agents, and only 4 of them are called out for illustration purposes in the code snippets. Access to the AES IP MMIO region is considered privileged, as it provides access to the AES secret key, internal states, or decrypted data. (bad code)
Example Language: Verilog
...
localparam logic[63:0] PLICLength = 64'h03FF_FFFF;
localparam logic[63:0] UARTLength = 64'h0011_1000;
localparam logic[63:0] AESLength  = 64'h0000_1000;
localparam logic[63:0] SPILength  = 64'h0080_0000;
...
typedef enum logic [63:0] {
    ...
    PLICBase = 64'h0C00_0000,
    UARTBase = 64'h1000_0000,
    AESBase  = 64'h1010_0000,
    SPIBase  = 64'h2000_0000,
    ...
The vulnerable code allows an overlap between the protected MMIO region of the AES peripheral and the unprotected UART MMIO region. As a result, unprivileged users can access the protected region of the AES IP. In the vulnerable example, the UART MMIO region starts at address 64'h1000_0000 and ends at address 64'h1011_1000 (UARTBase is 64'h1000_0000, and the size of the region is given by UARTLength, 64'h0011_1000). The AES MMIO region starts at address 64'h1010_0000 and ends at address 64'h1010_1000, which implies an overlap between the two peripherals' memory regions. Thus, any user with access to the UART can read or write the AES MMIO region, e.g., the AES secret key. To mitigate this issue, remove the overlapping address regions by decreasing the size of the UART memory region or adjusting the memory bases for all the remaining peripherals. [REF-1339] (good code)
Example Language: Verilog
...
localparam logic[63:0] PLICLength = 64'h03FF_FFFF;
localparam logic[63:0] UARTLength = 64'h0000_1000;
localparam logic[63:0] AESLength  = 64'h0000_1000;
localparam logic[63:0] SPILength  = 64'h0080_0000;
...
typedef enum logic [63:0] {
    ...
    PLICBase = 64'h0C00_0000,
    UARTBase = 64'h1000_0000,
    AESBase  = 64'h1010_0000,
    SPIBase  = 64'h2000_0000,
    ...
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1384: Improper Handling of Physical or Environmental Conditions
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product does not properly handle unexpected physical or environmental conditions that occur naturally or are artificially induced.
Hardware products are typically only guaranteed to behave correctly within certain physical limits or environmental conditions. Such products cannot necessarily control the physical or external conditions to which they are subjected. However, the inability to handle such conditions can undermine a product's security. For example, an unexpected physical or environmental condition may cause the flipping of a bit that is used for an authentication decision. This unexpected condition could occur naturally or be induced artificially by an adversary. Physical or environmental conditions of concern include extreme temperatures, power and clock variances, electromagnetic interference, exposure to radiation or focused light, and component aging or degradation.
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1261: Improper Handling of Single Event Upsets
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Technology trends such as CMOS-transistor down-sizing, the use of new materials, and system-on-chip architectures continue to increase the sensitivity of systems to soft errors. These errors are random, and their causes might be internal (e.g., interconnect coupling) or external (e.g., cosmic radiation). Soft errors are not permanent in nature and cause temporary bit flips known as single-event upsets (SEUs). SEUs are induced errors in circuits caused when charged particles lose energy by ionizing the medium through which they pass, leaving behind a wake of electron-hole pairs that cause temporary failures. If these failures occur in security-sensitive modules in a chip, they might compromise the security guarantees of the chip. For instance, these temporary failures could be bit flips that change the privilege of a regular user to root.
Example 1 This is an example from [REF-1089]. See the reference for full details of this issue. Parity is error detecting but not error correcting. (bad code)
Example Language: Other
Due to single-event upsets, bits are flipped in memories. As a result, memory-parity checks fail, which results in a restart and a temporary denial of service of two to three minutes.
(good code)
Example Language: Other
Using error-correcting codes could have avoided the restart caused by SEUs.
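Besides error-correcting codes, another common SEU mitigation is redundancy with majority voting. The C sketch below models triple modular redundancy (TMR) for a security-critical value; the structure and helper names are illustrative, not from the referenced system.

#include <stdint.h>

/* Keep three copies of a security-critical value and read it back
 * through a bitwise majority vote, so any single upset bit in one
 * copy is out-voted by the other two copies. */
struct tmr_u32 { uint32_t a, b, c; };

static void tmr_write(struct tmr_u32 *t, uint32_t v)
{
    t->a = v; t->b = v; t->c = v;
}

static uint32_t tmr_read(const struct tmr_u32 *t)
{
    /* Per-bit majority: a bit is 1 iff it is 1 in at least two copies. */
    return (t->a & t->b) | (t->a & t->c) | (t->b & t->c);
}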
Example 2 In 2016, a security researcher, who was also a patient using a pacemaker, was on an airplane when a bit flip occurred in the pacemaker, likely due to the higher prevalence of cosmic radiation at high altitude. The pacemaker was designed to account for bit flips and went into a default safe mode, which still forced the patient to go to a hospital to get it reset. The bit flip also inadvertently enabled the researcher to access the crash file, perform reverse engineering, and detect a hard-coded key. [REF-1101]
CWE-1192: Improper Identifier for IP Block used in System-On-Chip (SOC)
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The System-on-Chip (SoC) does not have unique, immutable identifiers for each of its components.
A System-on-Chip (SoC) comprises several components (IP) with varied trust requirements. Each IP must be identified uniquely and must distinguish itself from other entities in the SoC without ambiguity. This unique, secured identity is required for various purposes. Most of the time, the identity is used to route a transaction or perform certain actions, including resetting, retrieving sensitive information, and acting upon or on behalf of something else. There are several variants of this weakness, such as identifiers that are missing, not unique, or changeable by untrusted agents.
CWE-1331: Improper Isolation of Shared Resources in Network On Chip (NoC)
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The Network On Chip (NoC) does not isolate, or incorrectly isolates, its on-chip fabric and internal resources such that they are shared between trusted and untrusted agents, creating timing channels.
Typically, networks on chip (NoCs) have many internal resources that are shared between packets from different trust domains. These resources include internal buffers, crossbars and switches, individual ports, and channels. The sharing of resources causes contention and introduces interference between differently trusted domains, which poses a security threat via a timing channel, allowing attackers to infer data that belongs to a trusted agent. It can also introduce network interference, degrading throughput and latency.
Example 1 Consider a NoC that implements a one-dimensional mesh network with four nodes. It supports two flows: Flow A from node 0 to node 3 (via node 1 and node 2) and Flow B from node 1 to node 2. Flows A and B share a common link between node 1 and node 2, and only one flow can use the link in each cycle. One of the masters on this NoC implements a cryptographic algorithm (RSA), and another master is a core that can be exercised by an attacker. The RSA algorithm performs a modulo multiplication of two large numbers and depends on each bit of the secret key: it examines each bit and performs the multiplication only if the bit is 1. This algorithm is known to be prone to timing attacks. Whenever RSA performs a multiplication, there is additional network traffic to the memory controller; one reason for this is cache conflicts. Since this is a one-dimensional mesh, only one flow can use the link in each cycle. Also, packets from the attack program and the RSA program share the output port of the network-on-chip. This contention results in network interference, and the throughput and latency of one flow can be affected by the other flow's demand. (attack code)
The attacker runs a loop program on the core they control, and this causes a cache miss in every iteration for the RSA algorithm. Thus, by observing network-traffic bandwidth and timing, the attack program can determine when the RSA algorithm is performing a multiply operation (i.e., when the secret key bit is 1) and eventually extract the entire secret key.
There may be different ways to fix this particular weakness. (good code)
Example Language: Other
Implement priority-based arbitration inside the NoC and have dedicated buffers or virtual channels for routing secret data from trusted agents.
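To illustrate the arbitration idea, the following toy C model assigns each flow fixed time slots on the shared link, so one flow's demand cannot modulate another flow's latency. This is a sketch of the time-division concept only (the function and constants are hypothetical), not an actual NoC arbiter implementation.

#include <stdint.h>
#include <stdbool.h>

/* Toy model of a time-division-multiplexed link arbiter: each flow
 * owns fixed cycle slots, so a flow's latency no longer depends on
 * the other flow's traffic (removing the contention timing channel,
 * at the cost of unused slots when a flow is idle). */
#define NUM_FLOWS 2

bool link_grant(uint32_t cycle, uint32_t flow_id, bool has_packet)
{
    /* Flow i may use the link only on cycles where cycle % NUM_FLOWS == i. */
    return has_packet && (cycle % NUM_FLOWS) == flow_id;
}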
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1189: Improper Isolation of Shared Resources on System-on-a-Chip (SoC)
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The System-On-a-Chip (SoC) does not properly isolate shared resources between trusted and untrusted agents.
A System-On-a-Chip (SoC) has a lot of functionality, but it may have a limited number of pins or pads. A pin can only perform one function at a time. However, it can be configured to perform multiple different functions. This technique is called pin multiplexing. Similarly, several resources on the chip may be shared to multiplex and support different features or functions. When such resources are shared between trusted and untrusted agents, untrusted agents may be able to access the assets intended to be accessed only by the trusted agents.
Example 1 Consider the following SoC design. The Hardware Root of Trust (HRoT) local SRAM is memory-mapped in the core{0-N} address space. The HRoT allows or disallows access to private memory ranges, thus allowing the SRAM to function as a mailbox for communication between untrusted and trusted HRoT partitions. We assume that the threat is malicious software in the untrusted domain, that this software has access to the core{0-N} memory map, and that it can run at any privilege level on the untrusted cores. The capability of this threat in this example is communication to and from the mailbox region of SRAM, modulated by the hrot_iface. To address this threat, information must not enter or exit the shared region of SRAM through hrot_iface when in secure or privileged mode. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1232: Improper Lock Behavior After Power State Transition
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Register lock bit protection disables changes to system configuration once the bit is set. Some of the protected registers or lock bits become programmable after power state transitions (e.g., entry into and wake from low-power sleep modes), causing the system configuration to be changeable.
Devices may allow device configuration controls to be programmed after a device power reset via a trusted firmware or software module (commonly set by BIOS/bootloader) and then locked from any further modification. This is commonly implemented using a programmable lock bit which, when set, disables writes to a protected set of registers or address regions. After a power state transition, however, the lock bit may revert to the unlocked state. Common weaknesses in such a protection scheme are that the lock gets cleared, the values of the protected registers get reset, or the lock becomes programmable.
Example 1
Consider the memory configuration settings of a system that uses DDR3 DRAM memory. Protecting the DRAM memory configuration from modification by software is required to ensure that system memory access control protections cannot be bypassed. This can be done by using lock bit protection that locks all of the memory configuration registers. The memory configuration lock can be set by the BIOS during the boot process. If such a system also supports a rapid power-on mode like hibernate, the DRAM data must be saved to disk before power is removed and restored to the DRAM once the system powers back up, before the OS resumes operation. To support the hibernate transition back to the operating state, the DRAM memory configuration must be reprogrammed even though it was locked previously. As the hibernate resume does a partial reboot, the memory configuration could be altered before the memory lock is set. Functionally, the hibernate resume flow requires a bypass of the lock-based protection. The memory configuration must therefore be securely stored and restored by trusted system firmware, and the lock settings and system configuration must be restored to the same state they were in before the device entered hibernate mode.
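A hypothetical C sketch of the trusted resume path just described shows the required ordering: restore and verify the saved configuration, then set the lock, before any less-trusted software runs. All function names are illustrative assumptions, not a real firmware API.

#include <stdbool.h>

/* Hypothetical trusted-firmware hooks -- names are illustrative only. */
extern bool restore_saved_dram_config(void);   /* from sealed storage  */
extern bool verify_dram_config(void);          /* read-back comparison */
extern void set_memory_config_lock(void);      /* write-once lock bit  */
extern void halt(void);                        /* assumed not to return */

void resume_from_hibernate(void)
{
    if (!restore_saved_dram_config() || !verify_dram_config()) {
        halt();   /* fail closed rather than resuming unlocked */
        return;
    }
    set_memory_config_lock();   /* lock before handing off control */
    /* ... continue OS resume with the configuration locked ... */
}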
Example 2 The example code below is taken from the register lock module (reglk_wrapper) of the Hack@DAC'21 buggy OpenPiton System-on-Chip (SoC). Upon powering on, most of the silicon registers are initially unlocked. However, critical resources must be configured and locked by setting the lock bit in a register. In this module, a set of six memory-mapped I/O registers (reglk_mem) is defined and maintained to control access to registers inside different peripherals in the SoC [REF-1432]. Each bit represents the read/write ability of a register or set of registers inside a peripheral. Setting improper lock values after a power state transition or system reset would open a temporary window for attackers to read unauthorized data (e.g., secret keys from the crypto engine) and write illegitimate data to critical registers (e.g., framework data). Furthermore, improper register lock values can also result in DoS attacks. In this faulty implementation, the locks are disabled, i.e., initialized to zero, at reset instead of being set to their appropriate values [REF-1433]. Improperly initialized locks might allow unauthorized access to sensitive registers, compromising the system's security. (bad code)
Example Language: Verilog
module reglk_wrapper #(
...
always @(posedge clk_i)
begin
    if(~(rst_ni && ~jtag_unlock && ~rst_9))
    begin
        ...
        for (j=0; j < 6; j=j+1) begin
            reglk_mem[j] <= 'h0;
        end
    end
To resolve this issue, it is crucial to ensure that register locks are correctly initialized during the reset phase of the SoC. Correct initialization values should be established to maintain the system's integrity and security, ensure predictable behavior, and allow proper control of peripherals. The specifics of initializing register locks and their values depend on the SoC's design and the system's requirements; for example, access to all registers through the user privilege level should be locked at reset. To address the problem depicted in the bad code example [REF-1433], the default value for "reglk_mem" should be set to 32'hFFFFFFFF. This ensures that access to protected data is restricted during a power state transition or after reset until the system state transition is complete and security procedures have properly configured the register locks. (good code)
Example Language: Verilog
module reglk_wrapper #(
...
always @(posedge clk_i)
begin
    if(~(rst_ni && ~jtag_unlock && ~rst_9))
    begin
        ...
        for (j=0; j < 6; j=j+1) begin
            reglk_mem[j] <= 'hffffffff;
        end
    end
CWE-1323: Improper Management of Sensitive Trace Data
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Trace data collected from several sources on the System-on-Chip (SoC) is stored in unprotected locations or transported to untrusted agents.
To facilitate verification of complex System-on-Chip (SoC) designs, SoC integrators add specific IP blocks that trace the SoC's internal signals in real time. This infrastructure enables observability of the SoC's internal behavior, validation of its functional design, and detection of hardware and software bugs. Such tracing IP blocks collect traces from several sources on the SoC, including the CPU, crypto coprocessors, and on-chip fabrics. Traces collected from these sources are then aggregated inside the trace IP block and forwarded to trace sinks, such as debug-trace ports that facilitate debugging by external hardware and software debuggers. Since these traces are collected from several security-sensitive sources, they must be protected against untrusted debuggers. If they are stored in unprotected memory, an untrusted software debugger can access these traces and extract secret information. Additionally, if security-sensitive traces are not tagged as secure, an untrusted hardware debugger might access them to extract confidential information.
Example 1 In a SoC, traces are generated from sources that include security-sensitive IP blocks such as the CPU (with tracing information such as instructions executed and memory operands), the on-chip fabric (e.g., memory-transfer signals, transaction type and destination, and on-chip-firewall-error signals), power-management IP blocks (e.g., clock- and power-gating signals), and cryptographic coprocessors (e.g., cryptographic keys and intermediate values of crypto operations), as well as non-security-sensitive IP blocks such as timers and other functional blocks. The collected traces are then forwarded to the debug and trace interface used by the external hardware debugger. (bad code)
Example Language: Other
The traces do not have any privilege level attached to them. All collected traces can be viewed by any debugger (i.e., SoC designer, OEM debugger, or end user).
(good code)
Example Language: Other
Some of the traces are SoC-design-house secrets, while some are OEM secrets. A few are end-user secrets, and the rest are not security-sensitive. Tag all traces with the appropriate privilege level at the source. The bits indicating the privilege level must be immutable in their transit from the trace source to the final trace sink. The debugger's privilege level must be checked before providing access to traces.
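The tagging scheme can be sketched in C as follows. The privilege levels and packet structure here are illustrative assumptions, not any particular vendor's trace format; the point is that the tag is assigned once at the source and checked at the sink.

#include <stdint.h>
#include <stdbool.h>

/* Toy model of privilege-tagged trace packets: a debugger only
 * receives packets at or below its own privilege level. */
enum trace_priv { TRACE_USER = 0, TRACE_OEM = 1, TRACE_DESIGN_HOUSE = 2 };

struct trace_packet {
    enum trace_priv tag;   /* set once at the trace source, then immutable */
    uint8_t payload[32];
};

bool sink_may_emit(const struct trace_packet *p, enum trace_priv debugger)
{
    return p->tag <= debugger;  /* deny traces above the debugger's level */
}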
CWE-1263: Improper Physical Access Control
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product is designed with access restricted to certain information, but it does not sufficiently protect against an unauthorized actor with physical access to these areas.
Sections of a product intended to have restricted access may be inadvertently or intentionally rendered accessible when the implemented physical protections are insufficient. The specific requirements for how robust the physical protection mechanism's design needs to be depend on the type of product being protected. Selecting the correct physical protection mechanism and properly enforcing it through implementation and manufacturing are critical to the overall physical security of the product.
Maintenance
This entry is still under development and will continue to see updates and content improvements.
CWE-1250: Improper Preservation of Consistency Between Independent Representations of Shared State
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product has or supports multiple distributed components or sub-systems that are each required to keep their own local copy of shared data - such as state or cache - but the product does not ensure that all local copies remain consistent with each other.
In highly distributed environments, or on systems with distinct physical components that operate independently, there is often a need for each component to store and update its own local copy of key data such as state or cache, so that all components have the same "view" of the overall system and operate in a coordinated fashion. For example, users of a social media service or a massively multiplayer online game might be using their own personal computers while also interacting with different physical hosts in a globally distributed service, but all participants must be able to have the same "view" of the world. Alternatively, a processor's Memory Management Unit (MMU) might have "shadow" MMUs to distribute its workload, and all shadow MMUs are expected to have the same accessible ranges of memory. In such environments, it becomes critical for the product to ensure that this "shared state" is consistently modified across all distributed systems. If state is not consistently maintained across all systems, then critical transactions might take place out of order, or some users might not get the same data as other users. When this inconsistency affects the correctness of operations, it can introduce vulnerabilities in mechanisms that depend on consistent state.
Example 1 Suppose a processor's Memory Management Unit (MMU) has five other shadow MMUs to distribute its workload for its various cores. Each MMU has the start address and end address of "accessible" memory. Any time this accessible range changes (as per the processor's boot status), the main MMU sends an update message to all the shadow MMUs. Suppose the interconnect fabric does not prioritize such "update" packets over other general traffic packets. This introduces a race condition: if an attacker can flood the target with enough messages that some attack packets reach the target before the new access ranges get updated, the attacker can exploit the window in which the shadow MMUs still hold stale ranges.
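One common way to make such updates robust is to version them. The following C sketch (with illustrative names, not from any real MMU design) tags each broadcast with a monotonically increasing generation number so a shadow copy rejects stale or raced updates:

#include <stdint.h>
#include <stdbool.h>

/* Toy shadow-MMU update: each broadcast carries a generation number,
 * and a shadow only applies an update newer than its current state,
 * so delayed or attacker-raced packets cannot roll the state back. */
struct range_update { uint64_t gen; uint32_t start, end; };
struct shadow_mmu   { uint64_t gen; uint32_t start, end; };

bool apply_update(struct shadow_mmu *s, const struct range_update *u)
{
    if (u->gen <= s->gen) {
        return false;  /* stale or replayed update: ignore it */
    }
    s->gen = u->gen;
    s->start = u->start;
    s->end = u->end;
    return true;
}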
Research Gap
Issues related to state and cache - creation, preservation, and update - are a significant gap in CWE that is expected to be addressed in future versions. It likely has relationships to concurrency and synchronization, incorrect behavior order, and other areas that already have some coverage in CWE, although the focus has typically been on independent processes on the same operating system - not on independent systems that are all a part of a larger system-of-systems.
CWE-1231: Improper Prevention of Lock Bit Modification
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product uses a trusted lock bit for restricting access to registers, address regions, or other resources, but the product does not prevent the value of the lock bit from being modified after it has been set.
In integrated circuits and hardware intellectual property (IP) cores, device configuration controls are commonly programmed after a device power reset by a trusted firmware or software module (e.g., BIOS/bootloader) and then locked from any further modification. This behavior is commonly implemented using a trusted lock bit. When set, the lock bit disables writes to a protected set of registers or address regions. Design or coding errors in the implementation of the lock bit protection feature may allow the lock bit to be modified or cleared by software after it has been set. Attackers might be able to unlock the system and features that the bit is intended to protect.
Example 1 Consider the example design below for a digital thermal sensor that detects overheating of the silicon and triggers system shutdown. The system critical temperature limit (CRITICAL_TEMP_LIMIT) and thermal sensor calibration (TEMP_SENSOR_CALIB) data have to be programmed by firmware, and then the register needs to be locked (TEMP_SENSOR_LOCK). (bad code)
Example Language: Other
In this example, note that if the system heats to critical temperature, the response of the system is controlled by the TEMP_HW_SHUTDOWN bit [1], which is not lockable. Thus, the intended security property of the critical temperature sensor cannot be fully protected, since software can misconfigure the TEMP_HW_SHUTDOWN register even after the lock bit is set to disable the shutdown response. (good code)
Example Language: Other
To fix this weakness, one could change the TEMP_HW_SHUTDOWN field to be locked by TEMP_SENSOR_LOCK.
Example 2 The following example code is a snippet from the register locks inside the buggy OpenPiton SoC of HACK@DAC'21 [REF-1350]. Register locks help prevent SoC peripherals' registers from malicious use of resources. The registers that can potentially leak secret data are locked by register locks. In the vulnerable code, reglk_mem holds the locking information. If one of its bits toggles to 1, the corresponding peripheral's registers will be locked. In the context of the HACK@DAC System-on-Chip (SoC), it is pertinent to note the existence of two distinct categories of reset signals. First, there is a global reset signal denoted as "rst_ni," which can simultaneously reset all peripherals to their respective initial states. Second, there are peripheral-specific reset signals, such as "rst_9," which exclusively reset individual peripherals back to their initial states. The administration of these reset signals is the responsibility of the reset controller module. (bad code)
Example Language: Verilog
always @(posedge clk_i)
begin
    if(~(rst_ni && ~jtag_unlock && ~rst_9))
    begin
        for (j=0; j < 6; j=j+1) begin
            reglk_mem[j] <= 'h0;
        end
    end
...
In the buggy SoC architecture during HACK@DAC'21, a critical issue arises within the reset controller module. Specifically, the reset controller can inadvertently transmit a peripheral reset signal to the register lock within the user privilege domain. This unintentional action can reset the register locks, potentially exposing private data from all other peripherals, rendering them accessible and readable. To mitigate the issue, remove the extra reset signal rst_9 from the register lock's if condition. [REF-1351] (good code)
Example Language: Verilog
always @(posedge clk_i)
begin
    if(~(rst_ni && ~jtag_unlock))
    begin
        for (j=0; j < 6; j=j+1) begin
            reglk_mem[j] <= 'h0;
        end
    end
...
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1319: Improper Protection against Electromagnetic Fault Injection (EM-FI)
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The device is susceptible to electromagnetic fault injection attacks, causing device internal information to be compromised or security mechanisms to be bypassed.
Electromagnetic fault injection may allow an attacker to locally and dynamically modify the signals (both internal and external) of an integrated circuit. EM-FI attacks consist of producing a local, transient magnetic field near the device, inducing current in the device's wires. A typical EM-FI setup is made up of a pulse-injection circuit that generates a high-current transient in an EMI coil, producing an abrupt magnetic pulse that couples to the target and produces faults in the device. Such faults can corrupt stored data, cause instructions to be skipped, or bypass security mechanisms.
Example 1 In many devices, security-related information is stored in fuses. These fuses are loaded into shadow registers at boot time. Disturbing this transfer phase with EM-FI can lead to the shadow registers storing erroneous values, potentially resulting in reduced security. Colin O'Flynn has demonstrated an attack scenario that uses electromagnetic glitching during boot to bypass security and gain read access to flash, plus read and erase access to the shadow memory area (where the private password is stored). Most devices in the MPC55xx and MPC56xx series that include the Boot Assist Module (BAM) (a serial or CAN bootloader mode) are susceptible to this attack. In that work, a GM ECU was used as a real-life target. While the success rate appears low (less than 2 percent), in practice a success can be found within 1-5 minutes once the EM-FI tool is set up. In a practical scenario, the author showed that success can be achieved within 30-60 minutes from a cold start. Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
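A frequently used firmware-level hardening for the fuse-to-shadow transfer described above is to read redundantly and compare. The C sketch below is hypothetical (read_fuse_bank and enter_lockdown are assumed platform hooks, not a real API): a transient EM-FI pulse during one of the two reads then produces a detectable mismatch instead of a silently wrong shadow value.

#include <stdint.h>

extern uint32_t read_fuse_bank(int idx);   /* hypothetical fuse read  */
extern void     enter_lockdown(void);      /* fail-secure; no return  */

/* Load each security fuse twice and compare before committing it to
 * the shadow register. */
uint32_t load_fuse_shadow(int idx)
{
    uint32_t first  = read_fuse_bank(idx);
    uint32_t second = read_fuse_bank(idx);
    if (first != second) {
        enter_lockdown();  /* glitch suspected: refuse to boot */
    }
    return first;
}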
Maintenance
This entry is attack-oriented and may require significant modification in future versions, or even deprecation. It is not clear whether there is really a design "mistake" that enables such attacks, so this is not necessarily a weakness and may be more appropriate for CAPEC.
CWE-1247: Improper Protection Against Voltage and Clock Glitches
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The device does not contain, or contains incorrectly implemented, circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
A device might support features such as secure boot that are supplemented with hardware and firmware support. This involves establishing a chain of trust, starting with an immutable root of trust, by checking the signature of the next stage (culminating with the OS and runtime software) against a golden value before transferring control. The intermediate stages typically set up the system in a secure state by configuring several access control settings. Similarly, security logic for exercising a debug or testing interface may be implemented in hardware, firmware, or both. A device needs to guard against fault attacks, such as voltage glitches and clock glitches, that an attacker may employ in an attempt to compromise the system.
Example 1 Below is a representative snippet of C code that is part of the secure-boot flow. A signature of the runtime-firmware image is calculated and compared against a golden value. If the signatures match, the bootloader loads runtime firmware. If there is no match, an error halt occurs. If the underlying hardware executing this code does not contain any circuitry or sensors to detect voltage or clock glitches, an attacker might launch a fault-injection attack right when the signature check is happening (at the location marked with the comment), causing a bypass of the signature-checking process. (bad code)
Example Language: C
...
if (signature_matches) { // <- Glitch Here
    load_runtime_firmware();
}
else {
    do_not_load_runtime_firmware();
}
...
After bypassing secure boot, an attacker can gain access to system assets to which the attacker should not have access. (good code)
If the underlying hardware detects a voltage or clock glitch, the information can be used to prevent the glitch from being successful.
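On the software side, the signature decision itself can also be hardened so that a single glitch is insufficient. The following C sketch is illustrative only (SIG_OK and the helper functions are assumptions, not the snippet above): the check is evaluated twice and compared against a magic constant rather than a boolean.

#include <stdint.h>

#define SIG_OK 0x3CA5965Au  /* magic value unlikely to appear via a glitch */

extern uint32_t check_signature(void);   /* returns SIG_OK on a match */
extern void load_runtime_firmware(void);
extern void do_not_load_runtime_firmware(void);

/* Evaluate the signature decision twice against a magic constant, so a
 * single skipped or corrupted branch cannot reach the firmware load. */
void secure_boot_decision(void)
{
    if (check_signature() == SIG_OK) {
        if (check_signature() == SIG_OK) {
            load_runtime_firmware();
            return;
        }
    }
    do_not_load_runtime_firmware();
}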
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1320: Improper Protection for Outbound Error Messages and Alert Signals
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Untrusted agents can disable alerts about signal conditions exceeding limits or the response mechanism that handles such alerts.
Hardware sensors are used to detect whether a device is operating within design limits. The threshold values for these limits are set by hardware fuses or trusted software, such as a BIOS. Modification of these limits may be protected by hardware mechanisms. When device sensors detect out-of-bounds conditions, alert signals may be generated for remedial action, which may take the form of device shutdown or throttling. Alert signals that are not properly secured may be disabled or used to generate spurious alerts, causing degraded performance or denial of service (DoS). These alerts may be masked by untrusted software. Examples include thermal and power sensor alerts.
Example 1
Consider a platform design where a Digital Thermal Sensor (DTS) is used to monitor temperature and compare that output against a threshold value. If the temperature output equals or exceeds the threshold value, the DTS unit sends an alert signal to the processor. The processor, upon receiving the alert, triggers a system shutdown. The alert signal is handled as a General-Purpose I/O (GPIO) pin configured in input mode. (bad code)
Example Language: Other
The processor-GPIO controller exposes software-programmable controls that allow untrusted software to reprogram the state of the GPIO pin.
Reprogramming the state of the GPIO pin allows malicious software to trigger spurious alerts or to set the alert pin to a zero value so that thermal sensor alerts are not received by the processor. (good code)
Example Language: Other
The GPIO alert-signal pin is blocked from untrusted software access and is controlled only by trusted software, such as the System BIOS.
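The blocking idea can be modeled as a write-once (sticky-lock) pad-configuration register, as in this illustrative C sketch; the structure and functions are hypothetical, not a real GPIO controller interface.

#include <stdint.h>
#include <stdbool.h>

/* Toy model of a lockable pad-configuration register: once trusted
 * boot firmware configures the alert pin and sets the lock, later
 * (untrusted) writes are silently dropped. */
struct pad_cfg { uint32_t mode; bool locked; };

bool pad_cfg_write(struct pad_cfg *p, uint32_t mode)
{
    if (p->locked) {
        return false;   /* alert pin is no longer reprogrammable */
    }
    p->mode = mode;
    return true;
}

void pad_cfg_lock(struct pad_cfg *p) { p->locked = true; }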
CWE-1300: Improper Protection of Physical Side Channels
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The device does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.
An adversary could monitor and measure physical phenomena to detect patterns and make inferences, even if it is not possible to extract the information in the digital domain. Physical side channels have been well-studied for decades in the context of breaking implementations of cryptographic algorithms and other attacks against security features. These side channels may be easily observed by an adversary with physical access to the device, or with a tool in close proximity. If the adversary can monitor hardware operation and correlate its data processing with power, EME, or acoustic measurements, the adversary might be able to recover secret keys and data.
Example 1 Consider a device that checks a passcode to unlock the screen. (bad code)
Example Language: Other
As each character of the PIN is entered, a correct character exhibits one current-pulse shape while an incorrect character exhibits a different current-pulse shape.
This creates a side channel: the PIN used to unlock a cell phone should not reveal any characteristics about itself. An attacker could monitor the pulses using an oscilloscope or another method. Once the first character is correctly guessed (based on the oscilloscope readings), the attacker can move on to the next character, which is far more efficient than the brute-force method of guessing every possible sequence of characters.
Example Language: Other
Rather than comparing each character to the correct PIN value as it is entered, the device could accumulate the PIN in a register and do the comparison all at once at the end. Alternatively, the components for the comparison could be modified so that the current-pulse shape is the same regardless of the correctness of the entered character.
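The "compare all at once" mitigation corresponds to the classic constant-time comparison pattern, sketched below in C (the function name is illustrative). Every byte is always examined and the result never branches on secret data, so timing, and to a first approximation the power profile, does not depend on how many leading characters are correct.

#include <stddef.h>
#include <stdint.h>

/* Accumulate the XOR of all byte differences, then test once at the end. */
int pin_equal(const uint8_t *entered, const uint8_t *stored, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= entered[i] ^ stored[i];
    }
    return diff == 0;
}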
Example 2 Consider the device vulnerability CVE-2021-3011, which affects certain microcontrollers [REF-1221]. The Google Titan Security Key is used for two-factor authentication using cryptographic algorithms. The device uses an internal secret key for this purpose and exchanges information based on this key for the authentication. If this internal secret key and the encryption algorithm were known to an adversary, the key function could be duplicated, allowing the adversary to masquerade as the legitimate user. (bad code)
Example Language: Other
The local method of extracting the secret key consists of plugging the key into a USB port and using electromagnetic (EM) sniffing tools and computers.
(good code)
Example Language: Other
Several solutions could have been considered by the manufacturer. For example, the manufacturer could shield the circuitry in the key or add randomized delays, indirect calculations with random values involved, or randomly ordered calculations to make extraction much more difficult. The manufacturer could use a combination of these techniques.
Example 3 The code snippet provided here is part of the modular exponentiation module found in the HACK@DAC'21 Openpiton System-on-Chip (SoC), specifically within the RSA peripheral [REF-1368]. Modular exponentiation, denoted as "a^b mod n," is a crucial operation in RSA public/private key encryption. In RSA decryption, where 'c' represents ciphertext, 'm' stands for a message, and 'd' corresponds to the private key, the operation is carried out using this modular exponentiation as follows: m = c^d mod n, where 'n' is the product of two large prime numbers. (bad code)
Example Language: Verilog
...
module mod_exp
...
`UPDATE: begin
    if (exponent_reg != 'd0) begin
        ...
        if (exponent_reg[0])
            result_reg <= result_next;
        base_reg <= base_next;
        exponent_reg <= exponent_next;
        state <= `UPDATE;
...
endmodule
The vulnerable code shows a buggy implementation of binary exponentiation: it updates the result register (result_reg) only when the corresponding exponent bit (exponent_reg[0]) is set to 1; when this exponent bit is 0, the result register is not updated. This implementation introduces a physical power side channel within the RSA core, which could expose the private exponent to a determined physical attacker. Exposure of the private exponent leads to a complete compromise of the private key. To mitigate this, the developer should minimize dependencies on secret-dependent conditions. Where branching is unavoidable, masking mechanisms can obfuscate the power-consumption patterns exhibited by the module (see the good code example). Additionally, certain algorithms, such as the Karatsuba algorithm, can serve as illustrative examples of side-channel-resistant algorithms, as they require only a limited number of branch conditions [REF-1369]. (good code)
Example Language: Verilog
...
module mod_exp
...
`UPDATE: begin
    if (exponent_reg != 'd0) begin
        ...
        if (exponent_reg[0]) begin
            result_reg <= result_next;
        end else begin
            mask_reg <= result_next;
        end
        base_reg <= base_next;
        exponent_reg <= exponent_next;
        state <= `UPDATE;
...
endmodule
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
CWE-1338: Improper Protections Against Hardware Overheating
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
A hardware device is missing or has inadequate protection features to prevent overheating.
Hardware, electrical circuits, and semiconductor silicon have thermal side effects: some of the energy consumed by the device is dissipated as heat, raising the temperature of the device. For example, in semiconductors, a higher operating frequency results in higher power dissipation and heat. The leakage current in CMOS circuits increases with temperature, which creates positive feedback that can result in thermal runaway and permanently damage the device. Any device lacking protections such as thermal sensors, adequate platform cooling, or thermal insulation is susceptible to attacks by malicious software that deliberately operates the device in modes that cause overheating. This can be used as an effective denial-of-service (DoS) or permanent denial-of-service (PDoS) attack. Depending on the type of hardware device and its expected usage, such overheating can also cause safety hazards and reliability issues. Note that battery failures can also cause device overheating, but the mitigations and examples included in this entry cannot reliably protect against a battery failure. Similar weaknesses can exist for a lack of protection against overvoltage or overcurrent conditions; however, thermal heat is generated by normal hardware operation, so the device itself should implement protection from overheating.
Example 1 Malicious software running on a core can execute instructions that consume maximum power or increase core frequency. Such a power-virus program could execute on the platform for an extended time to overheat the device, resulting in permanent damage. In this scenario, the execution core and platform lack the thermal sensors, performance throttling, and platform-cooling countermeasures needed to ensure that no software executing on the system can cause overheating past the maximum allowable temperature. The platform and SoC should have failsafe thermal limits enforced by thermal sensors that trigger critical-temperature alerts when high temperature is detected. Upon detection of high temperatures, the platform should trigger cooling or shut down automatically.
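An illustrative C sketch of such a failsafe policy follows. The thresholds and platform hooks (read_temp_mC, throttle_frequency, emergency_shutdown) are assumptions for the sketch; in a real design this policy would be enforced by hardware or trusted firmware independently of the software under its control.

#include <stdint.h>

extern int32_t read_temp_mC(void);        /* hypothetical DTS read   */
extern void    throttle_frequency(void);  /* reduce clocks/voltage   */
extern void    emergency_shutdown(void);  /* last-resort power-off   */

#define THROTTLE_AT_mC  95000   /* illustrative thresholds (mC)      */
#define SHUTDOWN_AT_mC 105000

/* Failsafe thermal policy: throttle first, then shut down, regardless
 * of what software is running. */
void thermal_poll(void)
{
    int32_t t = read_temp_mC();
    if (t >= SHUTDOWN_AT_mC) {
        emergency_shutdown();
    } else if (t >= THROTTLE_AT_mC) {
        throttle_frequency();
    }
}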
CWE-1259: Improper Restriction of Security Token Assignment
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The System-On-A-Chip (SoC) implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens are improperly protected.
Systems-On-A-Chip (integrated circuits and hardware engines) implement Security Tokens to differentiate and identify which actions originated from which agent. These actions may be one of the directives: 'read', 'write', 'program', 'reset', 'fetch', 'compute', etc. Security Tokens are assigned to every agent in the system that is capable of generating an action or receiving an action from another agent. Multiple Security Tokens may be assigned to an agent and may be unique based on the agent's trust level or allowed privileges.

Since Security Tokens are integral to the maintenance of security in an SoC, they need to be protected properly. A common weakness afflicting Security Tokens is improperly restricting their assignment to trusted components. Consequently, an improperly protected Security Token could be reprogrammed by a malicious agent (i.e., the Security Token is mutable) to spoof an action as if it originated from a trusted agent.
Example 1

Consider a system with a register for storing an AES key for encryption and decryption. The key is 128 bits, implemented as a set of four 32-bit registers. The key register assets have an associated control register, AES_KEY_ACCESS_POLICY, which provides the necessary access controls. This access-policy register defines which agents may engage in a transaction, and the type of transaction, with the AES-key registers. Each bit in this 32-bit register defines a Security Token, so up to 32 Security Tokens can be granted access to the AES-key registers. When a bit is set (i.e., "1"), the agent whose identity matches the number of that bit is allowed the respective action; when the bit is clear (i.e., "0"), that agent is disallowed the action. Assume the system has two agents: a Main-controller and an Aux-controller, with Security Tokens "1" and "2" respectively.

An agent with Security Token "1" has access to the AES_ENC_DEC_KEY_0 through AES_ENC_DEC_KEY_3 registers. As per the above access policy, access to the AES-key registers is allowed only if the Security Token is "1". (bad code)
Example Language: Other
The Aux-controller could reprogram its Security Token from "2" to "1".
The SoC does not properly protect the Security Tokens of its agents; hence, the Aux-controller in the above example can spoof the transaction (i.e., send the transaction as if it is coming from the Main-controller) to access the AES-Key registers. (good code)
Example Language: Other
The SoC needs to protect the Security Tokens. None of the agents in the SoC should have the ability to change the Security Token.
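One way to satisfy this requirement is to fix each agent's Security Token in RTL rather than exposing it through a writable register. The sketch below assumes the two agents from the example; the module and signal names are hypothetical.

module token_assign
(
  output [4:0] main_ctrl_token,
  output [4:0] aux_ctrl_token
);
  // Tokens are hardwired at integration time; no agent can reprogram them.
  assign main_ctrl_token = 5'd1; // Main-controller identity
  assign aux_ctrl_token  = 5'd2; // Aux-controller identity
endmodule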
Maintenance
This entry is still under development and will continue to see updates and content improvements. Currently it is expressed as a general absence of a protection mechanism as opposed to a specific mistake, and the entry's name and description could be interpreted as applying to software.
CWE-1256: Improper Restriction of Software Interfaces to Hardware Features
The product provides software-controllable device functionality for capabilities such as power and clock management, but it does not properly limit functionality that can lead to modification of hardware memory or register bits, or the ability to observe physical side channels.
It is frequently assumed that physical attacks such as fault injection and side-channel analysis require an attacker to have physical access to the target device. This assumption may be false if the device has improperly secured power management features, or similar features. For mobile devices, minimizing power consumption is critical, but these devices run a wide variety of applications with different performance requirements. Software-controllable mechanisms to dynamically scale device voltage and frequency and monitor power consumption are common features in today's chipsets, but they also enable attackers to mount fault injection and side-channel attacks without having physical access to the device.

Fault injection attacks involve strategic manipulation of bits in a device to achieve a desired effect such as skipping an authentication step, elevating privileges, or altering the output of a cryptographic operation. Manipulation of the device clock and voltage supply is a well-known technique to inject faults and is cheap to implement with physical device access. Poorly protected power management features allow these attacks to be performed from software. Other features, such as the ability to write repeatedly to DRAM at a rapid rate from unprivileged software, can result in bit flips in other memory locations (Rowhammer, [REF-1083]).

Side-channel analysis requires gathering measurement traces of physical quantities such as power consumption. Modern processors often include power-metering capabilities in the hardware itself (e.g., Intel RAPL) which, if not adequately protected, enable attackers to gather the measurements necessary for performing side-channel attacks from software.
Example 1

This example considers the Rowhammer problem [REF-1083]. The Rowhammer issue was caused by a program in a tight loop writing repeatedly to a location to which the program was allowed to write but causing an adjacent memory location value to change. (bad code)
Example Language: Other
Continuously writing the same value to the same address causes the value of an adjacent location to change value.
Preventing the loop required to carry out the Rowhammer exploit is not always possible: (good code)
Example Language: Other
Redesign the RAM devices to reduce inter-cell capacitive coupling, making the Rowhammer exploit impossible.
While the redesign may be possible for new devices, it is not possible for existing devices. There is also the possibility that reducing capacitance with a re-layout would impact the density of the device, resulting in a less capable, more costly device.

Example 2

Suppose a hardware design implements a set of software-accessible registers for scaling clock frequency and voltage but does not control access to these registers. Attackers may cause register and memory changes and race conditions by changing the clock or voltage of the device under their control.

Example 3

Consider the following SoC design. Security-critical settings for scaling clock frequency and voltage are available in a range of registers bounded by [PRIV_END_ADDR : PRIV_START_ADDR] in the tmcu.csr module in the HW Root of Trust. These values are writable based on the lock_bit register in the same module. The lock_bit is only writable by privileged software running on the tmcu. We assume that untrusted software running on any of the Core{0-N} processors has access to the input and output ports of the hrot_iface. If untrusted software can clear the lock_bit or write the clock-frequency and voltage registers due to inadequate protection, a fault injection attack could be performed; a sketch of a lock-protected register file appears after this example.

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
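A minimal sketch of the lock-bit protection described in Example 3, assuming a hypothetical CSR module in which wr_priv is asserted only for writes originating from privileged tmcu software; all names and widths are illustrative.

module dvfs_csr
(
  input             clk,
  input             resetn,
  input             wr_en,
  input             wr_priv,   // asserted only for privileged tmcu software writes
  input             lock_req,
  input      [31:0] wr_data,
  output reg        lock_bit,
  output reg [31:0] clk_volt_cfg
);
  always @(posedge clk or negedge resetn)
    if (~resetn) begin
      lock_bit     <= 1'b0;
      clk_volt_cfg <= 32'h0;
    end else begin
      // clock/voltage settings are writable only by privileged software while unlocked
      if (wr_en && wr_priv && ~lock_bit)
        clk_volt_cfg <= wr_data;
      // the lock is one-way until the next reset
      if (lock_req && wr_priv)
        lock_bit <= 1'b1;
    end
endmodule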
CWE-1224: Improper Restriction of Write-Once Bit Fields
The hardware design's control register "sticky bits" or write-once bit fields are improperly implemented, such that they can be reprogrammed by software.
Software-programmable controls and settings for integrated circuits and hardware IP are commonly stored in register circuits. These register contents have to be initialized at hardware reset to default values that are hard-coded in the hardware description language (HDL) code of the hardware unit. A common method of protecting register settings from modification by software is to make them write-once or "sticky": the register can be written only once, whereupon it becomes read-only. This allows initial boot software to configure system settings to secure values while blocking runtime software from modifying them.

Failure to implement write-once restrictions in the hardware design can expose such registers to being reprogrammed by software and written multiple times. For example, a write-once field might be implemented to become write-protected only if it has been set to value "1", in which case it works as "write-1-once" rather than "write-once".
Example 1

Consider the example SystemVerilog design module shown below. The register_write_once_example module implements a register with a write-once field; bit 0 captures the Write_once_status value. This implementation could be for a register that the specification defines as write-once, but the Write_once_status field gets set by input data bit 0 on the first write. (bad code)
Example Language: Verilog
module register_write_once_example
(
  input [15:0] Data_in,
  input Clk,
  input ip_resetn,
  input global_resetn,
  input write,
  output reg [15:0] Data_out
);

reg Write_once_status;

always @(posedge Clk or negedge ip_resetn)
  if (~ip_resetn)
  begin
    Data_out <= 16'h0000;
    Write_once_status <= 1'b0;
  end
  else if (write & ~Write_once_status)
  begin
    Data_out <= Data_in & 16'hFFFE;
    Write_once_status <= Data_in[0]; // Input bit 0 sets Write_once_status
  end
  else if (~write)
  begin
    Data_out[15:1] <= Data_out[15:1];
    Data_out[0] <= Write_once_status;
  end
endmodule

The above example only locks further writes if the Write_once_status bit is written to one, so it acts as "write-1-once" instead of write-once. (good code)
Example Language: Verilog
module register_write_once_example
(
  input [15:0] Data_in,
  input Clk,
  input ip_resetn,
  input global_resetn,
  input write,
  output reg [15:0] Data_out
);

reg Write_once_status;

always @(posedge Clk or negedge ip_resetn)
  if (~ip_resetn)
  begin
    Data_out <= 16'h0000;
    Write_once_status <= 1'b0;
  end
  else if (write & ~Write_once_status)
  begin
    Data_out <= Data_in & 16'hFFFE;
    Write_once_status <= 1'b1; // Write-once status set on first write, independent of input
  end
  else if (~write)
  begin
    Data_out[15:1] <= Data_out[15:1];
    Data_out[0] <= Write_once_status;
  end
endmodule
CWE-1266: Improper Scrubbing of Sensitive Data from Decommissioned Device
The product does not properly provide a capability for the product administrator to remove sensitive data at the time the product is decommissioned. A scrubbing capability could be missing, insufficient, or incorrect.
When a product is decommissioned - i.e., taken out of service - best practices or regulatory requirements may require the administrator to remove or overwrite sensitive data first, i.e., "scrubbing." Improper scrubbing of sensitive data from a decommissioned device leaves that data vulnerable to acquisition by a malicious actor. Sensitive data may include, but is not limited to, device/manufacturer proprietary information, user/device credentials, network configurations, and other forms of sensitive data.
Maintenance
This entry is still under development and will continue to see updates and content improvements.
CWE-1315: Improper Setting of Bus Controlling Capability in Fabric End-point
The bus controller enables bits in the fabric end-point to allow responder devices to control transactions on the fabric.
To support reusability, certain fabric interfaces and end-points provide a configurable register bit that allows IP blocks connected to the controller to access other peripherals connected to the fabric. This allows the end-point to be used with devices that function as a controller or responder. If this bit is set by default in hardware, or if firmware incorrectly sets it later, a device intended to be a responder on a fabric is now capable of controlling transactions to other devices and might compromise system security.
Example 1

A typical phone platform consists of the main compute core or CPU, a DRAM-memory chip, an audio codec, a baseband modem, a power-management integrated circuit ("PMIC"), a connectivity (WiFi and Bluetooth) modem, and several other analog/RF components. The main CPU is the only component that should control transactions; all the other components are intended to be responder-only devices. All the components implement a PCIe end-point to interface with the rest of the platform. Responder devices should have the bus-control-enable bit in the PCIe end-point register set to 0 in hardware to prevent them from controlling transactions to the CPU or other peripherals.

In this example, the audio-codec chip does not have the bus-control-enable register bit hardcoded to 0, and there is no platform-firmware flow to verify that the bit is set to 0 in all responders. The audio codec can therefore control transactions to the CPU and other platform components, and could potentially modify assets in other platform components to subvert system security.

As a mitigation, platform firmware should include a flow that checks the configuration of the bus-control-enable bit in all responder devices and clears the bit on any responder where it is set. Ideally, the default value of this register bit should be hardcoded to 0 in RTL, with access control to prevent untrusted entities from setting the bit to become bus controllers. A sketch of such a hardcoded default follows.
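A minimal sketch of the hardcoded default: in a hypothetical responder end-point, the bus-control-enable bit is a constant rather than a writable register, so neither firmware nor software can promote the device to a bus controller. All names are illustrative.

module responder_endpoint_cfg
(
  output bus_control_enable
);
  // Hardcoded to 0 in RTL: the end-point can never initiate (control) transactions.
  assign bus_control_enable = 1'b0;
endmodule

If a product must support both controller and responder roles, the bit should instead reset to 0 and be writable only through an access-controlled path.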
CWE-1311: Improper Translation of Security Attributes by Fabric Bridge
The bridge incorrectly translates security attributes from either trusted to untrusted or from untrusted to trusted when converting from one fabric protocol to another.
A bridge allows IP blocks supporting different fabric protocols to be integrated into the system. Fabric end-points or interfaces usually have dedicated signals to transport security attributes: for example, HPROT signals in AHB, AxPROT signals in AXI, and MReqInfo and SRespInfo signals in OCP. The values on these signals indicate the security attributes of the transaction, including the immutable hardware identity of the controller initiating the transaction, the privilege level, and the type of transaction (e.g., read/write, cacheable/non-cacheable, posted/non-posted).

A weakness can arise if the bridge IP block, which translates the signals from the protocol used in the IP block end-point to the protocol used by the central bus, does not properly translate the security attributes. As a result, the identity of the initiator could be translated from untrusted to trusted or vice-versa. This could result in access-control bypass, privilege escalation, or denial of service.
Example 1

The bridge interfaces between OCP and AHB end-points. OCP uses the MReqInfo signal to indicate security attributes, whereas AHB uses the HPROT signal. The width of MReqInfo can be customized as needed; in this example, MReqInfo is 5 bits wide and carries the privilege level of the OCP controller. The values 5'h11, 5'h10, 5'h0F, 5'h0D, 5'h0C, 5'h0B, 5'h09, 5'h08, 5'h04, and 5'h02 in MReqInfo indicate that the request is coming from a privileged state of the OCP bus controller, while the values 5'h1F, 5'h0E, and 5'h00 indicate an untrusted privilege state. Though HPROT is a 5-bit signal, we only consider the lower two bits in this example: HPROT values 2'b00 and 2'b10 are considered trusted, and 2'b01 and 2'b11 are considered untrusted.

The OCP2AHB bridge is expected to translate trusted identities on the controller side to trusted identities on the responder side, and likewise to translate untrusted identities on the controller side to untrusted identities on the responder side. (bad code)
Example Language: Verilog
module ocp2ahb
(
  ahb_hprot,
  ocp_mreqinfo
);

output [1:0] ahb_hprot;   // output is 2-bit signal for AHB HPROT
input [4:0] ocp_mreqinfo; // input is 5-bit signal from OCP MReqInfo

wire [6:0] p0_mreqinfo_o_temp; // OCP signal that transmits hardware identity of bus controller
wire y;
reg [1:0] ahb_hprot;

// hardware identity of bus controller is in bits 5:1 of p0_mreqinfo_o_temp signal
assign p0_mreqinfo_o_temp[6:0] = {1'b0, ocp_mreqinfo[4:0], y};

always @*
begin
  case (p0_mreqinfo_o_temp[4:2]) // OCP MReqInfo to AHB HPROT mapping
    3'b000: ahb_hprot = 2'b11;
    3'b001: ahb_hprot = 2'b00;
    3'b010: ahb_hprot = 2'b00;
    3'b011: ahb_hprot = 2'b01;
    3'b100: ahb_hprot = 2'b00;
    3'b101: ahb_hprot = 2'b00;
    3'b110: ahb_hprot = 2'b10;
    3'b111: ahb_hprot = 2'b00;
  endcase
end
endmodule

The logic in the case statement only checks p0_mreqinfo_o_temp[4:2], which corresponds to ocp_mreqinfo[3:1], so the full 5-bit identity is never examined. When ocp_mreqinfo is 5'h1F or 5'h0E, p0_mreqinfo_o_temp[4:2] evaluates to 3'b111. As a result, the untrusted OCP IDs 5'h1F and 5'h0E get translated to the trusted ahb_hprot value 2'b00.
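A hedged sketch of a corrected bridge, under the assumptions of this example: the full 5-bit MReqInfo value is decoded so that the three untrusted identities (5'h1F, 5'h0E, 5'h00) can never alias to a trusted HPROT value. The module name and the choice of 2'b01 for the untrusted encoding are illustrative.

module ocp2ahb_fixed
(
  output reg [1:0] ahb_hprot,
  input      [4:0] ocp_mreqinfo
);
  always @* begin
    case (ocp_mreqinfo)
      // untrusted OCP identities map only to untrusted HPROT values
      5'h1F, 5'h0E, 5'h00: ahb_hprot = 2'b01;
      // all privileged identities map to a trusted HPROT value
      default:             ahb_hprot = 2'b00;
    endcase
  end
endmodule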
CWE-1246: Improper Write Handling in Limited-write Non-Volatile Memories
The product does not implement or incorrectly implements wear leveling operations in limited-write non-volatile memories.
Non-volatile memories such as NAND Flash, EEPROM, etc. have individually erasable segments, each of which can be put through a limited number of program/erase or write cycles; after that limit, the device becomes unreliable. In order to wear out the cells in a uniform manner, non-volatile memory and storage products based on these technologies implement a technique called wear leveling. Once a set threshold is reached, wear leveling maps writes of a logical block to a different physical block. This prevents a single physical block from prematurely failing due to a high concentration of writes.

If wear leveling is improperly implemented, attackers may be able to programmatically cause the storage to become unreliable within a much shorter time than would normally be expected.
Example 1

An attacker can render a memory line unusable by repeatedly causing a write to the memory line. Below is example code from [REF-1058] that the user can execute repeatedly to cause line failure. W is the maximum associativity of any cache in the system; S is the size of the largest cache in the system. (attack code)
Example Language: C++
// Do aligned alloc of (W+1) arrays each of size S
while (1) {
  for (ii = 0; ii < W + 1; ii++)
    array[ii].element[0]++;
}
Without wear leveling, the above attack will be successful. Simple randomization of blocks will not suffice, since instead of the original physical block, the randomized physical block will be worn out. (good code)
Example Language: Other
Wear leveling must be used to even out writes to the device.
CWE-1239: Improper Zeroization of Hardware Register
The hardware product does not properly clear sensitive information from built-in registers when the user of the hardware block changes.
Hardware logic operates on data stored in registers local to the hardware block. Most hardware IPs, including cryptographic accelerators, rely on registers to buffer I/O, store intermediate values, and interface with software. As a result, sensitive information such as passwords or encryption keys can exist in locations not transparent to the user of the hardware logic. When a different entity obtains access to the IP due to a change in operating mode or conditions, the new entity can extract information belonging to the previous user if no mechanisms are in place to clear register contents.

It is important to clear information stored in the hardware if a physical attack on the product is detected, or if the user of the hardware block changes. The process of clearing register contents in a hardware IP is referred to as zeroization in standards for cryptographic hardware modules such as FIPS-140-2 [REF-267].
Example 1

Suppose a hardware IP for implementing an encryption routine works as expected, but it leaves the intermediate results in some registers that can be accessed. Exactly why this access happens is immaterial: it might be unintentional, or intentional in the sense that the designer wanted a "quick fix" for something.

Example 2

The example code below [REF-1379] is taken from the SHA256 interface/wrapper controller module of the HACK@DAC'21 buggy OpenPiton SoC. Within the wrapper module there is a set of 16 memory-mapped registers referenced data[0] to data[15]. These registers are 32 bits in size and are used to store the data received on the AXI Lite interface for hashing. Once both the message to be hashed and a request to start the hash computation are received, the values of these registers are forwarded to the underlying SHA256 module for processing. Once forwarded, the values in these registers no longer need to be retained. In fact, if not cleared or overwritten, these sensitive values can be read over the AXI Lite interface, potentially compromising any previously confidential data stored therein. (bad code)
Example Language: Verilog
...
// Implement SHA256 I/O memory map interface
...
// Write side
always @(posedge clk_i)
begin
  if(~(rst_ni && ~rst_3))
  begin
    startHash <= 0;
    newMessage <= 0;
    data[0] <= 0;
    data[1] <= 0;
    data[2] <= 0;
    ...
    data[14] <= 0;
    data[15] <= 0;

In the previous code snippet [REF-1379], there is no data-clearance mechanism for the memory-mapped I/O registers after their use. These registers get cleared only when a reset condition is met, i.e., when either the global negative-edge reset input signal (rst_ni) or the dedicated reset input signal for the SHA256 peripheral (rst_3) is active. In other words, if either of these reset signals is true, the registers will be cleared. In cases where there is no reset, however, these registers retain their values until the next hash operation, and it is during the time between an old hash operation and a new hash operation that the data is open to unauthorized disclosure.

To correct the issue of data persisting between hash operations, the memory-mapped I/O registers need to be cleared once the values written in them have been propagated to the SHA256 module. This could be done, for example, by adding a new condition to zeroize the memory-mapped I/O registers once the hash value is computed, i.e., when the hashValid signal is asserted, as shown in the good code example below [REF-1380]. This fix clears the memory-mapped I/O registers after the data has been provided as input to the SHA engine. (good code)
Example Language: Verilog
...
// Implement SHA256 I/O memory map interface
...
// Write side
always @(posedge clk_i)
begin
  if(~(rst_ni && ~rst_3))
  begin
    startHash <= 0;
    newMessage <= 0;
    data[0] <= 0;
    data[1] <= 0;
    data[2] <= 0;
    ...
    data[14] <= 0;
    data[15] <= 0;
  end
  else if(hashValid && ~hashValid_r)
  begin
    data[0] <= 0;
    data[1] <= 0;
    data[2] <= 0;
    ...
    data[14] <= 0;
    data[15] <= 0;
  end
CWE-1304: Improperly Preserved Integrity of Hardware Configuration State During a Power Save/Restore Operation
The product performs a power save/restore operation, but it does not ensure that the integrity of the configuration state is maintained and/or verified between the beginning and ending of the operation.
Before powering down, the Intellectual Property (IP) saves its current state (S) to persistent storage, such as flash or always-on memory, in order to optimize the restore operation. During this process, an attacker with access to the persistent storage may alter (S) to a configuration that could potentially modify privileges, disable protections, and/or cause damage to the hardware. If the IP does not validate the configuration state stored in persistent memory, then upon regaining power or becoming operational again, the IP could be compromised through the activation of an unwanted or harmful configuration.
Example 1

The following pseudo code demonstrates a power save/restore workflow that may lead to this weakness through a lack of validation of the config state after restore. (bad code)
Example Language: C
void save_config_state()
{
  void* cfg;
  cfg = get_config_state();
  save_config_file(cfg); // persist the state with no integrity protection
  go_to_sleep();
}

void restore_config_state()
{
  void* cfg;
  cfg = get_config_file();
  load_config_file(cfg); // state is loaded without any validation
}

The following pseudo-code is the proper workflow for the integrity-checking mitigation: (good code)
Example Language: C
void save_config_state()
{
  void* cfg;
  void* sha;
  cfg = get_config_state();
  save_config_file(cfg);
  // save hash(cfg) to trusted location
  sha = get_hash_of_config_state(cfg);
  save_hash(sha);
  go_to_sleep();
}

void restore_config_state()
{
  void *cfg;
  void *sha_1, *sha_2;
  cfg = get_config_file();
  // restore hash of config from trusted memory
  sha_1 = get_persisted_sha_value();
  sha_2 = get_hash_of_config_state(cfg);
  if (sha_1 != sha_2) // pseudo-code: compare the hash values, not the pointers
    assert_error_and_halt();
  load_config_file(cfg);
}

It must be noted that in the previous example of good pseudo code, the memory where the hash of the config state is stored must remain trustworthy while the hardware is between the power save and restore states.
CWE-1242: Inclusion of Undocumented Features or Chicken Bits
The device includes chicken bits or undocumented features that can create entry points for unauthorized actors.
A common design practice is to include undocumented bits on a device that can be used to disable certain functional security features. These bits are commonly referred to as "chicken bits". They can facilitate quick identification and isolation of faulty components, of features that negatively affect performance, or of features that do not provide the required controllability for debug and test. Another way to achieve this is through the implementation of undocumented features. An attacker might exploit these interfaces for unauthorized access.
Example 1

Consider a device that comes with various security measures, such as secure boot. The secure-boot process performs firmware-integrity verification at boot time, and this code is stored in a separate SPI-flash device. However, this code contains undocumented "special access features" intended to be used only for performing failure analysis and intended to be unlocked only by the device designer. (bad code)
Example Language: Other
Attackers dump the code from the device and then perform reverse engineering to analyze the code. The undocumented, special-access features are identified, and attackers can activate them by sending specific commands via UART before secure-boot phase completes. Using these hidden features, attackers can perform reads and writes to memory via the UART interface. At runtime, the attackers can also execute arbitrary code and dump the entire memory contents.
Remove all chicken bits and hidden features that are exposed to attackers. Add authorization schemes that rely on cryptographic primitives for access to any features that the manufacturer does not want to expose. Clearly document all interfaces. A sketch of gating such a feature behind an authorized debug state follows.
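As a minimal sketch of such an authorization scheme, the hypothetical module below makes the chicken bit writable only while a debug_unlocked signal is asserted, where that signal is assumed to be driven by an authenticated (e.g., cryptographic challenge-response) unlock flow; all names are illustrative.

module chicken_bit_gate
(
  input      clk,
  input      resetn,
  input      write,
  input      debug_unlocked,  // asserted only after an authenticated unlock flow
  input      chicken_bit_in,
  output reg chicken_bit
);
  always @(posedge clk or negedge resetn)
    if (~resetn)
      chicken_bit <= 1'b0;              // safe default: protections stay enabled
    else if (write && debug_unlocked)
      chicken_bit <= chicken_bit_in;    // writable only in the authorized debug state
endmodule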
CWE-1296: Incorrect Chaining or Granularity of Debug Components
The product's debug components contain incorrect chaining or granularity.
For debugging and troubleshooting a chip, several hardware design elements are often implemented, including Test Access Ports (TAPs) at the SoC level as well as internal TAPs for individual components (e.g., processor cores). Logic errors during design or synthesis could misconfigure the interconnection of these debug components, which could allow unintended access permissions.
Example 1

The following example shows how an attacker can take advantage of incorrect chaining or missing granularity of debug components. In a System-on-Chip (SoC), the user might be able to access the SoC-level TAP with a certain level of authorization. However, this access should not also grant access to all of the internal TAPs (e.g., Core). Separately, if any of the internal TAPs is stitched into the TAP chain when it should not be because of a logic error, then an attacker can access the internal TAPs as well and execute commands there.

As a related example, suppose there is a hierarchy of TAPs (TAP_A is connected to TAP_B and TAP_C, then TAP_B is connected to TAP_D and TAP_E, then TAP_C is connected to TAP_F and TAP_G, etc.). The architecture mandates that the user have one set of credentials for just accessing TAP_A, another set of credentials for accessing TAP_B and TAP_C, etc. However, if, during implementation, the designer mistakenly implements a daisy-chained TAP where all the TAPs are connected in a single TAP chain without the hierarchical structure, the correct granularity of debug components is not implemented and the attacker can gain unauthorized access.

Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Maintenance
This entry is still under development and will continue to see updates and content improvements.
CWE-1254: Incorrect Comparison Logic Granularity
The product's comparison logic is performed over a series of steps rather than across the entire string in one operation. If there is a comparison logic failure on one of these steps, the operation may be vulnerable to a timing attack that can result in the interception of the process for nefarious purposes.
Comparison logic is used to compare a variety of objects, including passwords, Message Authentication Codes (MACs), and responses to verification challenges. When comparison logic is implemented at a finer granularity (e.g., byte-by-byte comparison) and breaks out early in the case of a comparison failure, an attacker can exploit this implementation to identify exactly when the failure occurred. With multiple attempts, the attacker may be able to guess the correct password or challenge response and elevate their privileges. A single-operation comparator, as sketched below, avoids leaking this timing information.
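In hardware, the coarse-grained alternative is to compare the entire value in a single operation so that response timing is independent of where a mismatch occurs. The sketch below assumes a 128-bit secret; the module and signal names are illustrative.

module full_width_compare
(
  input  [127:0] candidate,      // value supplied by the requester
  input  [127:0] stored_secret,  // reference value held in hardware
  output         match
);
  // One comparison over the full width: no per-byte early exit to time.
  assign match = (candidate == stored_secret);
endmodule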
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.
Maintenance
CWE 4.16 removed a demonstrative example for a hardware module because it was inaccurate and unable to be adapted. The CWE team is developing an alternative.
CWE-1292: Incorrect Conversion of Security Identifiers
The product implements a conversion mechanism to map certain bus-transaction signals to security identifiers. However, if the conversion is incorrectly implemented, untrusted agents can gain unauthorized access to the asset.
In a System-On-Chip (SoC), various integrated circuits and hardware engines generate transactions, such as to access assets (reads/writes) or to perform certain actions (e.g., reset, fetch, compute, etc.). Among various types of message information, a typical transaction is comprised of a source identity (to identify the originator of the transaction) and a destination identity (to route the transaction to the respective entity). Sometimes the transactions are qualified with a security identifier, which helps the destination agent decide on the set of allowed actions (e.g., access to an asset for reads and writes).

A typical bus connects several leader and follower agents. Some follower agents implement bus protocols differently from leader agents, so a protocol conversion happens at a bridge to seamlessly connect different protocols on the bus. One example is a system that implements a leader with the Advanced High-performance Bus (AHB) protocol and a follower with the Open-Core Protocol (OCP); an AHB-to-OCP bridge is needed to translate the transaction from one form to the other. A common weakness in this scenario is that the conversion between protocols is implemented incorrectly, whereupon an untrusted agent may gain unauthorized access to an asset.
Example 1

Consider a system that supports AHB, and assume a follower agent that only understands OCP. To connect this follower to the leader, an AHB-to-OCP bridge is introduced. The follower has assets to protect from accesses by untrusted leaders, and it employs policy-based access controls (e.g., for the AES-Key registers used for encryption or decryption). The key is 128 bits, implemented as a set of four 32-bit registers. The key registers are assets, and the register AES_KEY_ACCESS_POLICY is defined to provide the necessary access controls. The AES_KEY_ACCESS_POLICY access-policy register defines which agents, identified by the security identifier in the transaction, can access the AES-key registers. The implemented AES_KEY_ACCESS_POLICY has 4 bits, where each bit, when "Set", allows access to the AES-Key registers by the agent with the corresponding security identifier. Bits 31 through 4 are reserved and not used.

During conversion of the AHB-to-OCP transaction, the security identifier information must be preserved and passed on to the follower correctly. (bad code)
Example Language: Other
In the AHB-to-OCP bridge, the security-identifier conversion is done incorrectly.
Because of the incorrect conversion, the security-identifier information is either lost or modified in such a way that an untrusted leader can access the AES-Key registers. (good code)
Example Language: Other
The conversion of the signals from one protocol (AHB) to another (OCP) must be done while preserving the security identifier correctly, as in the sketch below.
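A minimal sketch of such a conversion, assuming a hypothetical 4-bit security-identifier field on both sides of the bridge; the point is that the identifier is carried through unchanged (or via a verified one-to-one mapping), never widened, truncated, or defaulted. Names and widths are illustrative.

module ahb2ocp_sid
(
  input  [3:0] ahb_sec_id,  // security identifier on the AHB (leader) side
  output [3:0] ocp_sec_id   // security identifier presented on the OCP (follower) side
);
  // Pass the identifier through unchanged; any re-encoding must be one-to-one
  // so that trusted and untrusted identities can never be confused.
  assign ocp_sec_id = ahb_sec_id;
endmodule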
CWE-1290: Incorrect Decoding of Security Identifiers
The product implements a decoding mechanism to decode certain bus-transaction signals to security identifiers. If the decoding is implemented incorrectly, untrusted agents can gain unauthorized access to the asset.
In a System-On-Chip (SoC), various integrated circuits and hardware engines generate transactions, such as to access assets (reads/writes) or to perform certain actions (e.g., reset, fetch, compute, etc.). Among various types of message information, a typical transaction is comprised of a source identity (to identify the originator of the transaction) and a destination identity (to route the transaction to the respective entity). Sometimes the transactions are qualified with a security identifier, which helps the destination agent decide on the set of allowed actions (e.g., access to an asset for reads and writes). A decoder decodes the bus transactions to map security identifiers into the necessary access controls/protections.

A common weakness in this scenario is incorrect decoding, in which an untrusted agent's security identifier is decoded into a trusted agent's security identifier. Thus, an untrusted agent previously without access to an asset can now gain access to the asset.
Example 1

Consider a system that has four bus masters and a decoder. The decoder is supposed to decode every bus transaction and assign the corresponding security identifier. The security identifier is used to determine accesses to the assets. The bus transaction carries the security information in Bus_transaction[15:14], i.e., bits 15 through 14 contain the security-identifier information. Each of the masters Master_0 through Master_3 is assigned the two-bit security identifier matching its number; Master_1 is a trusted master, while Master_3 is untrusted.

The assets are the AES-Key registers for encryption or decryption. The key is 128 bits, implemented as a set of four 32-bit registers. The AES_KEY_ACCESS_POLICY register is used to define which agents, by security identifier, can access the AES-key registers. The security-identifier field in the policy is 4 bits (bits 3 through 0), where each bit corresponds to one security identifier, so only 4 security identifiers can be granted access to the AES-key registers. When a bit is set (i.e., "1"), the agent whose identity matches that bit number is allowed the respective action; when clear (i.e., "0"), that agent is disallowed the action.
The following Pseudo code outlines the process of checking the value of the Security Identifier within the AES_KEY_ACCESS_POLICY register: (informative)
Example Language: Other
If (AES_KEY_ACCESS_POLICY[Security_Identifier] == "1")
Allow access to AES-Key registers
Else
Deny access to AES-Key registers
Below is a decoder's Pseudo code that only checks for bit [14] of the bus transaction to determine what Security Identifier it must assign. (bad code)
Example Language: Other
If (Bus_transaction[14] == "1")
Security_Identifier == "1"
Else
Security_Identifier == "0"
The security identifier is two bits, but the decoder code above only checks the value of one bit. Two masters have bit 0 of their security identifier set to "1" - Master_1 and Master_3. Master_1 is trusted, while Master_3 is not. The code above therefore allows the untrusted agent, Master_3, access to the AES-Key registers in addition to the intended trusted agent, Master_1.
(good code)
Example Language: Other
If (Bus_transaction[15:14] == "00")
Security_Identifier == "0"
If (Bus_transaction[15:14] == "01")
Security_Identifier == "1"
If (Bus_transaction[15:14] == "10")
Security_Identifier == "2"
If (Bus_transaction[15:14] == "11")
Security_Identifier == "3"
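For completeness, a hypothetical RTL rendering of the corrected decoder: both security bits of the transaction select the security identifier, so no two masters can alias to the same identifier. Names are illustrative.

module sid_decoder
(
  input      [15:0] bus_transaction,
  output reg [1:0]  security_identifier
);
  always @* begin
    case (bus_transaction[15:14])   // decode both security bits
      2'b00: security_identifier = 2'd0;
      2'b01: security_identifier = 2'd1;
      2'b10: security_identifier = 2'd2;
      2'b11: security_identifier = 2'd3;
    endcase
  end
endmodule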
CWE-276: Incorrect Default Permissions
During installation, installed file permissions are set to allow anyone to modify those files.
Note: this is a curated list of examples for users to understand the variety of ways in which this weakness can be introduced. It is not a complete list of all CVEs that are related to this CWE entry.